initMgpu already builds the GL formats, so remove the temporary creation of
an allocator and renderer in onReady() that only built glFormats.
Also guard the creation of renderers and allocators with an if check, to make
it less likely to recreate one that already exists.
After a VT switch or sleep/resume, the kernel discards all DRM property
blobs. restoreAfterVT() rebuilds the atomic commit with the in-memory
HDR metadata, but prepareConnector() skips blob creation because
STATE.committed (the "what changed" bitmask) was cleared after the last
pre-sleep commit.
During a modeset the kernel state is fully reset, so all properties must
be re-sent regardless of the committed flags. Add a data.modeset guard
to bypass the committed check, matching how max_bpc, colorspace, and VRR
are already handled unconditionally.
The existing has_value() check prevents false positives: non-HDR
modesets (resolution changes etc.) don't populate data.hdrMetadata, so
the block is still skipped.
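A minimal sketch of that guard; SPendingState, COMMITTED_HDR and shouldSendHDRBlob are illustrative stand-ins for the real aquamarine state, not the exact API:

```cpp
#include <cstdint>
#include <optional>

// stand-in for the EDID-derived HDR mastering/luminance data
struct SHDRMetadata {};

struct SPendingState {
    uint32_t                    committed = 0; // bitmask of properties changed since the last commit
    std::optional<SHDRMetadata> hdrMetadata;   // only populated for HDR modesets
};

constexpr uint32_t COMMITTED_HDR = 1 << 0; // illustrative flag value

bool shouldSendHDRBlob(const SPendingState& state, bool modeset) {
    // a modeset resets all kernel-side connector state, so the blob has to be
    // re-sent even when the committed bitmask says nothing changed; non-HDR
    // modesets still skip the blob because hdrMetadata was never populated
    return ((state.committed & COMMITTED_HDR) || modeset) && state.hdrMetadata.has_value();
}
```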
Currently aquamarine initializes all the secondary renderers and keeps them alive all the time, even when no output on that GPU is enabled. Keeping a secondary renderer alive prevents the unused GPU from powering down completely, which makes battery life significantly worse on e.g. Optimus laptops with DP/HDMI ports wired to the dGPU.
This patch initializes secondary renderers only when at least one connected output is enabled, and deinitializes them when no enabled outputs remain.
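A rough sketch of the intended lifecycle, assuming a per-GPU renderer handle and a counter of enabled outputs on that GPU (the names are illustrative, not the real aquamarine types):

```cpp
#include <memory>

// stand-in for the resources a secondary-GPU renderer keeps alive
struct CDRMRendererStub {};

struct SSecondaryGPU {
    std::shared_ptr<CDRMRendererStub> renderer;
    int                               enabledOutputs = 0;

    void onOutputEnabled() {
        // create the renderer only when the first output on this GPU is enabled
        if (enabledOutputs++ == 0 && !renderer)
            renderer = std::make_shared<CDRMRendererStub>();
    }

    void onOutputDisabled() {
        // drop the renderer once no enabled outputs remain, letting the GPU power down
        if (--enabledOutputs <= 0) {
            enabledOutputs = 0;
            renderer.reset();
        }
    }
};
```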
Some high-resolution displays (e.g. Apple Studio Display 5K) expose
multiple DRM connectors in a tile group. However, when the driver
handles tiling internally, a single connector will advertise the full
panel resolution, and its other connectors will be inoperative and
redundant.
To avoid ghost outputs from those dead tiles, we can filter out any tiled
connector whose tile group also contains a connector that offers the full
resolution by itself.
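A rough sketch of the filtering idea, assuming we can read each connector's tile group and largest advertised mode, plus the full tiled panel size (all names here are illustrative):

```cpp
#include <cstdint>
#include <vector>

// illustrative per-connector view: its DRM tile group (0 = untiled) and the
// largest mode it advertises
struct SConnectorInfo {
    uint32_t tileGroup = 0;
    uint32_t maxW = 0, maxH = 0;
};

// a connector is a "ghost" tile if another connector in the same tile group
// already advertises the full panel resolution by itself
bool isGhostTile(const SConnectorInfo& conn, const std::vector<SConnectorInfo>& all, uint32_t fullW, uint32_t fullH) {
    if (conn.tileGroup == 0)
        return false;
    for (const auto& other : all) {
        if (&other == &conn || other.tileGroup != conn.tileGroup)
            continue;
        if (other.maxW >= fullW && other.maxH >= fullH)
            return true; // the driver handles tiling internally; skip this dead tile
    }
    return false;
}
```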
* atomic: properly check min/max bpc values
Properly use maxBpcBounds[0] and clamp the value in getMaxBpc; let's not go
below or beyond what the hardware supports.
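Sketch of the clamping, assuming maxBpcBounds[0]/[1] mirror the min/max of the connector's "max bpc" property range (the signature is illustrative):

```cpp
#include <algorithm>
#include <cstdint>

// keep the requested bpc within [min, max] as reported by the
// connector's "max bpc" property range
uint64_t getMaxBpc(uint64_t wanted, const uint64_t (&maxBpcBounds)[2]) {
    return std::clamp(wanted, maxBpcBounds[0], maxBpcBounds[1]);
}
```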
* atomic: actually set blobid in prepareGammaBlob
Set the actual blob ID to zero, not the local copied pointer.
* atomic: add hdr blob to apply and rollback aswell
Add the missing HDR blob to apply and rollback.
* drm: don't leak modeinfo
Free currentModeInfo when done with it.
EGL_CONTEXT_RELEASE_BEHAVIOR_KHR determines what happens with implicit
flushes when the current context changes. In a multi-GPU scenario we switch
contexts frequently when blitting content; while we still rely on explicit
sync fences, the implicit flush destroys driver optimisations.
Setting it to EGL_CONTEXT_RELEASE_BEHAVIOR_NONE_KHR essentially means: just
swap contexts and continue processing on the next context.
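A sketch of requesting that behavior at context creation, assuming the display advertises EGL_KHR_context_flush_control (createNoFlushContext and the GLES version attribs are illustrative):

```cpp
#include <EGL/egl.h>
#include <EGL/eglext.h>

// create a GLES context that skips the implicit flush on context release;
// explicit sync fences already order the blit work between GPUs
EGLContext createNoFlushContext(EGLDisplay dpy, EGLConfig cfg, EGLContext shared) {
    const EGLint attribs[] = {
        EGL_CONTEXT_MAJOR_VERSION, 3,
        EGL_CONTEXT_MINOR_VERSION, 2,
        EGL_CONTEXT_RELEASE_BEHAVIOR_KHR, EGL_CONTEXT_RELEASE_BEHAVIOR_NONE_KHR,
        EGL_NONE,
    };
    return eglCreateContext(dpy, cfg, shared, attribs);
}
```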
* backend: read idle timer
The man pages for timerfd_create and read state the following:
timerfd_create creates a new timer object and returns a file descriptor
that can be used to read the number of expirations that have occurred.
The FD becomes readable when the timer expires.
read() clears the "readable" state of the FD.
So most likely this has only somewhat worked so far because of the
updateIdleTimer() function.
* backend: check if fd is readable, log otherwise
Ensure we don't accidentally trigger a blocking read() on a non-readable FD:
log an error if this occurs, otherwise read it.
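A sketch of the poll-then-read pattern described in these two changes (drainIdleTimer is an illustrative name, not the actual backend function):

```cpp
#include <cstdint>
#include <cstdio>
#include <poll.h>
#include <unistd.h>

// poll() with a zero timeout tells us whether the fd is readable without
// blocking; read() then consumes the expiration count and clears that state
void drainIdleTimer(int timerFD) {
    pollfd pfd = {.fd = timerFD, .events = POLLIN, .revents = 0};
    if (poll(&pfd, 1, 0) <= 0 || !(pfd.revents & POLLIN)) {
        fprintf(stderr, "idle timerfd fired but is not readable\n");
        return;
    }
    uint64_t expirations = 0;
    if (read(timerFD, &expirations, sizeof(expirations)) != sizeof(expirations))
        fprintf(stderr, "failed to read idle timerfd expirations\n");
}
```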
On FreeBSD, '/dev/dri/card*' entries are always symbolic links pointing to '/dev/drm/*',
so the output device is never found when `AQ_DRM_DEVICES` is set, because the explicitly
given device path gets converted to its canonical file path.
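A sketch of how the comparison could canonicalize both sides so the symlink still matches (sameDRMDevice is an illustrative helper, not the actual code):

```cpp
#include <filesystem>
#include <string>
#include <system_error>

// compare a user-supplied AQ_DRM_DEVICES entry with a device path by resolving
// both to their canonical targets, so a /dev/dri/card* symlink (pointing at
// /dev/drm/* on FreeBSD) still matches
bool sameDRMDevice(const std::string& configuredPath, const std::string& devicePath) {
    std::error_code ec1, ec2;
    const auto a = std::filesystem::canonical(configuredPath, ec1);
    const auto b = std::filesystem::canonical(devicePath, ec2);
    return !ec1 && !ec2 && a == b;
}
```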
Originally, disconnected monitors were not removed from Hyprland correctly,
and duplicates were created when an external monitor was reattached,
leading to invalid behavior when switching (an empty desktop is visible).
Now removed monitors are explicitly disconnected during connector
scanning.
evdi/displaylink devices don't give us a render node at all, and on asahi it
was hard to associate a render node with its display node/card, so we fall
back to the first one found when that occurs.
It might be more appropriate to figure out a proper way to deal with these
cases rather than work around every single driver that doesn't provide render
nodes, but so far only evdi seems affected.
The problem that occurred was a 3-GPU situation: the evdi card ended up using
the render node of a completely different GPU because it fell back to that.
If the backend now gets a render node, it won't call the authmagic on the
display node, which makes dumb buffer creation fail on certain drivers. So
reopen the display node during the creation of the CDRMDumbAllocator.
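A minimal sketch of that reopen, assuming the allocator knows the display node path (openDumbAllocatorFD and the example path are illustrative):

```cpp
#include <fcntl.h>

// dumb buffers can only be created on a primary ("card") node, so the dumb
// allocator opens the display node itself instead of reusing a render-node fd
int openDumbAllocatorFD(const char* displayNodePath /* e.g. "/dev/dri/card1" */) {
    return open(displayNodePath, O_RDWR | O_CLOEXEC);
}
```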
* renderer: use rendernode if available
Init the CDRMRenderer on the render node if available, otherwise fall back
to the display node.
* backend: use rendernode if available
Use the render node, if available, for the backends, or fall back to the
display node. Make CDRMRenderer use the backend FD.
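A sketch of that preference using libdrm's drmGetRenderDeviceNameFromFd to derive the render node from an already-open primary fd, falling back to the display node when none exists (the helper name is illustrative):

```cpp
#include <cstdlib>
#include <fcntl.h>
#include <xf86drm.h>

// prefer the render node tied to the primary fd; drivers like evdi return no
// render node, in which case the display (primary) node is reused
int openRenderNodeOrFallback(int primaryFD, const char* displayNodePath) {
    char* renderName = drmGetRenderDeviceNameFromFd(primaryFD);
    if (!renderName)
        return open(displayNodePath, O_RDWR | O_CLOEXEC); // no render node: fall back
    const int fd = open(renderName, O_RDWR | O_CLOEXEC);
    free(renderName);
    return fd;
}
```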
nvidia as the main GPU can't create linear modifiers and will give us a null
bo if forced to linear; meanwhile, without linear modifiers, blitting falls
back to CPU copying, which is slow. So force linear modifiers, try to create
the bo, and if all else fails try again without. This way intel/amd as the
main GPU with nvidia as the dGPU will create linear bos, and if nvidia is the
main GPU it will use CPU copying and still work, just a bit slower.
Hide all of this behind the AQ_FORCE_LINEAR_BLIT env var, since it's a bit
hackish.
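A sketch of the try-linear-then-fallback allocation with standard gbm calls; the exact gating semantics of AQ_FORCE_LINEAR_BLIT and the surrounding names are illustrative:

```cpp
#include <cstdint>
#include <cstdlib>
#include <drm_fourcc.h>
#include <gbm.h>

// try a linear bo first so the secondary GPU can sample it directly; if the
// driver (e.g. nvidia as the main GPU) returns null for forced linear, fall
// back to a plain allocation and accept the slower CPU-copy blit path
gbm_bo* allocateBlitBo(gbm_device* dev, uint32_t w, uint32_t h, uint32_t fmt) {
    if (getenv("AQ_FORCE_LINEAR_BLIT")) {
        const uint64_t linear = DRM_FORMAT_MOD_LINEAR;
        if (auto* bo = gbm_bo_create_with_modifiers(dev, w, h, fmt, &linear, 1))
            return bo;
    }
    return gbm_bo_create(dev, w, h, fmt, GBM_BO_USE_RENDERING);
}
```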
* core: use -Wpedantic and fix warnings
Don't use anonymous structs; ISO C++ forbids them. Compound literals are a
C99 thing; use C++20 designated initializers instead.
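For example, with a simple aggregate (SColor and useColor are just illustrative):

```cpp
struct SColor {
    float r = 0, g = 0, b = 0, a = 0;
};

inline void useColor(const SColor&) {}

void example() {
    // C99 compound literal, rejected by -Wpedantic in C++:
    //   useColor((SColor){.r = 1.f, .a = 1.f});

    // C++20 designated initializer on a named aggregate instead:
    useColor(SColor{.r = 1.f, .a = 1.f});
}
```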
* flake.lock: update
* core: rename .bits to .values
Make the name more appropriate to its usage.
---------
Co-authored-by: Mihai Fufezan <mihai@fufexan.net>
* feat: only do modeset when the mode is different from the current one
* style: remove braces around 1-line if
* fmt: run clang-format
* drm: add reset function back and remove reset from start
* drm: fix dpms on fastboot
* fmt: run clang-format
* Update Atomic.cpp