The visualSelectGroup wasn't getting set (since our DRI drivers don't
use it), and since it's the top priority in the sort order, you
got random sorting of your visuals unless malloc really returned you
new memory. This manifested as Xephyr -glamor rendering to a
multisampled window on my system, which, as you might guess, performed
slightly worse than expected.
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
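A minimal standalone C sketch of the failure mode (illustrative only, not
xserver code): sorting on a field that was never written makes the order
depend on whatever malloc() handed back.

    #include <stdio.h>
    #include <stdlib.h>

    struct visual {
        int select_group;   /* primary sort key -- never initialized, like visualSelectGroup was */
        int depth;          /* secondary key */
    };

    static int cmp(const void *a, const void *b)
    {
        const struct visual *va = a, *vb = b;
        if (va->select_group != vb->select_group)
            return va->select_group - vb->select_group;
        return va->depth - vb->depth;
    }

    int main(void)
    {
        struct visual *v = malloc(4 * sizeof *v);   /* select_group left as heap garbage */
        for (int i = 0; i < 4; i++)
            v[i].depth = 24 + i;
        qsort(v, 4, sizeof *v, cmp);   /* reading the garbage is UB; order is effectively random */
        for (int i = 0; i < 4; i++)
            printf("depth %d\n", v[i].depth);
        free(v);
        return 0;
    }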
It seems the analyzer can't comprehend dixSetPrivate().
quartz.c:119:12: warning: Potential leak of memory pointed to by 'displayInfo'
return quartzProcs->AddScreen(index, pScreen);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Signed-off-by: Jeremy Huddleston Sequoia <jeremyhu@apple.com>
(cherry picked from commit 64327226dd)
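A standalone analog of the pattern (hypothetical names, not the quartz.c
code): ownership of the allocation transfers into the container, so
nothing actually leaks, but an analyzer that cannot see through the
setter, as with dixSetPrivate()'s private storage, still reports one.

    #include <stdlib.h>

    struct screen { void *private; };

    static void set_private(struct screen *s, void *val)   /* stands in for dixSetPrivate() */
    {
        s->private = val;
    }

    static int add_screen(struct screen *s)
    {
        int *info = malloc(sizeof *info);   /* plays the role of displayInfo */
        if (!info)
            return 0;
        set_private(s, info);   /* ownership handed to the screen: no leak */
        return 1;               /* ...but the analyzer may flag 'info' here */
    }

    int main(void)
    {
        struct screen s = { NULL };
        int ok = add_screen(&s);
        free(s.private);
        return !ok;
    }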
Avoids potential memory corruption from bad requests
Signed-off-by: Jeremy Huddleston Sequoia <jeremyhu@apple.com>
(cherry picked from commit a03f096a85)
Return an error to the caller rather than crashing the server on
invalid screens.
Signed-off-by: Jeremy Huddleston Sequoia <jeremyhu@apple.com>
(cherry picked from commit b3572c0d1a)
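A hedged sketch of the shape of such a fix (hypothetical names, not the
actual request handlers): validate the client-supplied screen index and
hand back a protocol error instead of indexing out of bounds.

    #include <stdio.h>

    #define Success  0   /* stand-ins for the X protocol result codes */
    #define BadValue 2

    static int check_screen(int index, int num_screens)
    {
        if (index < 0 || index >= num_screens)
            return BadValue;   /* an error to the client, not a server crash */
        return Success;
    }

    int main(void)
    {
        printf("%d\n", check_screen(5, 2));   /* prints 2 (BadValue) */
        return 0;
    }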
x-hook.c:96:9: warning: Called function pointer is an uninitialized pointer value
(*fun[i])(arg, data[i]);
^~~~~~~~~~~~~~~~~~~~~~~
1 warning generated.
Signed-off-by: Jeremy Huddleston Sequoia <jeremyhu@apple.com>
(cherry picked from commit 959e8f23af)
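A standalone sketch of the bug class behind the warning (hypothetical
names, not the x-hook.c code): the call loop must be bounded by the
number of slots actually filled, or it invokes an uninitialized function
pointer.

    #include <stdio.h>
    #include <stddef.h>

    typedef void (*hook_fn)(void *arg, void *data);

    struct hook { hook_fn fun; void *data; struct hook *next; };

    static void run_hooks(struct hook *list, void *arg)
    {
        hook_fn fun[32];
        void   *data[32];
        size_t  n = 0;

        for (struct hook *h = list; h && n < 32; h = h->next) {
            fun[n]  = h->fun;
            data[n] = h->data;
            n++;
        }
        for (size_t i = 0; i < n; i++)   /* bounded by n, never the array size */
            (*fun[i])(arg, data[i]);
    }

    static void say(void *arg, void *data)
    {
        printf("%s %s\n", (char *) arg, (char *) data);
    }

    int main(void)
    {
        struct hook h = { say, "world", NULL };
        run_hooks(&h, "hello");
        return 0;
    }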
X11Controller.m:938:1: warning: method 'applicationWillTerminate:' could be declared with attribute 'noreturn'
[-Wmissing-noreturn,Semantic Issue]
{
^
Signed-off-by: Jeremy Huddleston Sequoia <jeremyhu@apple.com>
(cherry picked from commit f79af19417)
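The warning only asks for an attribute; a C sketch of the shape of the
fix (the real method is Objective-C in X11Controller.m):

    #include <stdlib.h>

    __attribute__((noreturn))
    static void application_will_terminate(void)   /* hypothetical C analog */
    {
        /* ... flush state, close connections ... */
        exit(0);   /* never returns, and now the compiler knows it */
    }

    int main(void)
    {
        application_will_terminate();
    }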
./darwinfb.h:28:9: warning: '_DARWIN_FB_H' is used as a header guard here, followed by #define of a different macro
[-Wheader-guard,Lexical or Preprocessor Issue]
^~~~~~~~~~~~
./darwinfb.h:29:9: note: '_DARWIN_DB_H' is defined here; did you mean '_DARWIN_FB_H'? [Lexical or Preprocessor Issue]
^~~~~~~~~~~~
_DARWIN_FB_H
Signed-off-by: Jeremy Huddleston Sequoia <jeremyhu@apple.com>
(cherry picked from commit 2e3ebec952)
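The fix is simply to make the #define match the #ifndef:

    #ifndef _DARWIN_FB_H
    #define _DARWIN_FB_H   /* previously mistyped as _DARWIN_DB_H */
    /* ... declarations ... */
    #endif /* _DARWIN_FB_H */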
ProcessInputEvent() resets the device idle times. If idle time was higher than
the lower bracket, this should trigger an event in the idle time wakeup
handler.
If processing is slow, the idle time may advance past the lower bracket
between the reset and the time the BlockHandler is called. In that case, we'd
never schedule a wakeup to handle the event, causing us to randomly miss
events.
Ran tests with a negative transition trigger at 5 ms with 200 repeats of
the test and it succeeded. Anything below that gets tricky: making sure
the server sees the same idle time the client usleeps for.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Keith Packard <keithp@keithp.com>
(cherry picked from commit f36f5a65f6)
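A hedged sketch of the race and the fix (hypothetical names, not the
actual sync code): compute the block timeout from the idle time seen now,
and if the bracket was already crossed between the reset in
ProcessInputEvent() and the BlockHandler, wake immediately instead of
never.

    #include <stdio.h>
    #include <stdint.h>

    static uint64_t idle_ms(void)   /* stub: ms since the last input event */
    {
        return 7;                   /* pretend processing was slow */
    }

    static uint64_t block_timeout(uint64_t bracket_less_ms)
    {
        uint64_t idle = idle_ms();
        if (idle >= bracket_less_ms)
            return 0;                      /* bracket already passed: wake now */
        return bracket_less_ms - idle;     /* wake exactly when it will pass */
    }

    int main(void)
    {
        printf("timeout %llu ms\n", (unsigned long long) block_timeout(5));
        return 0;   /* prints 0: the wakeup handler runs right away */
    }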
This is going to be exposed (and not the old entrypoint) for some DRI
drivers once the megadrivers series lands, and the plan is to
eventually transition all drivers to that. Hopefully this is
unobtrusive enough to merge to stable X servers so that they can be
compatible with new Mesa versions.
v2: typo fix in the comment
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
Consider all attached output providers when looking for changed outputs and
crtcs.
Reviewed-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Michal Srb <msrb@suse.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
(cherry picked from commit 85ae44f07f)
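A hedged sketch of the idea (hypothetical list and field names): the
change scan walks the protocol screen and every attached provider, not
the protocol screen alone.

    #include <stdio.h>

    struct screen {
        struct screen *next_output_slave;   /* chain of attached providers */
        int rr_changed;
    };

    static int any_randr_change(struct screen *protocol)
    {
        for (struct screen *s = protocol; s; s = s->next_output_slave)
            if (s->rr_changed)
                return 1;
        return 0;
    }

    int main(void)
    {
        struct screen slave    = { NULL, 1 };   /* the change is on a slave */
        struct screen protocol = { &slave, 0 };
        printf("%d\n", any_randr_change(&protocol));   /* prints 1 */
        return 0;
    }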
Send the RRResourceChangeNotify event when a provider, output or crtc is
created or destroyed, i.e. when the list of resources returned by
RRGetScreenResources and RRGetProviders changes.
[airlied: move structure member to avoid ABI breakage]
Reviewed-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Michal Srb <msrb@suse.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
(cherry picked from commit a9ca93dcf9)
Send the RRProviderChangeNotify event when a provider becomes an output
source or offload sink.
[airlied: unbreak the ABI for 1.14 backport.]
Reviewed-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Michal Srb <msrb@suse.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
(cherry picked from commit 4b39ea8a91)
When we disconnect an output/offload slave, set the changed bits so a
later TellChanged can do something.
Then, when we remove a GPU slave device, send a change notification to
the protocol screen.
This allows hot-unplugged USB devices to disappear in clients.
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
(cherry picked from commit 9d26e8eaf5)
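A hedged sketch of the flow (hypothetical names): the detach marks the
change on the protocol screen and then tells clients, so the unplugged
device's resources visibly disappear.

    struct screen { struct screen *current_master; int rr_changed; };

    static void tell_changed(struct screen *s)   /* delivers the RandR events */
    {
        s->rr_changed = 0;
    }

    static void detach_gpu_slave(struct screen *slave)
    {
        struct screen *protocol = slave->current_master;
        protocol->rr_changed = 1;     /* so a later TellChanged can act on it */
        slave->current_master = NULL;
        tell_changed(protocol);       /* clients see the device disappear */
    }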
We don't want to know about changes on the non-protocol screen; we will
fix up SetChanged to make sure non-protocol screens update the protocol
screens when they have a change.
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
(cherry picked from commit e233bda3e7)
When SetChanged is called, we now modify the main protocol screen, not
the GPU screen, since change reporting should work at the protocol level.
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
(cherry picked from commit b3f70f38ed)
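A hedged sketch of the redirect (simplified, with hypothetical field
spellings): the changed flag always lands on the protocol screen.

    struct screen { struct screen *current_master; int is_gpu; int rr_changed; };

    static void rr_set_changed(struct screen *s)
    {
        /* GPU screens forward to their master; the flag lives at the
         * protocol level, which is where change reporting happens */
        struct screen *protocol = s->is_gpu ? s->current_master : s;
        protocol->rr_changed = 1;
    }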
Introduce a wrapper interface so we can fix things up for multi-gpu
situations later.
This just introduces the API for now.
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
(cherry picked from commit f9c8248b83)
Commit 8f4640bdb9 fixed a bit of a
chicken-and-egg problem by detaching GPU screens when their providers
are destroyed, which happens before CloseScreen is called. However,
this created a new problem: the GPU screen tears down its RandR crtc
objects during CloseScreen and if one of them is active, it tries to
detach the scanout pixmap then. This crashes because
RRCrtcDetachScanoutPixmap tries to get the master screen's screen
pixmap, but crtc->pScreen->current_master is already NULL at that
point.
It doesn't make sense for an unbound GPU screen to still be scanning
out its former master screen's pixmap, so detach them first when the
provider is destroyed.
Signed-off-by: Aaron Plattner <aplattner@nvidia.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
(cherry picked from commit bdd1e22cbd)
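A hedged sketch of the ordering fix (hypothetical names): scanout pixmaps
are detached while current_master is still valid, rather than from
CloseScreen after it has been cleared.

    struct crtc   { void *scanout_pixmap; };
    struct screen { struct screen *current_master; struct crtc *crtcs; int num_crtcs; };

    static void detach_scanout(struct crtc *c)   /* needs a live current_master */
    {
        c->scanout_pixmap = NULL;
    }

    static void provider_destroy(struct screen *gpu)
    {
        for (int i = 0; i < gpu->num_crtcs; i++)
            if (gpu->crtcs[i].scanout_pixmap)
                detach_scanout(&gpu->crtcs[i]);
        gpu->current_master = NULL;   /* only cleared once nothing scans out */
    }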
As of server 1.13, systems with DRM and Udev will have BUS_PLATFORM as
their primary bus type. However, drivers not implementing a
platformProbe function will still create entities of type BUS_PCI. We
need to account for this when checking for the primary entity.
Signed-off-by: Connor Behan <connor.behan@gmail.com>
Acked-by: Tormod Volden <debian.tormod@gmail.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
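A hedged sketch of the check (hypothetical names): a BUS_PCI entity must
still match a BUS_PLATFORM primary when it refers to the same device.

    enum bus_type { BUS_PCI, BUS_PLATFORM };

    static int is_primary_entity(enum bus_type entity, enum bus_type primary,
                                 int same_device)
    {
        if (!same_device)
            return 0;
        if (entity == primary)
            return 1;
        /* a driver without a platformProbe creates a BUS_PCI entity even
         * when the primary bus is BUS_PLATFORM */
        return entity == BUS_PCI && primary == BUS_PLATFORM;
    }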