This also avoids a deadlock when calling activateX: before the server
thread has initialized
Signed-off-by: Jeremy Huddleston <jeremyhu@apple.com>
(cherry picked from commit cb6a32da27)
Check for identifier first and bail if it's missing (also remove the current
identifier check after we've already bailed due to missing identifiers)
If a driver is missing, warn but also say that we may have added this device
already. I see too many bug reports with incorrectly shortened log files.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Cyril Brulebois <kibi@debian.org>
(cherry picked from commit 75953ccb9e)
Compiler warning:
xinput.c:272: warning: dereferencing pointer 'e' does break strict-aliasing rules
The code itself is the usual XInput client-side code:
XEvent event;
XDeviceMotionEvent *e = (XDeviceMotionEvent *)&event;
XNextEvent(display, &event);
printf("%d\n", e->type);
Since XDeviceMotionEvent is not guaranteed to be the same size as XEvent, clients
must use pointer aliasing as above when using the XNextEvent API. Disable
strict aliasing for this example.
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Gaetan Nadon <memsize@videotron.ca>
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Signed-off-by: Gaetan Nadon <memsize@videotron.ca>
Signed-off-by: Keith Packard <keithp@keithp.com>
(cherry picked from commit 3aca819940)
which was added in commit:
dmx: Build fix for -Werror=implicit-function-declaration
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Signed-off-by: Gaetan Nadon <memsize@videotron.ca>
Signed-off-by: Keith Packard <keithp@keithp.com>
(cherry picked from commit 2c1d0a539c)
The only kdrive server we probably care about anymore is Xephyr,
and this screen enable/disable code totally breaks it in multi-screen mode.
When you are in one screen the other stops updating.
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=757457
Signed-off-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
(cherry picked from commit 98c4a888a4)
If pGCPriv->flags == 2, we end up assigning the freed pGCPriv->XAAOps;
avoid this by clearing the flags in the to-be-destroyed pGCPriv.
Reported by Coverity.
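For illustration only, a minimal sketch of the idea under invented struct
definitions (not the actual XAAGC.c code): clear the flags when tearing the
private down so a later "flags == 2" check cannot reinstall the freed ops.

    #include <stdlib.h>

    struct xaa_ops { int dummy; };
    struct xaa_gc_priv { int flags; struct xaa_ops *XAAOps; };

    static void destroy_gc_priv(struct xaa_gc_priv *pGCPriv)
    {
        free(pGCPriv->XAAOps);
        pGCPriv->XAAOps = NULL;
        pGCPriv->flags = 0;   /* prevent the "flags == 2" path from
                               * reassigning the freed XAAOps later */
    }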
Signed-off-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Jeremy Huddleston <jeremyhu@apple.com>
(cherry picked from commit 1049139499)
On RHEL5, anaconda sets up an xorg.conf with a fixed 800x600 mode in it, and
we run radeonfb and fbdev since ati won't work in userspace due to domain
issues in the older codebase.
On certain pseries blades the built-in KVM can't accept an 800x600-43 mode;
it requires the 800x600-60 mode, so we have to have the kernel radeonfb
driver reject the 800x600-43 mode when it sees it. However, fbdev then
doesn't try any of the other 800x600 modes in the mode list, and we end up
with a default 640x480 mode we don't want.
This patch changes the mode validation loop to continue with the other
matching modes to find one that works.
v2: move code around to avoid extra loop, after comment from Jamey.
v3: move loop setup back into loop as per Jeremy's review.
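A rough, self-contained sketch of the loop change under invented types (not
the real fbdev/xf86 mode-list code): keep scanning for other modes with the
same name instead of giving up after the first one the driver rejects.

    #include <stddef.h>
    #include <string.h>

    struct mode { const char *name; struct mode *next; };

    static struct mode *
    find_valid_mode(struct mode *modes, const char *name,
                    int (*check)(const struct mode *))
    {
        struct mode *m;
        for (m = modes; m != NULL; m = m->next) {
            if (strcmp(m->name, name) != 0)
                continue;            /* different resolution, skip        */
            if (check(m))
                return m;            /* first variant the driver accepts  */
            /* rejected (e.g. 800x600-43): keep going instead of bailing  */
        }
        return NULL;                 /* caller falls back to a default    */
    }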
Signed-off-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Jamey Sharp <jamey@minilop.net>
Reviewed-by: Jeremy Huddleston <jeremyhu@apple.com>
(cherry picked from commit 22605effd1)
Fixes Sun cc warning that was recently elevated to error by the
stricter default CFLAGS changes to xorg-macros:
"loadmod.c", line 914: improper pointer/integer combination: op "<"
Should have been changed when commit ab7f057ce9 changed the
LoaderOpen return type from int to void *.
Changes log message when file is found but dlopen() fails from:
(EE) LoadModule: Module dbe does not have a dbeModuleData data object.
(EE) Failed to load module "dbe" (invalid module, 0)
to:
(EE) Failed to load module "dbe" (loader failed, 7)
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
(cherry picked from commit e4dcf580f0)
Commit f9e3a2955d, which removed the MAXSCREEN limit, left the screen
number unbounded and allowed any positive int for a screen number:
Xvfb :1 -screen 2147483647 1024x1024x8
Fatal server error:
Not enough memory for screen 2147483647
Found by Parfait 0.3.7:
Error: Integer overflow (CWE 190)
Integer parameter of memory allocation function realloc() may overflow due to multiplication with constant value 1112
at line 293 of hw/vfb/InitOutput.c in function 'ddxProcessArgument'.
Since the X11 connection setup only has a CARD8 for number of SCREENS,
limit to 255 screens, which is also low enough to avoid overflow on the
sizeof(*vfbScreens) * (screenNum + 1) calculation for realloc.
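A generic sketch of the kind of bounds check described (the function and
macro names below are made up, not the hw/vfb/InitOutput.c code):

    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_VFB_SCREENS 255   /* connection setup carries a CARD8 count */

    static void *
    grow_screen_array(void *screens, size_t elem_size, int screenNum)
    {
        if (screenNum < 0 || screenNum >= MAX_VFB_SCREENS) {
            fprintf(stderr, "Invalid screen number %d\n", screenNum);
            return NULL;
        }
        /* screenNum + 1 is now at most 255, so the multiplication in
         * realloc's size argument cannot overflow. */
        return realloc(screens, elem_size * (size_t)(screenNum + 1));
    }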
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Reviewed-by: Jamey Sharp <jamey@minilop.net>
(cherry picked from commit feebf67463)
xserver's VESA driver's VBE (VESA BIOS Extensions) code
includes a PanelID probe, which can get a monitor's native
resolution. From this, using CVT formulas, it derives
horizontal sync rate and vertical refresh rate ranges.
However, it only derives the upper bounds of the ranges;
the lower bounds cannot be derived. By default, they are set
to hardcoded constants which represent the lowest supported
resolution: 640x480. The constants in vbe.c, however, were
not actually derived from formulas, but carried over from
other code from the bad old days, and are not relevant
to flat panel displays. This caused, for example, the EEEPC701's
panel, with a native resolution of 800x480, to end up with
an upper bound of the horizontal sync rate that was lower
than the hardcoded lower bound, which of course broke things.
These numbers have been rederived using both my own CVT tool
based on xf86CVTMode() and the 'cvt' tool that comes with
xserver.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
(cherry picked from commit f0d50cc665)
Signed-off-by: Jeremy Huddleston <jeremyhu@apple.com>
Reviewed-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
(cherry picked from commit f405dfffe7)
At least one revision of the AAO reports a 190x110mm maximum size but a
451x113mm mode.
X.Org Bug 41141 <https://bugs.freedesktop.org/show_bug.cgi?id=41141>
Signed-off-by: Ross Burton <ross@linux.intel.com>
Reviewed-by: Daniel Stone <daniel@fooishbar.org>
Reviewed-by: Jeremy Huddleston <jeremyhu@apple.com>
(cherry picked from commit 58864146fb)
The Resource database is reset upon regeneration and so the dri2 module
needs to re-register its RESTYPE for the drawable or else it will
clobber the next unsuspecting user of the database. Fortunately, DRI2 is
loaded late in the initialisation sequence and was last up until
xf86-video-intel started using the Resource database to track
outstanding swaps...
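A hedged sketch of the per-generation registration this implies, assuming
the usual dix resource API from the xserver tree (not the literal dri2.c
change); the sketch function name is invented:

    #include "resource.h"   /* xserver tree headers, assumed available */
    #include "os.h"

    static RESTYPE dri2DrawableRes;

    static int
    DRI2DrawableGone(void *value, XID id)
    {
        /* free the per-drawable DRI2 state attached to this resource */
        return Success;
    }

    /* Extension init runs every server generation, so creating the
     * resource type here re-registers it after each reset. */
    void
    DRI2ExtensionInitSketch(void)
    {
        dri2DrawableRes = CreateNewResourceType(DRI2DrawableGone,
                                                "DRI2Drawable");
        if (!dri2DrawableRes)
            FatalError("DRI2: failed to register drawable resource type\n");
    }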
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jeremy Huddleston <jeremyhu@apple.com>
Tested-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
(cherry picked from commit 34b0e4eee9)
memType is a uint64_t on powerpc. Using memType only really makes
sense for *physical* addresses, which can be 64-bit for 32-bit
systems running on 64-bit hardware.
However, unmapVidMem() only deals with *virtual* addresses, which
are guaranteed to fit into an uintptr_t.
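A small illustrative sketch (not the actual unmapVidMem() code) of why
uintptr_t is the right integer type for a mapped virtual address:

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>

    /* The base passed in is a virtual address from mmap(), so it always
     * fits in uintptr_t, even on a 32-bit system whose physical address
     * type (memType) is 64 bits wide. */
    static void
    unmap_vidmem_sketch(void *base, size_t size)
    {
        uintptr_t addr = (uintptr_t)base;   /* safe for a virtual address */
        (void)addr;                         /* e.g. for logging/rounding  */
        (void)munmap(base, size);
    }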
Signed-off-by: Jeremy Huddleston <jeremyhu@apple.com>
Reviewed-by: Mark Kettenis <kettenis@openbsd.org>
(cherry picked from commit eb3377ffb8)
https://bugs.freedesktop.org/show_bug.cgi?id=38420
Exit with a fatal error message instead of segfaulting.
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Reviewed-by: Jeremy Huddleston <jeremyhu@apple.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
(cherry picked from commit 7d50211ab5)
If you start an X server with no connected outputs, we pick a default
1024x768 mode; however, if you then run an XVidMode-using app against that
server, it segfaults the server because it cannot find any valid modes.
This was because the no-output mode setup code only added the modes to
scrn->modes once; when something used RandR 1.2, xf86SetScrnInfoModes would
get called, remove all the modes, and we'd end up with zero.
This change fixes xf86SetScrnInfoModes to always report a scrn mode of at
least 1024x768, and pushes the initial configuration to just call it instead
of setting up the mode itself.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=746926
I've seen other bugs like this on other distros, so this might actually fix those as well.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
(cherry picked from commit 17416e88dc)
Even though it's only valid when local, it is possible for a local
client and the server to not match endianness, such as when running
a ppc application under Rosetta.
Signed-off-by: Jeremy Huddleston <jeremyhu@apple.com>
(cherry picked from commit 1c8bda798b)
Even though it's only valid when local, it is possible for a local
client and the server to not match endianness, such as when running
a ppc application under Rosetta.
Signed-off-by: Jeremy Huddleston <jeremyhu@apple.com>
Reviewed-by: Jamey Sharp <jamey@minilop.net>
(cherry picked from commit 14205ade0c)
Conflicts:
hw/xquartz/xpr/appledri.c
Signed-off-by: Jeremy Huddleston <jeremyhu@apple.com>
This was a regression.
Introduced by: 08363c5830 and
32db27a7f8
Masked by: 1e69fd4a60
Signed-off-by: Jeremy Huddleston <jeremyhu@apple.com>
Reviewed-by: Jamey Sharp <jamey@minilop.net>
(cherry picked from commit 83fef4235d)
Bug introduced by 9dca441670
xfree86: add a hook to replace the new console handler.
console_handler was not being set, making the server eat up CPU spinning
in WaitForSomething selecting consoleFd over and over again, every time
trying to unregister drain_console without success due to
console_handler being NULL.
Let's just fix the unregistration in xf86SetConsoleHandler() and use that.
But wait, there could be a catch: If some driver replaced the handler using
xf86SetConsoleHandler(), the unregistration in xf86CloseConsole will unregister
that one. I don't understand Xorg well enough to know whether this poses a
problem (could mess up driver deinit somehow or something like that). As it is,
xf86SetConsoleHandler() doesn't offer any way to prevent this (i.e. check which
handler is currently registered).
I have been using this for two days on a machine that previously hit 100% CPU
several times a day. The problem has gone away without any new issues appearing.
Signed-off-by: Tomas Trnka <tomastrnka@gmx.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
(cherry picked from commit 323869f329)
It's fairly common to have multiple, identical monitors plugged in. In
that case, it's preferable to run the monitor's preferred mode on each
output, rather than just matching the width & height and ending up with
different timings or refresh rates.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
(cherry picked from commit 3e145d3d67)
The video driver ABI was bumped to 11.0 in commit
0de7cec907 because of a change to the
size of ATOM in commit 51f353d0a0. This
also affects extension modules, so the extension ABI version should
have been bumped too.
Signed-off-by: Aaron Plattner <aplattner@nvidia.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Signed-off-by: Keith Packard <keithp@keithp.com>
Fixes assertion failure when calling dixSetPrivate
Debian bug#632549 <http://bugs.debian.org/632549>
Reported-and-tested-by: Mohammed Sameer <msameer@foolab.org>
Signed-off-by: Julien Cristau <jcristau@debian.org>
Reviewed-by: Daniel Stone <daniel@fooishbar.org>
Signed-off-by: Keith Packard <keithp@keithp.com>
Instead of just closing the log when everything is done, put one more
message in stating that we're actually terminating. Users or scripts that
look at the Xorg.log will then know that a) the server has terminated
properly and b) why the server terminated (to some degree, given that most
real-world errors will be caused by AbortServer()).
Acked-by: Gaetan Nadon <memsize@videotron.ca>
Reviewed-by: Daniel Stone <daniel@fooishbar.org>
Tested-by: Jeremy Huddleston <jeremyhu@apple.com>
Reviewed-by: Jeremy Huddleston <jeremyhu@apple.com>
Tested-by: Jon TURNEY <jon.turney@dronecode.org.uk>
Reviewed-by: Jon TURNEY <jon.turney@dronecode.org.uk>
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
For hotplugged devices, xf86AllocateInput does that for us but the xorg.conf
path is different. Since not all drivers reset the fd during PreInit but may
still call close(pInfo->fd) in all cases, this can terminate the logging
early.
Reproducible: add a wacom driver InputDevice section with no Option Device.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Daniel Stone <daniel@fooishbar.org>
xf86ConfigLayout.inputs contains the information from the xorg.conf
file. Passing this into xf86NewInputDevice means the device will get
cleaned up on exit and the pointers in xf86ConfigLayout.inputs are left
dangling. In the second server generation, this results in a server
crash.
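A generic sketch of the ownership hazard (types and function names below are
invented, not the actual xf86 code): if the consumer frees what it is given,
hand it a copy rather than the config list's own entry.

    #include <stdlib.h>
    #include <string.h>

    struct input_info { char *driver; char *identifier; };

    /* The device layer takes ownership and frees its record on shutdown. */
    extern int new_input_device(struct input_info *info);

    static int
    add_configured_device(const struct input_info *configured)
    {
        struct input_info *copy = malloc(sizeof(*copy));
        if (!copy)
            return -1;
        copy->driver = strdup(configured->driver);
        copy->identifier = strdup(configured->identifier);
        /* The device layer may free 'copy' on shutdown or regeneration;
         * the xorg.conf-derived 'configured' record stays valid. */
        return new_input_device(copy);
    }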
Also, rename pDev to pInfo. pDev is pretty much reserved for DeviceIntPtr
types.
Reproducible: AutoAddDevices off and xorg.conf input sections, trigger
server regeneration.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Daniel Stone <daniel@fooishbar.org>
Devices that succeeded during PreInit and DEVICE_INIT but failed in
DEVICE_ON would be deleted through xf86DeleteInput but not removed from the
list of input devices (and not turned off). The result was a double free on
server shutdown.
Fix this by calling RemoveDevice if EnableDevice fails.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Daniel Stone <daniel@fooishbar.org>