Even if the client is no longer emulating (and we're not queuing an
event) our local touch must be ended. Otherwise our touch will live on
forever, despite the client thinking it has ended properly.
This can't be triggered with libei because we can't send touch events
while not emulating and the only way to get into this state is pausing
the device - which already resets the state. But let's add a test case
anyway, in the hope that one day it picks up a bug.
Part-of: <https://gitlab.freedesktop.org/libinput/libei/-/merge_requests/368>
This makes peck_new_context() take variable arguments in the style
key, v1, v2... where each value is defined by the free-form key.
What we need right now is the mode of the context, so let's add that.
In the future we will add more configurable bits here.
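Purely to illustrate the intended call shape - the key string, the value and the NULL terminator below are assumptions, not the actual peck API:
```
/* Hypothetical usage shape only; none of these names are the real test API. */
struct peck *peck = peck_new_context("mode", "receiver", NULL);
```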
Part-of: <https://gitlab.freedesktop.org/libinput/libei/-/merge_requests/351>
This event is required to fix an issue with the current
ei_callback.done handling in libeis: previously we would immediately send
the ei_callback.done upon receiving ei_connection.sync from the client.
This results in incorrect behavior as we may have events in the queue
(and/or pending a frame) that the EIS implementation hasn't seen yet.
For example a client sending:
- ei_pointer.motion
- ei_connection.sync
- ei_device.frame
- ei_connection.sync
will queue a motion + frame event on the EIS side during eis_dispatch()
but immediately receive done events for both sync requests.
This could be handled purely internally by keeping the sync event in the
queue but hidden from the caller - and automatically calling done when
it's that event's turn, i.e. something like:
```
struct eis_event *eis_get_event(struct eis *eis) {
        struct eis_event *e = first_event(eis);
        if (eis_event_get_type(e) == EIS_EVENT_SYNC) {
                eis_callback_send_done(e);
                eis_event_unref(e);
                e = next_event(eis);
        }
        return e;
}
```
but that opens us up to a set of potential bugs detailed in
https://gitlab.freedesktop.org/libinput/libei/-/issues/71#note_2694603
So let's go the easy route by having a new event type that does nothing
other than eis_event_unref() in the EIS implementation. This way we can
queue the event and have everything behave in-order and as expected.
Closes #71
Part-of: <https://gitlab.freedesktop.org/libinput/libei/-/merge_requests/316>
In the protocol it's a new request/event that is sent instead of the
touch up event.
In the library this is implemented as ei_touch_cancel(), which
transparently sends either cancel or up, depending on what the EIS
implementation supports. This is mirrored for the EIS implementation.
Where touch cancel is received as an event it is presented
as EI_EVENT_TOUCH_UP with an ei_event_touch_is_cancel() flag to
check if it was a cancel. This is required for backwards compatibility:
we cannot replace the TOUCH_UP event with a TOUCH_CANCEL event without
breaking existing callers. To add a new event type we would need clients
to announce support for those event types, but that's an effort that's
better postponed until we have a stronger need for it (see #68).
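A minimal receiver-side sketch of that check; the handler functions are hypothetical, the rest uses the libei event calls as spelled in the released API:
```
#include <libei.h>

/* Hypothetical handlers, stand-ins for real client logic. */
static void on_touch_cancel(struct ei_event *event) { (void)event; }
static void on_touch_up(struct ei_event *event) { (void)event; }

/* A TOUCH_UP event may actually be a cancel, signalled by the
 * ei_event_touch_is_cancel() flag described above. */
static void handle_event(struct ei_event *event)
{
        switch (ei_event_get_type(event)) {
        case EI_EVENT_TOUCH_UP:
                if (ei_event_touch_is_cancel(event))
                        on_touch_cancel(event);
                else
                        on_touch_up(event);
                break;
        default:
                break;
        }
}
```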
Closes #60
Part-of: <https://gitlab.freedesktop.org/libinput/libei/-/merge_requests/308>
Using MAP_PRIVATE means copy-on-write, so our data written into the map
immediately disappears again. This leads to an empty string when sending
a keymap to a client.
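A minimal sketch of the difference (not the actual libei code; the function and variable names are illustrative): with MAP_PRIVATE the memcpy below is copy-on-write and never reaches the memfd the client reads from, with MAP_SHARED it does.
```
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Illustrative only: write a keymap string into a memfd-backed mapping so
 * that the client reading from the fd sees the data. */
static int write_keymap(int memfd, const char *keymap, size_t sz)
{
        if (ftruncate(memfd, (off_t)sz) < 0)
                return -1;

        char *map = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                         MAP_SHARED /* not MAP_PRIVATE */, memfd, 0);
        if (map == MAP_FAILED)
                return -1;

        memcpy(map, keymap, sz);
        munmap(map, sz);
        return 0;
}
```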
We don't want to paper over bugs in the implementation, so let's make
any ei/eis error message with a bug in it fatal by default.
This needs to be disabled where we test for known-buggy client/EIS
behavior.
This allows a caller to match up a region with other data, e.g. in the
remote desktop case the same mapping_id can be assigned to the pipewire
stream that represents that output.
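A hedged EIS-side sketch, assuming the eis_region_set_mapping_id() setter as spelled in current libeis; the wrapper function and the id string are made up:
```
#include <libeis.h>

/* Illustrative: tag the region with the same id that is attached to the
 * PipeWire stream showing this output; the libei client can read it back
 * (e.g. via ei_region_get_mapping_id()) to find the matching stream. */
static void tag_region(struct eis_region *region, const char *stream_mapping_id)
{
        eis_region_set_mapping_id(region, stream_mapping_id);
}
```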
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Same as the corresponding ei change a few commits ago, this one does all
the EIS renaming in the same manner.
As with the libei changes, an EIS implementation must now handle the
EIS_DEVICE_CAP_BUTTON and EIS_DEVICE_CAP_SCROLL capabilities. In
virtually all cases, clients will likely expect that a device with the
pointer or absolute pointer capabilities will also have button and
scroll capabilities.
Now that the protocol interfaces are more fine-grained, let's match this
with the C API too.
This is just a rename of things so that in general
ei_pointer_<foo> now becomes ei_<foo>.
A few notable renames for better readability here:
- ei_device_scroll_delta (because scroll_scroll is awkward)
- ei_event_scroll_get_dx/dy and
ei_event_scroll_get_discrete_dx/dy to indicate the delta-ness
Beyond that, clients must make sure they check for, and bind to, the new
EI_DEVICE_CAP_BUTTON and EI_DEVICE_CAP_SCROLL capabilities to be able
to send button or scroll events.
Note that this API now allows for an EIS implementation to send a device
that only has a button or a scroll cap. Or a pointer cap without
buttons, etc. It's up to the clients how to handle such devices
(probably: ignore them).
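A hedged client-side sketch of the new requirement, assuming ei_seat_bind_capabilities(), ei_device_has_capability() and ei_device_button_button() as spelled in the released API; the wrapper functions are illustrative:
```
#include <stdint.h>
#include <linux/input-event-codes.h>
#include <libei.h>

/* Bind button/scroll alongside pointer when the seat appears. */
static void on_seat_added(struct ei_seat *seat)
{
        ei_seat_bind_capabilities(seat,
                                  EI_DEVICE_CAP_POINTER,
                                  EI_DEVICE_CAP_BUTTON,
                                  EI_DEVICE_CAP_SCROLL,
                                  NULL);
}

/* Only send button events on devices that actually carry that cap. */
static void click(struct ei_device *device, uint64_t now)
{
        if (!ei_device_has_capability(device, EI_DEVICE_CAP_BUTTON))
                return; /* e.g. a scroll-only device: ignore it */
        ei_device_button_button(device, BTN_LEFT, true);
        ei_device_button_button(device, BTN_LEFT, false);
        ei_device_frame(device, now);
}
```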
`memfd_create` doesn't seem to be supported on
all platforms (e.g. Ubuntu 18 has trouble with it).
Even though I was able to substitute `memfd_create`
with a direct system call, I was not able to get
the `MFD_CLOEXEC` flag (from fcntl.h) working cleanly
(there were redefinitions/conflicts with other
structures when trying to use <linux/*> headers).
Make it optional for the time being until we have
figured out how to make it work broadly.
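A sketch of the substitution discussed above - an assumption, not the actual patch - calling the raw syscall where available and defining MFD_CLOEXEC locally to avoid pulling in the conflicting kernel headers:
```
#include <sys/syscall.h>
#include <unistd.h>

#ifndef MFD_CLOEXEC
#define MFD_CLOEXEC 0x0001U
#endif

/* Illustrative fallback wrapper for platforms lacking the libc wrapper. */
static int compat_memfd_create(const char *name, unsigned int flags)
{
#ifdef SYS_memfd_create
        return (int)syscall(SYS_memfd_create, name, flags);
#else
        (void)name;
        (void)flags;
        return -1; /* not available on this platform */
#endif
}
```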
The check is currently missing from a number of libeis APIs but in most
cases we can blame the EIS implementation and say "don't do this".
Device removal is an exception since that is still required.
This makes it easier to correlate a particular input transaction
(whether there are events or not) with out-of-band information like the
planned portal InputCapture::Activated signal's "activation-id".
All we do here is decide whether the connect event gets handled; clients
are always effectively connected (i.e. the client does send the connect
request) since we set up the backend during init.
Incoming device events are now added to a device-internal queue. Once
the frame event comes in, that queue is shuffled over to the main event
queue. For libei/the EIS implementation this means that device events
are seen only once the frame event appears from the sender (or it is
emulated by other means).
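A minimal sketch of that buffering; the types and helpers below are illustrative stand-ins, not the actual libei internals:
```
#include <stddef.h>

#define MAX_EVENTS 64

struct event { int type; };              /* stand-in for the real event */
enum { EVENT_FRAME = 1, EVENT_MOTION };  /* stand-in event types */

struct queue { struct event items[MAX_EVENTS]; size_t count; };

static void queue_push(struct queue *q, struct event e)
{
        if (q->count < MAX_EVENTS)
                q->items[q->count++] = e;
}

/* Called for each incoming device event. */
static void device_handle_event(struct queue *main_queue,
                                struct queue *device_queue,
                                struct event e)
{
        if (e.type != EVENT_FRAME) {
                queue_push(device_queue, e);  /* hold back until the frame */
                return;
        }

        /* Frame received: flush the device queue into the main queue ... */
        for (size_t i = 0; i < device_queue->count; i++)
                queue_push(main_queue, device_queue->items[i]);
        device_queue->count = 0;

        /* ... and append the frame event itself. */
        queue_push(main_queue, e);
}
```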
Currently only implemented for frame events, the vague plan for the
future is to merely queue the device events internally and "release"
them once a frame event was received, retrofitting the timestamp to the
C event struct (i.e. making ei_event_get_time() available on all device
events).
Meanwhile, the frame event it is.
Because we're doing this per dispatch call (rather than a device state)
we need to ensure that the various tests gobble up all pending frame
events - the assert_no_events helpers do this.
If we only check for specific events, a frame event may still be pending
from one interaction, causing the assertion to fail on the
subsequent dispatch call.
We only need frame events after device events (pointer, touch,
keyboard). In some cases, the library prevents an event from being
written to the wire, e.g. if the coordinates are out of region, but the
client will still call ei_device_frame() for that now-filtered event.
Keep a boolean to remember if we have sent something that requires a
frame event and filter accordingly.
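An illustrative sketch of that boolean; the types and functions below are stand-ins, not the actual libei code:
```
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct fake_device {
        bool needs_frame;  /* set once something was actually written */
};

static void wire_send(const char *what) { printf("wire: %s\n", what); }

static void send_motion(struct fake_device *d, double dx, double dy)
{
        /* Stand-in for the library filtering an event, e.g. because the
         * coordinates are out of region: nothing hits the wire. */
        if (dx == 0.0 && dy == 0.0)
                return;
        wire_send("motion");
        d->needs_frame = true;
}

static void send_frame(struct fake_device *d, uint64_t time)
{
        (void)time;
        if (!d->needs_frame)
                return;  /* the frame would be empty: filter it */
        wire_send("frame");
        d->needs_frame = false;
}
```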
Note that this currently filters the *sender* side only, not the
receiver side. A sender that does get an empty frame event onto the wire
will still have it delivered to the other side.
This also doesn't handle the flushing of frame events before other
events, ideally we should enforce a frame event before e.g. stop
emulating.
Destroying a touch causes ei_touch_up if the touch is still down. But
for the event sequence to be correct we also need to add a frame event
here, otherwise the touch up may be "pending" on the remote until the
next actual event happens and a frame is added.
A client that doesn't want this should just call ei_touch_up() first.
With passive libei contexts receiving events sent by the EIS
implementation, the type of device changes significantly. While a
relative input device could still send data in logical pixels,
absolute devices may not have that luxury.
Best example here is an external tablet (think: Wacom Intuos): that
tablet has no built-in mapping to a screen and thus cannot capture input
events in logical pixels.
Address this by adding a device type, either virtual or physical.
In terms of functionality, the device's type decides:
- only virtual devices have regions
- only physical devices have a size
The event API remains as-is but the event data now represents either
logical pixels (virtual devices) or mm (physical devices).
An EIS implementation connected to a passive libei context would likely
create:
- a virtual relative device (sending deltas in logical pixels)
- one or more physical absolute devices (sending positions in mm)
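A hedged client-side sketch, assuming ei_device_get_type() and the EI_DEVICE_TYPE_* values as spelled in the released API; the per-type handlers are hypothetical:
```
#include <libei.h>

/* Hypothetical per-type handlers, stand-ins for real client logic. */
static void handle_virtual_device(struct ei_device *device) { (void)device; }
static void handle_physical_device(struct ei_device *device) { (void)device; }

static void dispatch_device(struct ei_device *device)
{
        switch (ei_device_get_type(device)) {
        case EI_DEVICE_TYPE_VIRTUAL:
                /* events are in logical pixels, regions are available */
                handle_virtual_device(device);
                break;
        case EI_DEVICE_TYPE_PHYSICAL:
                /* events are in mm, the device has a physical size */
                handle_physical_device(device);
                break;
        }
}
```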
This is a leftover from an earlier implementation that didn't get
removed in time. This extends to a macro that was using the context flag
(rather than the client flag) and in turn caused a bunch of false
positives on the tests.
This effectively provides the EIS implementation with a notification
that the client will actually send events in the near future. To be used
by e.g. synergy-like clients when the pointer enters the logical screen
so that the EIS implementation can flash a warning or something.
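A hedged sketch of a synergy-like client using this; the sequence argument matches the current libei signature, the request may have taken no arguments at the time of this change:
```
#include <stdint.h>
#include <libei.h>

/* Announce upcoming events when the pointer crosses into this host's
 * logical screen, and stop again when it leaves. */
static void pointer_entered_screen(struct ei_device *pointer, uint32_t *seq)
{
        ei_device_start_emulating(pointer, ++(*seq));
        /* ... send motion/button events and frames while the pointer is here ... */
}

static void pointer_left_screen(struct ei_device *pointer)
{
        ei_device_stop_emulating(pointer);
}
```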
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
This allows a client to trigger kinetic scrolling (or prevent it).
For compositors implementing EIS, the only realistic scroll source is
continuous, which allows for scroll stop events. So let's give the client
the opportunity to trigger those on demand.
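A hedged client-side sketch, assuming the scroll stop/cancel pair as spelled in the released API (this change may have introduced only the stop variant):
```
#include <stdbool.h>
#include <stdint.h>
#include <libei.h>

/* End a scroll sequence and either allow the receiving side to start
 * kinetic scrolling or suppress it. */
static void end_scroll(struct ei_device *device, uint64_t now, bool want_kinetic)
{
        if (want_kinetic)
                ei_device_scroll_stop(device, true, true);   /* stop on both axes */
        else
                ei_device_scroll_cancel(device, true, true); /* no kinetic scroll */
        ei_device_frame(device, now);
}
```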
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Already present in e.g. libinput and wayland, this event allows us to
group several events together to denote them as a logical group.
Required for multi-touch but as we've learned with Wayland it's also
required to group other events (scroll events in the case of Wayland).
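A hedged sender-side sketch of such a group, using the touch and frame calls as spelled in the released libei API:
```
#include <stdint.h>
#include <libei.h>

/* Two touch-downs belonging to one logical update are grouped by a single
 * frame event. The touch handles would be kept around until the matching
 * ei_touch_up()/ei_touch_unref(); they are dropped here for brevity. */
static void two_finger_down(struct ei_device *device, uint64_t now)
{
        struct ei_touch *t1 = ei_device_touch_new(device);
        struct ei_touch *t2 = ei_device_touch_new(device);

        ei_touch_down(t1, 100.0, 200.0);
        ei_touch_down(t2, 180.0, 210.0);
        ei_device_frame(device, now);  /* both downs form one logical group */
}
```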
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Since the server controls the keymap, and that keymap is likely merged
with some other device, let's add the events so we can notify the client of
things like numlock-is-down etc.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
This is slightly inconsistent with the configure API but more consistent
with the device API (which also has a new() + add()). It reduces
potential bugs though because the region cannot be added to two devices
anymore, and this way we also get a context in the region from the start
(which means we can log).
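A hedged sketch of the resulting flow on the EIS side, with the region calls as spelled in the released libeis API:
```
#include <libeis.h>

/* The region is created from the device (so it has a context and can log),
 * configured, then added - it can no longer end up on two devices. */
static void add_region(struct eis_device *device)
{
        struct eis_region *region = eis_device_new_region(device);
        eis_region_set_offset(region, 0, 0);
        eis_region_set_size(region, 1920, 1080);
        eis_region_add(region);
        eis_region_unref(region);  /* the device keeps its own reference */
}
```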
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
This isn't something that libei itself uses but clients like synergy
need to know about this to be able to map relative pointer motion from
one host into the right physical pixel on another host.
This is required for mutter in the x11-compat mode where a 4k screen is
logically twice the size of a 2k screen, despite having the same
physical size.
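A hedged EIS-side sketch, assuming eis_region_set_physical_scale() as spelled in the released API; a libei client would read the scale back (e.g. via ei_region_get_physical_scale()) and multiply logical coordinates by it to land on the right physical pixel:
```
#include <libeis.h>

/* A 4k output that covers the same logical area as a 2k output gets a
 * logical-to-physical scale of 2. */
static void configure_4k_region(struct eis_region *region)
{
        eis_region_set_offset(region, 0, 0);
        eis_region_set_size(region, 2048, 1152);     /* logical size */
        eis_region_set_physical_scale(region, 2.0);  /* 4096x2304 physical pixels */
        eis_region_add(region);
}
```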
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>