In the case of a refused portal connection we get disconnected before we
have had a chance to send the CONNECT message. This still needs to queue a
disconnect event so that it bubbles back up to the caller.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
If we have libreis setting properties on the connection, this looks to
the server like they're coming from the ei connection. But libreis is a
different context from the libei context that comes later, so when libei
initializes, it may set the same properties again. Since the libei
context doesn't know about those properties, it can't filter them
internally and will send them to the server.
So we need to do all the permission checks in the server to make sure
we don't overwrite values that we're not allowed to overwrite.
There are no real restrictions on changing properties from within the
eis implementation (other than not being able to set the reserved
namespaces).
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Pass the fd into the original context creation, then write any changes
to the wire immediately. For the capabilities this means we can no longer
build them up incrementally as before, so change the API to a vararg
function and require the allowed capabilities to be passed in.
There's likely little use for the previous allow-vs-deny policy etc., so
let's not make things more complicated than they have to be.
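A minimal sketch of the vararg pattern, using made-up sketch_* names
rather than the real libeis API: the caller hands in every allowed
capability at once, terminated by 0, and we collapse them into a mask
that can be written to the wire immediately.

    #include <stdarg.h>
    #include <stdint.h>

    enum sketch_capability {
        SKETCH_CAP_POINTER = 1,
        SKETCH_CAP_POINTER_ABSOLUTE,
        SKETCH_CAP_KEYBOARD,
        SKETCH_CAP_TOUCH,
    };

    /* Collect the allowed capabilities passed as varargs (0-terminated)
     * into a single bitmask. */
    static uint32_t
    sketch_collect_caps(enum sketch_capability first, ...)
    {
        uint32_t mask = 0;
        va_list args;

        va_start(args, first);
        for (int cap = first; cap != 0; cap = va_arg(args, int))
            mask |= 1u << cap;
        va_end(args);

        return mask;
    }

    /* usage: mask = sketch_collect_caps(SKETCH_CAP_POINTER, SKETCH_CAP_KEYBOARD, 0); */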
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
There is data that libei and the EIS implementation will want to
exchange that is not covered by the immediate API.
To avoid having to add APIs for all of these, let's provide a generic
property API that both server and client can use to exchange this info.
The property API provides read/write/delete permissions but those only
apply to the client, not the server. The idea is that a server can
create (or restrict) properties that the client can read but not modify
and/or delete. A special case is the set of properties filled in
automatically by libei: ei.application.pid and ei.application.cmdline.
These could be used by e.g. the portal implementation to match permissions.
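A sketch of how such an API could look from the EIS side, using assumed
sketch_* names and permission flags (the real function names and flag
values may differ):

    #include <stdint.h>

    /* Assumed names and flags, for illustration only. */
    struct sketch_eis_client;

    enum sketch_property_permissions {
        SKETCH_PROPERTY_READ   = (1 << 0),
        SKETCH_PROPERTY_WRITE  = (1 << 1),
        SKETCH_PROPERTY_DELETE = (1 << 2),
    };

    /* Create or update a property; the permissions only restrict the
     * client, the server itself can always change the value. */
    void sketch_client_property_set(struct sketch_eis_client *client,
                                    const char *name, const char *value,
                                    uint32_t client_permissions);

    static void
    sketch_tag_client(struct sketch_eis_client *client)
    {
        /* The client may read this but neither modify nor delete it. */
        sketch_client_property_set(client, "org.example.sandboxed", "yes",
                                   SKETCH_PROPERTY_READ);
    }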
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
This allows us to transmit extra information about the client before the
server has to decide whether it wants to connect us.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
This effectively provides the EIS implementation with a notification
that the client will actually send events in the near future. To be used
by e.g. synergy-like clients when the pointer enters the logical screen
so that the EIS implementation can flash a warning or something.
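As a sketch, with assumed names (the actual request may be spelled
differently), a synergy-like client would announce the upcoming events
right when the pointer crosses onto the logical screen:

    /* Assumed names, not the actual libei API. */
    struct sketch_device;

    void sketch_device_start_emulating(struct sketch_device *device);
    void sketch_device_pointer_motion(struct sketch_device *device,
                                      double dx, double dy);

    static void
    sketch_pointer_entered_screen(struct sketch_device *device)
    {
        /* Tell the EIS implementation that events are imminent so it
         * can e.g. flash a warning before the first motion arrives. */
        sketch_device_start_emulating(device);
        sketch_device_pointer_motion(device, 5.0, 0.0);
    }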
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
This allows a client to trigger kinetic scrolling (or prevent it).
For compositors implementing EIS, the only realistic scroll source is
continuous scrolling, which allows for scroll stop events. So let's give
the client the opportunity to trigger those on demand.
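Roughly like this, with assumed sketch_* names rather than the real API:
the client sends continuous scroll deltas and then an explicit
scroll-stop when the fingers lift, which is what gives the compositor
the chance to trigger kinetic scrolling on demand.

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed names, not the actual libei API. */
    struct sketch_device;

    void sketch_scroll_delta(struct sketch_device *d, double dx, double dy);
    void sketch_scroll_stop(struct sketch_device *d, bool stop_x, bool stop_y);
    void sketch_frame(struct sketch_device *d, uint64_t time_us);

    static void
    sketch_finish_scroll(struct sketch_device *d, uint64_t now_us)
    {
        sketch_scroll_delta(d, 0.0, 15.0);
        sketch_frame(d, now_us);

        /* fingers lifted: let the EIS side kick off kinetic scrolling */
        sketch_scroll_stop(d, false, true);
        sketch_frame(d, now_us);
    }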
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Already present in e.g. libinput and Wayland, this event allows us to
group several events together into one logical group.
Required for multi-touch, but as we've learned with Wayland it's also
required to group other events (scroll events in the case of Wayland).
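For illustration, with assumed names: two touch points that logically
belong together are sent first, and the frame event then closes the
group.

    #include <stdint.h>

    /* Assumed names, not the actual libei API. */
    struct sketch_device;

    void sketch_touch_down(struct sketch_device *d, uint32_t touchid,
                           double x, double y);
    void sketch_frame(struct sketch_device *d, uint64_t time_us);

    static void
    sketch_two_finger_touch(struct sketch_device *d, uint64_t now_us)
    {
        sketch_touch_down(d, 0, 100.0, 120.0);
        sketch_touch_down(d, 1, 200.0, 120.0);
        /* everything since the last frame is one logical group */
        sketch_frame(d, now_us);
    }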
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Since the server controls the keymap, and that keymap is likely merged
with some other device's, let's add events so we can notify the client of
things like numlock-is-down etc.
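The shape of the data is roughly xkb-style modifier state; the struct
below is an assumption for illustration, not the actual libei event
type.

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed layout, for illustration only. */
    struct sketch_modifier_event {
        uint32_t depressed;
        uint32_t latched;
        uint32_t locked;
        uint32_t group;
    };

    static bool
    sketch_numlock_is_locked(const struct sketch_modifier_event *ev,
                             uint32_t numlock_mask)
    {
        /* numlock_mask is looked up from the keymap the server sent us */
        return (ev->locked & numlock_mask) != 0;
    }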
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
All the switch cases return early, so moving the return here means we can
drop the default case and have the compiler warn us about missing cases.
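A minimal example of the pattern with illustrative names: every case
returns early, the fallback return sits after the switch, and without a
default case -Wswitch can flag any enum value we forgot to handle.

    #include <stddef.h>

    enum sketch_event_type {
        SKETCH_EVENT_MOTION,
        SKETCH_EVENT_BUTTON,
        SKETCH_EVENT_KEY,
    };

    static const char *
    sketch_event_name(enum sketch_event_type type)
    {
        switch (type) {
        case SKETCH_EVENT_MOTION: return "motion";
        case SKETCH_EVENT_BUTTON: return "button";
        case SKETCH_EVENT_KEY:    return "key";
        }
        /* only reached for genuinely invalid values; no default case,
         * so the compiler warns when a case is missing above */
        return NULL;
    }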
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
There's nothing in the protocol to modify the client device state from
the server, so a pause/resume cycle must leave the client with the
same(-ish) state. Pause is really just that, a short "no event now
please". Anything that would require e.g. modifying the device state by
releasing keys or buttons should result in the device being removed and
re-added.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Filters out the likely bug of a caller hoping that buttons 1/2/3 will
work (as opposed to BTN_LEFT, etc.)
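A sketch of the check, assuming a cutoff at BTN_MISC (0x100); the exact
range libei accepts may differ, but the idea is that plain 1/2/3 are
rejected while BTN_LEFT (0x110) and friends pass.

    #include <linux/input-event-codes.h>
    #include <stdbool.h>

    static bool
    sketch_button_code_is_valid(unsigned int button)
    {
        /* 1/2/3 are X11-style button numbers, not evdev codes; reject them */
        return button >= BTN_MISC;
    }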
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
All of this is always the same sequence of steps, so let's use a macro
and limit our code to the bits we actually need to do.
This requires renaming some of the protocol itself to match the
expectations - not the worst thing, since it makes the protocol more
consistent.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
This gets rid of one layer of indirection with the struct message in
between - we can now call from the proto parser into the handling code
directly.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
This keeps most of the current structure but gets rid of client-side
keymaps (which have been broken since the move to server-side devices
anyway).
The new approach: clients get a keymap (or NULL) from the server; if
they don't like it, they have to do the transformation on their side.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
This is slightly inconsistent with the configure API but more consistent
with the device API (which also has a new() + add()). It reduces
potential bugs, though, because the region can no longer be added to two
devices, and this way we also get a context in the region from the start
(which means we can log).
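As a sketch with assumed names and signatures: the region is created
from the device, configured, then added, so it can only ever end up on
that one device and carries the device's context from the start.

    #include <stdint.h>

    /* Assumed names, not the actual libeis API. */
    struct sketch_device;
    struct sketch_region;

    struct sketch_region *sketch_device_new_region(struct sketch_device *device);
    void sketch_region_set_offset(struct sketch_region *region, uint32_t x, uint32_t y);
    void sketch_region_set_size(struct sketch_region *region, uint32_t w, uint32_t h);
    void sketch_region_add(struct sketch_region *region);

    static void
    sketch_add_monitor_region(struct sketch_device *device)
    {
        struct sketch_region *region = sketch_device_new_region(device);

        sketch_region_set_offset(region, 0, 0);
        sketch_region_set_size(region, 1920, 1080);
        sketch_region_add(region); /* cannot end up on a second device */
    }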
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
This isn't something that libei itself uses, but clients like synergy
need to know about it to map relative pointer motion from one host onto
the right physical pixel on another host.
This is required for mutter in the x11-compat mode where a 4k screen is
logically twice the size of a 2k screen, despite having the same
physical size.
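A worked example of the x11-compat case (the numbers are invented): with
the same physical width, the 4k screen has twice the pixel density, so a
client needs the physical size to map motion correctly.

    #include <stdio.h>

    int
    main(void)
    {
        /* two screens with identical physical width but different resolutions */
        const double phys_width_mm = 600.0;
        const double px_4k = 3840.0, px_2k = 1920.0;

        /* pixels per millimetre differ by 2x despite the same physical size */
        printf("4k: %.2f px/mm, 2k: %.2f px/mm\n",
               px_4k / phys_width_mm, px_2k / phys_width_mm);
        return 0;
    }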
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
This is required for supporting synergy/barrier and similar clients.
Replacing the touch and pointer range, we now have server-defined
rectangular regions that specify the active zones for this device.
For example, a dual-monitor EIS server would create two touch devices
with one region each for the respective monitors - libei-generated
touches would thus fall on the right area of the monitor. Or just one
device with one region if the second screen should be inaccessible.
A relative device may have multiple regions since it can reach all
screens in the layout.
This leaks the screen layout to libei but that is necessary for the
functionality to work. A libei client may need to control devices
through absolute coordinates, and it needs to know where the transitions
from one screen to the next happen:
+-----------++----------------+
|           ||                |
|          B||Q               |
|           |+----------------+
|           |
|          A|P
+-----------+
In the above example, position P is unreachable and a client that
controls input on both screens must know that it cannot transition from
A to P but it can transition from B to Q.
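From the client's point of view this boils down to a reachability check
against the advertised regions; the struct below is illustrative, not
the actual libei type. With the layout above, Q passes the check while P
does not.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative region layout, not the actual libei type. */
    struct sketch_region {
        uint32_t x, y, width, height;
    };

    static bool
    sketch_point_is_reachable(const struct sketch_region *regions, size_t nregions,
                              uint32_t px, uint32_t py)
    {
        for (size_t i = 0; i < nregions; i++) {
            const struct sketch_region *r = &regions[i];

            if (px >= r->x && px < r->x + r->width &&
                py >= r->y && py < r->y + r->height)
                return true;
        }
        return false;
    }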
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
This is useful for debugging, so let's pass it through and let the log
handler decide whether to use it or not.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>