All of these follow the same steps, so let's use a macro to limit our
code to the bits we actually need to do.
This requires renaming parts of the protocol itself to match the
macro's expectations - not the worst thing, since it makes the protocol
more consistent.
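A rough sketch of the idea - the names here are illustrative, not the
actual generated code:

  /* Hypothetical sketch: one macro generates the boilerplate that every
   * request currently spells out by hand. */
  #define DECLARE_REQUEST(name_) \
          static int \
          request_##name_(struct ei *ei, const struct name_ *msg) \
          { \
                  /* same steps every time: marshal, log, queue */ \
                  return queue_message(ei, #name_, msg, sizeof(*msg)); \
          }

  DECLARE_REQUEST(connect);
  DECLARE_REQUEST(disconnect);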
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
This keeps most of the current structure but gets rid of client-side
keymaps (which have been broken since the switch to server-side devices
anyway). The new approach: clients get a keymap (or NULL) from the
server; if they don't like it, they have to do the transformation on
their side.
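Client-side that boils down to roughly this sketch (function names are
illustrative, not the real API):

  int fd = ei_device_keymap_get_fd(device);  /* illustrative name */
  if (fd < 0) {
          /* No keymap from the server: fall back to a local keymap and
           * translate keycodes on our side before sending events. */
          use_local_keymap(device);
  } else {
          /* Use the server's keymap as-is, no translation needed. */
          load_xkb_keymap_from_fd(fd);
  }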
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
This isn't something libei itself uses, but clients like synergy need
to know about it to map relative pointer motion from one host onto the
right physical pixel on another host.
This is required for mutter in the x11-compat mode where a 4k screen is
logically twice the size of a 2k screen, despite having the same
physical size.
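For illustration only (none of this is libei API), such a client would
scale motion by the pixels-per-mm ratio of the two hosts:

  /* Hypothetical sketch: scale relative motion so that it covers the
   * same physical distance on the target host. A 4k and a 2k screen of
   * the same physical size differ by a factor of 2 in pixels-per-mm. */
  double ppm_src = src_width_px / src_width_mm;
  double ppm_dst = dst_width_px / dst_width_mm;
  double dx_dst  = dx_src * ppm_dst / ppm_src;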
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
This is required for supporting synergy/barrier and similar clients.
Replacing the touch and pointer ranges, we now have server-defined
rectangular regions that specify the active zones for this device.
For example, a dual-monitor EIS server would create two touch devices
with one region each for the respective monitor - libei-generated
touches would thus fall onto the correct area of the correct monitor.
Or just one device with one region if the second screen should be
inaccessible.
A relative device may have multiple regions since it can reach all
screens in the layout.
This leaks the screen layout to libei but that is necessary for the
functionality to work. A libei client may need to control devices
through absolute coordinates, and it needs to know where the
transitions from one screen to the next happen:
+-----------++----------------+
| || |
| B||Q |
| |+----------------+
| |
| A|P
+-----------+
In the above example, position P is unreachable and a client that
controls input on both screens must know that it cannot transition from
A to P but it can transition from B to Q.
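A client-side check for this could look roughly like the sketch below
(the struct layout is illustrative, not the protocol's):

  #include <stdbool.h>
  #include <stddef.h>

  /* Hypothetical sketch: a region is a server-defined rectangle in
   * logical coordinates; a position is reachable iff one of the
   * device's regions contains it. */
  struct region { double x, y, w, h; };

  static bool
  position_reachable(const struct region *regions, size_t nregions,
                     double x, double y)
  {
          for (size_t i = 0; i < nregions; i++) {
                  const struct region *r = &regions[i];
                  if (x >= r->x && x < r->x + r->w &&
                      y >= r->y && y < r->y + r->h)
                          return true;  /* e.g. Q above */
          }
          return false;  /* e.g. P above */
  }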
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
This changes the protocol so that it is the EIS implementation that
creates devices within a seat.
A client now "binds" to a seat and the EIS implementation creates
devices matching the requested capabilities. A client can close a device
if it no longer wants those but otherwise everything (including pointer
ranges) is handled by the server.
This is one giant patch because changes at the protocol level cannot
easily be broken out into smaller patches. Some FIXMEs are left which
will be handled in follow-up patches, e.g. the keymap handling is
basically broken right now.
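The resulting client flow, sketched with illustrative names (not the
final API):

  /* Hypothetical sketch: the client only requests capabilities, the
   * EIS implementation decides which devices actually exist. */
  ei_seat_bind(seat, EI_CAP_POINTER | EI_CAP_KEYBOARD);

  /* ...later, in the event loop, matching devices simply show up: */
  switch (ei_event_get_type(event)) {
  case EI_EVENT_DEVICE_ADDED:
          device = ei_event_get_device(event);
          if (!wanted(device))
                  ei_device_close(device);  /* unwanted devices can go */
          break;
  }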
Fixes #7
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
After CONNECT, the EIS implementation needs to add one or more seats. The
libei client can only create devices within those seats. This mirrors
the Wayland hierarchy as well as the X.Org one.
The seat has a set of allowed capabilities, so the client knows ahead of time
when it may not be possible to create a specific device.
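On the EIS side this amounts to something like the following sketch
(names illustrative):

  /* Hypothetical sketch: after CONNECT, advertise a seat together with
   * the capabilities a client is allowed to request within it. */
  struct eis_seat *seat = eis_client_new_seat(client, "default");
  eis_seat_allow_capability(seat, EIS_CAP_POINTER);
  eis_seat_allow_capability(seat, EIS_CAP_KEYBOARD);
  eis_seat_add(seat);  /* the client may now create devices in here */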
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Client-side, the approach is a managed touch object rather than passing
the touchid around. This is intentional: it allows for a stackable API
in the future if we need to add things like pressure or major/minor to
it.
On the server side the touches are managed through the event object anyway, so
we don't need the same abstraction there.
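In client code the managed object reads roughly like this sketch
(illustrative names):

  /* Hypothetical sketch: the touch object carries its ID internally
   * and leaves room to stack properties onto it later. */
  struct ei_touch *touch = ei_device_touch_new(device);
  ei_touch_down(touch, 0.5, 0.5);
  ei_touch_motion(touch, 0.6, 0.5);
  /* future, stackable: ei_touch_set_pressure(touch, 0.8); */
  ei_touch_up(touch);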
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
This was already spelled out in the documentation but just not yet
implemented. The new starting state for any device added by EIS is
"suspended"; the server needs to explicitly resume it before events are
accepted.
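Server-side this means (sketch, illustrative names):

  /* Hypothetical sketch: a freshly added device starts out suspended,
   * nothing is accepted until the server resumes it. */
  eis_device_add(device);     /* state: suspended */
  eis_device_resume(device);  /* only now are client events accepted */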
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
The original idea of 1/1000 of a pixel was to allow subpixel precision
while sending fixed-width integers down the wire. Let's not care about
that and use doubles instead.
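For comparison, the dropped scheme would have looked like this sketch:

  /* Hypothetical sketch of the dropped encoding: a fixed-width integer
   * in units of 1/1000 pixel instead of a double on the wire. */
  int32_t wire = (int32_t)(x * 1000.0);  /* encode: 12.345 -> 12345 */
  double  back = wire / 1000.0;          /* decode: 12345 -> 12.345 */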
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
The protocol itself could work as originally described here, but
there's a stumbling block: the decision on whether to accept a device
is made by the caller through EIS_EVENT_DEVICE_ADDED and the following
eis_device_connect() call. We cannot process any events from that
device until that call is complete, and that effectively disallows
batch submission of requests.
To allow batching we'd have to pause the protocol, but that means
missing out on other devices (and their events) and disconnect events.
The alternative would be for libeis to peek at incoming requests and
sort them by device ID so that we only pause one device's stream - but
then we're also mangling the device event order and potentially
triggering all sorts of side effects.
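Sketched out, the blocking spot is here (illustrative code):

  switch (eis_event_get_type(event)) {
  case EIS_EVENT_DEVICE_ADDED:
          deliver_to_caller(event);  /* caller accepts or rejects */
          /* Until the caller has called eis_device_connect() we may
           * not parse any queued requests for this device - so the
           * client cannot batch device setup with its first events. */
          break;
  }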
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
For the Portal case, we'll have the portal open the sockets for us and then
(depending on policy) restrict what the client can do. Then the socket can be
passed to the client with e.g. keyboards disabled and the client is none the
wiser (other than that the server will reject any keyboard caps).
Since the portal doesn't need an EI context, the configuration is a
separate small library.
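Conceptually, from the portal's point of view (all names illustrative):

  #include <sys/socket.h>

  /* Hypothetical sketch: the portal opens the socket pair, writes the
   * policy restrictions onto the server end and hands the client its
   * end - no full EI context required. */
  int fds[2];
  socketpair(AF_UNIX, SOCK_STREAM, 0, fds);
  ei_config_disable_capability(fds[0], "keyboard");  /* illustrative */
  send_fd_to_client(fds[1]);      /* client is none the wiser */
  send_fd_to_compositor(fds[0]);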
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
The main purpose of that was (plain-text protocol) debugging. With the
current intention to "preload" a connection with restrictions, having
the server initiate a connection is not useful.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
protobuf relies on external framing and exact buffer lengths to parse things
correctly. So let's provide that by sending a fixed-length Frame message
before every real message.
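On the reader side this boils down to (sketch, illustrative names):

  /* Hypothetical sketch: the fixed-length Frame tells the reader
   * exactly how many bytes the following protobuf message occupies. */
  for (;;) {
          read_exact(fd, buf, FRAME_LENGTH);  /* fixed-length Frame */
          size_t msglen = frame_get_message_length(buf);
          read_exact(fd, buf, msglen);        /* exact buffer length */
          handle_protobuf_message(buf, msglen);
  }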
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Plain text was useful for the initial implementation, where the
counterpart was netcat, but now that both sides are in place, protobuf
is a much more convenient way to handle a frequently-changing protocol.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>