/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) 2016 Red Hat, Inc.
*/
#ifndef __NETWORKMANAGER_CHECKPOINT_H__
#define __NETWORKMANAGER_CHECKPOINT_H__
/* core/dbus: rework D-Bus implementation to use lower-layer GDBusConnection API
 *
 * Previously, we used the generated GDBusInterfaceSkeleton types and glued
 * them via the NMExportedObject base class to our NM types. We also used
 * GDBusObjectManagerServer.
 *
 * Don't do that anymore. The resulting code was more complicated despite (or
 * because of?) using generated classes. It was hard to understand, complex,
 * had ordering issues, and carried a runtime and memory overhead.
 *
 * This patch refactors this entirely and uses the lower-layer API
 * GDBusConnection directly. It replaces the generated code,
 * GDBusInterfaceSkeleton, and GDBusObjectManagerServer. All of this is now
 * done by NMDBusObject and NMDBusManager together with static descriptor
 * instances of type GDBusInterfaceInfo.
 *
 * This adds a net plus of more than 1300 lines of hand-written code. I claim
 * that this implementation is easier to understand. Note that previously we
 * also required extensive and complex glue code to bind our objects to the
 * generated skeleton objects. Instead, we now glue our objects directly to
 * GDBusConnection. The result is more immediate and gets rid of layers of
 * code in between.
 *
 * Now that the D-Bus glue is more under our control, we can address issues
 * and bottlenecks better, instead of adding code to bend the generated
 * skeletons to our needs.
 *
 * Note that the current implementation supports only one D-Bus connection.
 * That was effectively the case already, although there were places (and
 * still are) where the code pretends it could also support connections from
 * a private socket. We dropped private-socket support mainly because it was
 * unused, untested and buggy, but also because GDBusObjectManagerServer
 * could not export the same objects on multiple connections. It would now be
 * rather straightforward to fix that and re-introduce an ObjectManager on
 * each private connection, but this commit doesn't do that yet, and the new
 * code intentionally supports only one D-Bus connection.
 *
 * Also, the D-Bus startup was simplified. There is no retry: either
 * nm_dbus_manager_start() succeeds, or it detects the initrd case. In the
 * initrd case, the bus manager never tries to connect to D-Bus. Since the
 * initrd scenario is not yet used/tested, this is good enough for the
 * moment. It could easily be extended later, for example by polling whether
 * the system bus appears (as was done previously). Restart of the D-Bus
 * daemon isn't supported either -- just like before.
 *
 * Note how NMDBusManager now implements the ObjectManager D-Bus interface
 * directly.
 *
 * Also, this fixes race issues in the server by no longer delaying
 * PropertiesChanged signals. NMExportedObject would collect changed
 * properties and send the signal out in idle_emit_properties_changed() on
 * idle. This messes up the ordering of change events w.r.t. other signals
 * and events on the bus. Note that not only NMExportedObject messed up the
 * ordering; the generated code would also hook into notify() and process
 * change events in an idle handler, exhibiting the same ordering issue.
 *
 * No longer do that. PropertiesChanged signals are now sent right away by
 * hooking into dispatch_properties_changed(). This means that changes to a
 * property in quick succession are no longer combined, and a signal is
 * guaranteed to be emitted for each individual state. Quite possibly we now
 * emit more PropertiesChanged signals than before. However, we are now able
 * to group a set of changes by using the standard g_object_freeze_notify()/
 * g_object_thaw_notify(). We probably should make more use of that.
 *
 * Also, now that our signals are all handled in the right order, we might
 * find places where we still emit them in the wrong order. But that is then
 * due to the order in which our GObjects emit signals, not due to ill
 * behavior of the D-Bus glue. Possibly we need to identify such ordering
 * issues and fix them.
 *
 * Numbers (for contrib/rpm --without debug on x86_64):
 *
 * - the patch changes the code size of NetworkManager:
 *     - 2809360 bytes
 *     + 2537528 bytes (-9.7%)
 *
 * - runtime measurements are harder because there is a large variance during
 *   testing; in other words, the numbers are not reproducible. Currently,
 *   the implementation performs no caching of GVariants at all, but it would
 *   be rather simple to add, should that turn out to be useful. Anyway,
 *   without a strong claim, it seems the new form tends to perform slightly
 *   better. That would be no surprise.
 *
 *     $ time (for i in {1..1000}; do nmcli >/dev/null || break; echo -n .; done)
 *     - real 1m39.355s
 *     + real 1m37.432s
 *
 *     $ time (for i in {1..2000}; do busctl call org.freedesktop.NetworkManager /org/freedesktop org.freedesktop.DBus.ObjectManager GetManagedObjects > /dev/null || break; echo -n .; done)
 *     - real 0m26.843s
 *     + real 0m25.281s
 *
 * - regarding RSS size, just looking at the processes under similar
 *   conditions doesn't show a large difference. On my system both versions
 *   consume about 19 MB RSS, with the new one slightly smaller:
 *     - 19356 RSS
 *     + 18660 RSS
 */
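
/* A minimal sketch of the grouping mentioned above, using the stock GObject
 * notify freeze/thaw API (the update function is hypothetical and not part
 * of this header):
 *
 *     static void
 *     checkpoint_update(NMCheckpoint *self)
 *     {
 *         g_object_freeze_notify(G_OBJECT(self));
 *
 *         // ... update several properties; each notification would normally
 *         // emit its own PropertiesChanged from dispatch_properties_changed()
 *         g_object_notify(G_OBJECT(self), NM_CHECKPOINT_CREATED);
 *         g_object_notify(G_OBJECT(self), NM_CHECKPOINT_ROLLBACK_TIMEOUT);
 *
 *         // queued notifications are dispatched together on thaw
 *         g_object_thaw_notify(G_OBJECT(self));
 *     }
 */
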
#include "nm-dbus-object.h"
#include "nm-dbus-interface.h"
#define NM_TYPE_CHECKPOINT (nm_checkpoint_get_type())
#define NM_CHECKPOINT(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj), NM_TYPE_CHECKPOINT, NMCheckpoint))
#define NM_CHECKPOINT_CLASS(klass) \
    (G_TYPE_CHECK_CLASS_CAST((klass), NM_TYPE_CHECKPOINT, NMCheckpointClass))
#define NM_IS_CHECKPOINT(obj) (G_TYPE_CHECK_INSTANCE_TYPE((obj), NM_TYPE_CHECKPOINT))
#define NM_IS_CHECKPOINT_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE((klass), NM_TYPE_CHECKPOINT))
#define NM_CHECKPOINT_GET_CLASS(obj) \
    (G_TYPE_INSTANCE_GET_CLASS((obj), NM_TYPE_CHECKPOINT, NMCheckpointClass))
#define NM_CHECKPOINT_DEVICES          "devices"
#define NM_CHECKPOINT_CREATED          "created"
#define NM_CHECKPOINT_ROLLBACK_TIMEOUT "rollback-timeout"
typedef struct _NMCheckpointPrivate NMCheckpointPrivate;
typedef struct {
    NMDBusObject         parent;
    NMCheckpointPrivate *_priv;
    CList                checkpoints_lst;
} NMCheckpoint;
typedef struct _NMCheckpointClass NMCheckpointClass;
GType nm_checkpoint_get_type(void);
NMCheckpoint *nm_checkpoint_new(NMManager              *manager,
                                GPtrArray              *devices,
                                guint32                 rollback_timeout,
                                NMCheckpointCreateFlags flags);
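
/* A minimal usage sketch (hypothetical caller code; `manager` and `device`
 * are assumed to exist, and ownership/lifetime handling is omitted):
 *
 *     GPtrArray *devices = g_ptr_array_new();
 *
 *     g_ptr_array_add(devices, device);  // the NMDevice(s) to snapshot
 *
 *     // snapshot the devices and roll back automatically after 60 seconds
 *     checkpoint = nm_checkpoint_new(manager, devices, 60, NM_CHECKPOINT_CREATE_FLAG_NONE);
 */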
typedef void (*NMCheckpointTimeoutCallback)(NMCheckpoint *self, gpointer user_data);
/* checkpoint: allow overlapping checkpoints
 *
 * Introduce a new flag NM_CHECKPOINT_CREATE_FLAG_ALLOW_OVERLAPPING that
 * allows the creation of overlapping checkpoints. Before, and by default,
 * checkpoints that reference the same device conflict, and creating such a
 * checkpoint fails.
 *
 * Now, allow this. But during rollback, automatically destroy all
 * overlapping checkpoints that were created after the checkpoint that is
 * about to be rolled back.
 *
 * With this, you can create a series of checkpoints and roll them back
 * individually, with the restriction that once you have rolled back to an
 * older checkpoint, you can no longer roll *forward* to a younger one.
 *
 * What this implies, and what is new here, is that a checkpoint may be
 * automatically destroyed by NetworkManager before its timeout expires.
 * When the user later tries to manually destroy or roll back such a
 * checkpoint, that fails because the checkpoint no longer exists.
 */
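
/* A client-side sketch of the flag in use, via libnm's async call (`client`,
 * the `devices` array, and the ready callback are assumed here; an empty
 * device array selects all devices):
 *
 *     nm_client_checkpoint_create(client,
 *                                 devices,  // empty GPtrArray = all devices
 *                                 300,      // rollback timeout in seconds
 *                                 NM_CHECKPOINT_CREATE_FLAG_ALLOW_OVERLAPPING,
 *                                 NULL,     // GCancellable
 *                                 checkpoint_created_cb,
 *                                 NULL);
 */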
void nm_checkpoint_log_destroy(NMCheckpoint *self);
void nm_checkpoint_set_timeout_callback(NMCheckpoint               *self,
                                        NMCheckpointTimeoutCallback callback,
                                        gpointer                    user_data);
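
/* A sketch of wiring up the expiry notification (the callback body is
 * hypothetical):
 *
 *     static void
 *     checkpoint_timed_out(NMCheckpoint *checkpoint, gpointer user_data)
 *     {
 *         // the rollback timeout has expired; react here
 *     }
 *
 *     nm_checkpoint_set_timeout_callback(checkpoint, checkpoint_timed_out, NULL);
 */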
GVariant *nm_checkpoint_rollback(NMCheckpoint *self);
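
/* The rollback result corresponds to what the CheckpointRollback D-Bus
 * method returns: a dictionary of type "a{su}" mapping each device's D-Bus
 * path to a per-device result code. A consumption sketch, assuming that
 * layout (`rollback_result` is a hypothetical variable holding the return
 * value):
 *
 *     GVariantIter iter;
 *     const char  *path;
 *     guint32      result;
 *
 *     g_variant_iter_init(&iter, rollback_result);
 *     while (g_variant_iter_next(&iter, "{&su}", &path, &result))
 *         g_print("%s -> %u\n", path, result);
 */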
/* checkpoint: allow resetting the rollback timeout via D-Bus
 *
 * This allows adjusting the timeout of an existing checkpoint. The main use
 * case of checkpoints is to have a fail-safe when configuring the network
 * remotely. By allowing the timeout to be reset, the user can perform a
 * series of actions and keep bumping the timeout. That way, the entire
 * series is still guarded by the same checkpoint, but the user can start
 * with a short timeout and re-adjust it along the way.
 *
 * The libnm API only implements the async form (at least for now). Sync
 * methods are fundamentally wrong with D-Bus, and a sync form is probably
 * not needed. Also, follow the glib convention, where the async form
 * doesn't carry an _async name suffix. Also, accept a D-Bus path as
 * argument, not an NMCheckpoint instance. The libnm API should not be more
 * restricted than the underlying D-Bus API. It would be cumbersome to
 * require the user to look up the NMCheckpoint instance first, especially
 * since libnm doesn't provide an efficient or convenient lookup-by-path
 * method. On the other hand, retrieving the path from an NMCheckpoint
 * instance is always possible.
 */
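
/* The corresponding client-side call in libnm (async only, as noted above;
 * `client`, `checkpoint_path`, and the ready callback are assumed here):
 *
 *     nm_client_checkpoint_adjust_rollback_timeout(client,
 *                                                  checkpoint_path,  // a D-Bus path, not an NMCheckpoint
 *                                                  300,              // restart the timeout: 300 seconds from now
 *                                                  NULL,             // GCancellable
 *                                                  adjust_done_cb,
 *                                                  NULL);
 */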
void nm_checkpoint_adjust_rollback_timeout(NMCheckpoint *self, guint32 add_timeout);
NMDevice *
nm_checkpoint_includes_devices(NMCheckpoint *self, NMDevice *const *devices, guint n_devices);
NMDevice *nm_checkpoint_includes_devices_of(NMCheckpoint *self, NMCheckpoint *cp_for_devices);
#endif /* __NETWORKMANAGER_CHECKPOINT_H__ */