NetworkManager/src/core/nm-active-connection.h

/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) 2008 - 2012 Red Hat, Inc.
*/
#ifndef __NETWORKMANAGER_ACTIVE_CONNECTION_H__
#define __NETWORKMANAGER_ACTIVE_CONNECTION_H__

#include "c-list/src/c-list.h"
#include "nm-connection.h"
#include "nm-dbus-object.h"
#define NM_TYPE_ACTIVE_CONNECTION (nm_active_connection_get_type())
#define NM_ACTIVE_CONNECTION(obj) \
    (_NM_G_TYPE_CHECK_INSTANCE_CAST((obj), NM_TYPE_ACTIVE_CONNECTION, NMActiveConnection))
#define NM_ACTIVE_CONNECTION_CLASS(klass) \
    (G_TYPE_CHECK_CLASS_CAST((klass), NM_TYPE_ACTIVE_CONNECTION, NMActiveConnectionClass))
#define NM_IS_ACTIVE_CONNECTION(obj) (G_TYPE_CHECK_INSTANCE_TYPE((obj), NM_TYPE_ACTIVE_CONNECTION))
#define NM_IS_ACTIVE_CONNECTION_CLASS(klass) \
    (G_TYPE_CHECK_CLASS_TYPE((klass), NM_TYPE_ACTIVE_CONNECTION))
#define NM_ACTIVE_CONNECTION_GET_CLASS(obj) \
    (G_TYPE_INSTANCE_GET_CLASS((obj), NM_TYPE_ACTIVE_CONNECTION, NMActiveConnectionClass))

/* D-Bus Exported Properties */
#define NM_ACTIVE_CONNECTION_CONNECTION "connection"
#define NM_ACTIVE_CONNECTION_ID "id"
#define NM_ACTIVE_CONNECTION_UUID "uuid"
#define NM_ACTIVE_CONNECTION_TYPE "type"
#define NM_ACTIVE_CONNECTION_SPECIFIC_OBJECT "specific-object"
#define NM_ACTIVE_CONNECTION_DEVICES "devices"
#define NM_ACTIVE_CONNECTION_STATE "state"
#define NM_ACTIVE_CONNECTION_STATE_FLAGS "state-flags"
#define NM_ACTIVE_CONNECTION_DEFAULT "default"
#define NM_ACTIVE_CONNECTION_IP4_CONFIG "ip4-config"
#define NM_ACTIVE_CONNECTION_DHCP4_CONFIG "dhcp4-config"
#define NM_ACTIVE_CONNECTION_DEFAULT6 "default6"
#define NM_ACTIVE_CONNECTION_IP6_CONFIG "ip6-config"
#define NM_ACTIVE_CONNECTION_DHCP6_CONFIG "dhcp6-config"
#define NM_ACTIVE_CONNECTION_VPN "vpn"
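/* "controller" is the newer name for the "master" property; both are
 * exported, with "master" kept as a deprecated alias. */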
#define NM_ACTIVE_CONNECTION_MASTER "master"
#define NM_ACTIVE_CONNECTION_CONTROLLER "controller"
/* Internal non-exported properties */
#define NM_ACTIVE_CONNECTION_INT_SETTINGS_CONNECTION "int-settings-connection"
#define NM_ACTIVE_CONNECTION_INT_APPLIED_CONNECTION "int-applied-connection"
#define NM_ACTIVE_CONNECTION_INT_DEVICE "int-device"
#define NM_ACTIVE_CONNECTION_INT_SUBJECT "int-subject"
#define NM_ACTIVE_CONNECTION_INT_CONTROLLER "int-controller"
#define NM_ACTIVE_CONNECTION_INT_CONTROLLER_READY "int-controller-ready"
#define NM_ACTIVE_CONNECTION_INT_ACTIVATION_TYPE "int-activation-type"
#define NM_ACTIVE_CONNECTION_INT_ACTIVATION_REASON "int-activation-reason"
/* Signals */
#define NM_ACTIVE_CONNECTION_STATE_CHANGED "state-changed"
/* Internal signals*/
#define NM_ACTIVE_CONNECTION_DEVICE_CHANGED "device-changed"
#define NM_ACTIVE_CONNECTION_DEVICE_METERED_CHANGED "device-metered-changed"
#define NM_ACTIVE_CONNECTION_PARENT_ACTIVE "parent-active"
struct _NMActiveConnectionPrivate;
struct _NMActiveConnection {
    NMDBusObject parent;

    struct _NMActiveConnectionPrivate *_priv;

    /* active connection can be tracked in a list by NMManager. This is
     * the list node. */
    CList active_connections_lst;
};
typedef struct {
    NMDBusObjectClass parent;

    /* re-emits device state changes as a convenience for subclasses for
     * device states >= DISCONNECTED.
     */
    void (*device_state_changed)(NMActiveConnection *connection,
                                 NMDevice *device,
                                 NMDeviceState new_state,
                                 NMDeviceState old_state,
                                 NMDeviceStateReason reason);

    void (*controller_failed)(NMActiveConnection *connection);

    void (*device_changed)(NMActiveConnection *connection,
                           NMDevice *new_device,
                           NMDevice *old_device);

    void (*device_metered_changed)(NMActiveConnection *connection, NMMetered new_value);

    void (*parent_active)(NMActiveConnection *connection);
} NMActiveConnectionClass;
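/* Version id tracking the active connection's applied connection; presumably
 * bumped whenever the applied connection changes in place (e.g. on Reapply). */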
guint64 nm_active_connection_version_id_get(NMActiveConnection *self);
guint64 nm_active_connection_version_id_bump(NMActiveConnection *self);

GType nm_active_connection_get_type(void);
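/* Result callback for nm_active_connection_authorize(): @success reports
 * whether authorization succeeded; on failure, @error_desc gives a
 * human-readable reason. */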
typedef void (*NMActiveConnectionAuthResultFunc)(NMActiveConnection *self,
                                                 gboolean success,
                                                 const char *error_desc,
                                                 gpointer user_data);

void nm_active_connection_authorize(NMActiveConnection *self,
                                    NMConnection *initial_connection,
                                    NMActiveConnectionAuthResultFunc result_func,
                                    gpointer user_data);
NMSettingsConnection *nm_active_connection_get_settings_connection(NMActiveConnection *self);
NMConnection *nm_active_connection_get_applied_connection(NMActiveConnection *self);

NMSettingsConnection *_nm_active_connection_get_settings_connection(NMActiveConnection *self);

void nm_active_connection_set_settings_connection(NMActiveConnection *self,
                                                  NMSettingsConnection *connection);
gboolean
nm_active_connection_has_unmodified_applied_connection(NMActiveConnection *self,
                                                       NMSettingCompareFlags compare_flags);
const char *nm_active_connection_get_settings_connection_id(NMActiveConnection *self);

const char *nm_active_connection_get_specific_object(NMActiveConnection *self);
void nm_active_connection_set_specific_object(NMActiveConnection *self,
                                              const char *specific_object);
void
nm_active_connection_set_default(NMActiveConnection *self, int addr_family, gboolean is_default);
gboolean nm_active_connection_get_default(NMActiveConnection *self, int addr_family);
NMActiveConnectionState nm_active_connection_get_state(NMActiveConnection *self);

void nm_active_connection_set_state(NMActiveConnection *self,
                                    NMActiveConnectionState state,
                                    NMActiveConnectionStateReason reason);

void nm_active_connection_set_state_fail(NMActiveConnection *active,
                                         NMActiveConnectionStateReason reason,
                                         const char *error_desc);
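/* Selects the IPv4 or IPv6 "ready" activation state flag based on the
 * address family. */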
#define NM_ACTIVATION_STATE_FLAG_IP_READY_X(IS_IPv4) \
    ((IS_IPv4) ? NM_ACTIVATION_STATE_FLAG_IP4_READY : NM_ACTIVATION_STATE_FLAG_IP6_READY)
NMActivationStateFlags nm_active_connection_get_state_flags(NMActiveConnection *self);

void nm_active_connection_set_state_flags_full(NMActiveConnection *self,
                                               NMActivationStateFlags state_flags,
                                               NMActivationStateFlags mask);
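/* Convenience wrappers around nm_active_connection_set_state_flags_full():
 * the setter below passes the flags as their own mask; the clear variant
 * passes NM_ACTIVATION_STATE_FLAG_NONE with the flags as mask. */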
static inline void
nm_active_connection_set_state_flags(NMActiveConnection *self, NMActivationStateFlags state_flags)
{
    nm_active_connection_set_state_flags_full(self, state_flags, state_flags);
}
static inline void
nm_active_connection_set_state_flags_clear(NMActiveConnection *self,
                                           NMActivationStateFlags state_flags)
{
    nm_active_connection_set_state_flags_full(self, NM_ACTIVATION_STATE_FLAG_NONE, state_flags);
}
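
/* Usage sketch (hypothetical caller): marking the IPv4 configuration of an
 * active connection as ready:
 *
 *   nm_active_connection_set_state_flags(self,
 *                                        NM_ACTIVATION_STATE_FLAG_IP_READY_X(TRUE));
 */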
NMDevice *nm_active_connection_get_device(NMActiveConnection *self);
gboolean nm_active_connection_set_device(NMActiveConnection *self, NMDevice *device);
NMAuthSubject *nm_active_connection_get_subject(NMActiveConnection *self);
gboolean nm_active_connection_get_user_requested(NMActiveConnection *self);
NMActiveConnection *nm_active_connection_get_controller(NMActiveConnection *self);
gboolean nm_active_connection_get_controller_ready(NMActiveConnection *self);
void nm_active_connection_set_controller(NMActiveConnection *self, NMActiveConnection *controller);
void nm_active_connection_set_controller_dev(NMActiveConnection *self, NMDevice *controller_dev);
void nm_active_connection_set_parent(NMActiveConnection *self, NMActiveConnection *parent);
NMActivationType nm_active_connection_get_activation_type(NMActiveConnection *self);
NMActivationReason nm_active_connection_get_activation_reason(NMActiveConnection *self);
NMKeepAlive *nm_active_connection_get_keep_alive(NMActiveConnection *self);
void nm_active_connection_clear_secrets(NMActiveConnection *self);
#endif /* __NETWORKMANAGER_ACTIVE_CONNECTION_H__ */