/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
 * Copyright (C) 2007 - 2008 Novell, Inc.
 * Copyright (C) 2007 - 2010 Red Hat, Inc.
 */

#ifndef __NETWORKMANAGER_MANAGER_H__
#define __NETWORKMANAGER_MANAGER_H__

#include "settings/nm-settings-connection.h"
#include "c-list/src/c-list.h"
#include "nm-dbus-manager.h"

#define NM_TYPE_MANAGER (nm_manager_get_type())
#define NM_MANAGER(obj) (G_TYPE_CHECK_INSTANCE_CAST((obj), NM_TYPE_MANAGER, NMManager))
#define NM_MANAGER_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST((klass), NM_TYPE_MANAGER, NMManagerClass))
#define NM_IS_MANAGER(obj) (G_TYPE_CHECK_INSTANCE_TYPE((obj), NM_TYPE_MANAGER))
#define NM_IS_MANAGER_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE((klass), NM_TYPE_MANAGER))
#define NM_MANAGER_GET_CLASS(obj) \
    (G_TYPE_INSTANCE_GET_CLASS((obj), NM_TYPE_MANAGER, NMManagerClass))
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2010-09-27 10:34:56 +02:00
|
|
|
#define NM_MANAGER_VERSION "version"
|
2016-09-15 23:34:24 +03:00
|
|
|
#define NM_MANAGER_CAPABILITIES "capabilities"
|
2010-05-22 08:55:30 -07:00
|
|
|
#define NM_MANAGER_STATE "state"
|
2013-08-13 17:45:34 -04:00
|
|
|
#define NM_MANAGER_STARTUP "startup"
|
2010-05-22 08:55:30 -07:00
|
|
|
#define NM_MANAGER_NETWORKING_ENABLED "networking-enabled"
|
|
|
|
|
#define NM_MANAGER_WIRELESS_ENABLED "wireless-enabled"
|
|
|
|
|
#define NM_MANAGER_WIRELESS_HARDWARE_ENABLED "wireless-hardware-enabled"
|
|
|
|
|
#define NM_MANAGER_WWAN_ENABLED "wwan-enabled"
|
|
|
|
|
#define NM_MANAGER_WWAN_HARDWARE_ENABLED "wwan-hardware-enabled"
|
2011-01-02 17:24:23 -06:00
|
|
|
#define NM_MANAGER_WIMAX_ENABLED "wimax-enabled"
|
|
|
|
|
#define NM_MANAGER_WIMAX_HARDWARE_ENABLED "wimax-hardware-enabled"
|
2022-03-21 10:19:37 +01:00
|
|
|
#define NM_MANAGER_RADIO_FLAGS "radio-flags"
|
2010-05-22 08:55:30 -07:00
|
|
|
#define NM_MANAGER_ACTIVE_CONNECTIONS "active-connections"
|
2013-07-30 16:31:31 -04:00
|
|
|
#define NM_MANAGER_CONNECTIVITY "connectivity"
|
2017-08-09 15:20:04 +08:00
|
|
|
#define NM_MANAGER_CONNECTIVITY_CHECK_AVAILABLE "connectivity-check-available"
|
|
|
|
|
#define NM_MANAGER_CONNECTIVITY_CHECK_ENABLED "connectivity-check-enabled"
|
2019-07-22 15:55:15 +01:00
|
|
|
#define NM_MANAGER_CONNECTIVITY_CHECK_URI "connectivity-check-uri"
|
2013-08-22 13:06:51 -04:00
|
|
|
#define NM_MANAGER_PRIMARY_CONNECTION "primary-connection"
|
2014-10-23 13:56:52 -04:00
|
|
|
#define NM_MANAGER_PRIMARY_CONNECTION_TYPE "primary-connection-type"
|
2013-08-22 13:06:51 -04:00
|
|
|
#define NM_MANAGER_ACTIVATING_CONNECTION "activating-connection"
|
2014-01-08 12:18:33 -06:00
|
|
|
#define NM_MANAGER_DEVICES "devices"
|
2015-06-03 09:15:24 +02:00
|
|
|
#define NM_MANAGER_METERED "metered"
|
2015-07-03 11:06:39 +02:00
|
|
|
#define NM_MANAGER_GLOBAL_DNS_CONFIGURATION "global-dns-configuration"
|
2014-10-06 11:21:54 -05:00
|
|
|
#define NM_MANAGER_ALL_DEVICES "all-devices"
|
2017-10-21 16:05:14 +02:00
|
|
|
#define NM_MANAGER_CHECKPOINTS "checkpoints"
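
/* The defines above name GObject/D-Bus properties of NMManager. A usage
 * sketch (illustrative, not part of the original header):
 *
 *     guint state = NM_STATE_UNKNOWN;
 *
 *     g_object_get (manager, NM_MANAGER_STATE, &state, NULL);
 */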

/* Not exported */
#define NM_MANAGER_SLEEPING "sleeping"

/* Signals */
#define NM_MANAGER_DEVICE_ADDED             "device-added"
#define NM_MANAGER_DEVICE_REMOVED           "device-removed"
#define NM_MANAGER_DEVICE_IFINDEX_CHANGED   "device-ifindex-changed"
#define NM_MANAGER_USER_PERMISSIONS_CHANGED "user-permissions-changed"

#define NM_MANAGER_ACTIVE_CONNECTION_ADDED   "active-connection-added"
#define NM_MANAGER_ACTIVE_CONNECTION_REMOVED "active-connection-removed"
#define NM_MANAGER_CONFIGURE_QUIT            "configure-quit"
#define NM_MANAGER_INTERNAL_DEVICE_ADDED     "internal-device-added"
#define NM_MANAGER_INTERNAL_DEVICE_REMOVED   "internal-device-removed"

GType nm_manager_get_type(void);

/* nm_manager_setup() should only be used by main.c */
NMManager *nm_manager_setup(void);

NMManager *nm_manager_get(void);
#define NM_MANAGER_GET (nm_manager_get())
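
/* Usage sketch (illustrative): main.c creates the singleton once via
 * nm_manager_setup(); everything else obtains it with nm_manager_get() or
 * the NM_MANAGER_GET shorthand:
 *
 *     NMManager *manager = NM_MANAGER_GET;
 *     NMState    state   = nm_manager_get_state (manager);
 */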

gboolean nm_manager_start(NMManager *manager, GError **error);
void     nm_manager_stop(NMManager *manager);
NMState  nm_manager_get_state(NMManager *manager);

const CList *nm_manager_get_active_connections(NMManager *manager);

/* From least recently activated */
#define nm_manager_for_each_active_connection(manager, iter, tmp_list)                        \
    for (tmp_list = nm_manager_get_active_connections(manager),                               \
         iter = c_list_entry(tmp_list->next, NMActiveConnection, active_connections_lst);     \
         ({                                                                                   \
             const gboolean _has_next = (&iter->active_connections_lst != tmp_list);          \
                                                                                              \
             if (!_has_next)                                                                  \
                 iter = NULL;                                                                 \
             _has_next;                                                                       \
         });                                                                                  \
         iter = c_list_entry(iter->active_connections_lst.next,                               \
                             NMActiveConnection,                                              \
                             active_connections_lst))
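
/* Example (a sketch, not from the original header): iterate the active
 * connections, least recently activated first; "ac" is NULL once the loop
 * terminates:
 *
 *     NMActiveConnection *ac;
 *     const CList        *tmp_lst;
 *
 *     nm_manager_for_each_active_connection (self, ac, tmp_lst)
 *         g_print ("%p\n", (gpointer) ac);
 */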

/* From most recently activated */
#define nm_manager_for_each_active_connection_prev(manager, iter, tmp_list)                   \
    for (tmp_list = nm_manager_get_active_connections(manager),                               \
         iter = c_list_entry(tmp_list->prev, NMActiveConnection, active_connections_lst);     \
         ({                                                                                   \
             const gboolean _has_prev = (&iter->active_connections_lst != tmp_list);          \
                                                                                              \
             if (!_has_prev)                                                                  \
                 iter = NULL;                                                                 \
             _has_prev;                                                                       \
         });                                                                                  \
         iter = c_list_entry(iter->active_connections_lst.prev,                               \
                             NMActiveConnection,                                              \
                             active_connections_lst))

/* From least recently activated */
#define nm_manager_for_each_active_connection_safe(manager, iter, tmp_list, iter_safe)        \
    for (tmp_list = nm_manager_get_active_connections(manager), iter_safe = tmp_list->next;  \
         ({                                                                                   \
             if (iter_safe != tmp_list) {                                                     \
                 iter      = c_list_entry(iter_safe, NMActiveConnection,                      \
                                          active_connections_lst);                            \
                 iter_safe = iter_safe->next;                                                 \
             } else                                                                           \
                 iter = NULL;                                                                 \
             (iter != NULL);                                                                  \
         });)
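
/* Sketch (illustrative): the _safe variant caches the next link before the
 * body runs, so the current entry may be unlinked from inside the loop:
 *
 *     NMActiveConnection *ac;
 *     const CList        *tmp_lst;
 *     CList              *safe;
 *
 *     nm_manager_for_each_active_connection_safe (self, ac, tmp_lst, safe) {
 *         // "ac" may be deactivated/removed here
 *     }
 */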

NMSettingsConnection **nm_manager_get_activatable_connections(NMManager *manager,
                                                              gboolean   for_auto_activation,
                                                              gboolean   sort,
                                                              guint     *out_len);
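
/* Call sketch (illustrative; assumes the returned array is owned by the
 * caller and its entries are borrowed):
 *
 *     guint                  len;
 *     NMSettingsConnection **conns;
 *
 *     conns = nm_manager_get_activatable_connections (self, FALSE, TRUE, &len);
 *     // ... use conns[0..len-1] ...
 *     g_free (conns);
 */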

void     nm_manager_write_device_state_all(NMManager *manager);
gboolean nm_manager_write_device_state(NMManager *manager, NMDevice *device, int *out_ifindex);

/* Device handling */
const CList *nm_manager_get_devices(NMManager *manager);

#define nm_manager_for_each_device(manager, iter, tmp_list)                                   \
    for (tmp_list = nm_manager_get_devices(manager),                                          \
         iter = c_list_entry(tmp_list->next, NMDevice, devices_lst);                          \
         ({                                                                                   \
             const gboolean _has_next = (&iter->devices_lst != tmp_list);                     \
                                                                                              \
             if (!_has_next)                                                                  \
                 iter = NULL;                                                                 \
             _has_next;                                                                       \
         });                                                                                  \
         iter = c_list_entry(iter->devices_lst.next, NMDevice, devices_lst))
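
/* Example (a sketch): walk every device currently tracked by the manager:
 *
 *     NMDevice    *device;
 *     const CList *tmp_lst;
 *
 *     nm_manager_for_each_device (self, device, tmp_lst)
 *         g_print ("%s\n", nm_device_get_iface (device));
 */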

#define nm_manager_for_each_device_safe(manager, iter, tmp_list, iter_safe)                   \
    for (tmp_list = nm_manager_get_devices(manager), iter_safe = tmp_list->next;              \
         ({                                                                                   \
             if (iter_safe != tmp_list) {                                                     \
                 iter      = c_list_entry(iter_safe, NMDevice, devices_lst);                  \
                 iter_safe = iter_safe->next;                                                 \
             } else                                                                           \
                 iter = NULL;                                                                 \
             (iter != NULL);                                                                  \
         });)

NMDevice *nm_manager_get_device_by_ifindex(NMManager *manager, int ifindex);
NMDevice *nm_manager_get_device_by_path(NMManager *manager, const char *path);

guint32
nm_manager_device_route_metric_reserve(NMManager *self, int ifindex, NMDeviceType device_type);

void nm_manager_device_route_metric_clear(NMManager *self, int ifindex);
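
/* Pairing sketch (illustrative): on activation a device reserves a route
 * metric that is unique among activated devices (e.g. a first ethernet
 * device gets 100, the next one 101, ...) and releases it on deactivation:
 *
 *     guint32 metric;
 *
 *     metric = nm_manager_device_route_metric_reserve (self, ifindex,
 *                                                      NM_DEVICE_TYPE_ETHERNET);
 *     // ... configure routes using "metric" ...
 *     nm_manager_device_route_metric_clear (self, ifindex);
 */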

char *nm_manager_get_connection_iface(NMManager     *self,
                                      NMConnection  *connection,
                                      NMDevice     **out_parent,
                                      const char   **out_parent_spec,
                                      GError       **error);

const char *nm_manager_iface_for_uuid(NMManager *self, const char *uuid);

NMActiveConnection *nm_manager_activate_connection(NMManager             *manager,
                                                   NMSettingsConnection  *connection,
                                                   NMConnection          *applied_connection,
                                                   const char            *specific_object,
                                                   NMDevice              *device,
                                                   NMAuthSubject         *subject,
                                                   NMActivationType       activation_type,
                                                   NMActivationReason     activation_reason,
                                                   NMActivationStateFlags initial_state_flags,
                                                   GError               **error);
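
/* Call sketch (illustrative; every argument other than the manager and the
 * settings connection is assumed from the caller's context):
 *
 *     NMActiveConnection *ac;
 *
 *     ac = nm_manager_activate_connection (self, sett_conn, NULL, NULL, device,
 *                                          subject, NM_ACTIVATION_TYPE_MANAGED,
 *                                          NM_ACTIVATION_REASON_USER_REQUEST,
 *                                          NM_ACTIVATION_STATE_FLAG_NONE, &error);
 */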

gboolean nm_manager_deactivate_connection(NMManager          *manager,
                                          NMActiveConnection *active,
                                          NMDeviceStateReason reason,
                                          GError            **error);

void nm_manager_set_capability(NMManager *self, NMCapability cap);
void nm_manager_emit_device_ifindex_changed(NMManager *self, NMDevice *device);

NMDevice *nm_manager_get_device(NMManager *self, const char *ifname, NMDeviceType device_type);
gboolean  nm_manager_remove_device(NMManager *self, const char *ifname, NMDeviceType device_type);

void nm_manager_dbus_set_property_handle(NMDBusObject                      *obj,
                                         const NMDBusInterfaceInfoExtended *interface_info,
                                         const NMDBusPropertyInfoExtended  *property_info,
                                         GDBusConnection                   *connection,
                                         const char                        *sender,
                                         GDBusMethodInvocation             *invocation,
                                         GVariant                          *value,
                                         gpointer                           user_data);

NMMetered nm_manager_get_metered(NMManager *self);

void nm_manager_notify_device_availability_maybe_changed(NMManager *self);

struct _NMDnsManager;

struct _NMDnsManager *nm_manager_get_dns_manager(NMManager *self);
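
/* Sketch (illustrative): NMManager owns the NMDnsManager reference, so a
 * signal subscriber can rely on its lifetime instead of keeping its own ref:
 *
 *     struct _NMDnsManager *dns_mgr = nm_manager_get_dns_manager (self);
 */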

/*****************************************************************************/

void nm_manager_device_auth_request(NMManager                     *self,
                                    NMDevice                      *device,
                                    GDBusMethodInvocation         *context,
                                    NMConnection                  *connection,
                                    const char                    *permission,
                                    gboolean                       allow_interaction,
                                    GCancellable                  *cancellable,
                                    NMManagerDeviceAuthRequestFunc callback,
                                    gpointer                       user_data);
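
/* Async usage sketch (illustrative): the callback is never invoked
 * synchronously, and a cancellable should be provided so the caller can
 * abort; "auth_done_cb" is a hypothetical NMManagerDeviceAuthRequestFunc:
 *
 *     nm_manager_device_auth_request (self, device, context, connection,
 *                                     NM_AUTH_PERMISSION_NETWORK_CONTROL,
 *                                     TRUE, cancellable, auth_done_cb, NULL);
 */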

void nm_manager_unblock_failed_ovs_interfaces(NMManager *self);

#endif /* __NETWORKMANAGER_MANAGER_H__ */