NetworkManager/libnm/nm-active-connection.c

// SPDX-License-Identifier: LGPL-2.1+
/*
* Copyright (C) 2007 - 2014 Red Hat, Inc.
* Copyright (C) 2008 Novell, Inc.
*/
#include "nm-default.h"
#include "nm-active-connection.h"
#include "nm-dbus-interface.h"
#include "nm-object-private.h"
#include "nm-core-internal.h"
#include "nm-device.h"
#include "nm-connection.h"
#include "nm-vpn-connection.h"
#include "nm-dbus-helpers.h"
#include "nm-dhcp4-config.h"
#include "nm-dhcp6-config.h"
#include "nm-ip4-config.h"
#include "nm-ip6-config.h"
#include "nm-remote-connection.h"
/*****************************************************************************/
NM_GOBJECT_PROPERTIES_DEFINE (NMActiveConnection,
                              PROP_CONNECTION,
                              PROP_ID,
                              PROP_UUID,
                              PROP_TYPE,
                              PROP_SPECIFIC_OBJECT_PATH,
                              PROP_DEVICES,
                              PROP_STATE,
                              PROP_STATE_FLAGS,
                              PROP_DEFAULT,
                              PROP_IP4_CONFIG,
                              PROP_DHCP4_CONFIG,
                              PROP_DEFAULT6,
                              PROP_IP6_CONFIG,
                              PROP_DHCP6_CONFIG,
                              PROP_VPN,
                              PROP_MASTER,
);
enum {
    STATE_CHANGED,
    LAST_SIGNAL
};
static guint signals[LAST_SIGNAL];
enum {
    PROPERTY_O_IDX_CONNECTION,
    PROPERTY_O_IDX_MASTER,
    PROPERTY_O_IDX_IP4_CONFIG,
    PROPERTY_O_IDX_IP6_CONFIG,
    PROPERTY_O_IDX_DHCP4_CONFIG,
    PROPERTY_O_IDX_DHCP6_CONFIG,
    _PROPERTY_O_IDX_NUM,
};
typedef struct _NMActiveConnectionPrivate {
    NMLDBusPropertyO property_o[_PROPERTY_O_IDX_NUM];
    NMLDBusPropertyAO devices;
    NMRefString *specific_object_path;
    char *id;
    char *uuid;
    char *type;
    guint32 state;
    guint32 state_flags;
    bool is_default;
    bool is_default6;
    bool is_vpn;
    guint32 reason;
} NMActiveConnectionPrivate;
G_DEFINE_TYPE (NMActiveConnection, nm_active_connection, NM_TYPE_OBJECT);
#define NM_ACTIVE_CONNECTION_GET_PRIVATE(self) _NM_GET_PRIVATE_PTR(self, NMActiveConnection, NM_IS_ACTIVE_CONNECTION, NMObject)
/*****************************************************************************/
/**
* nm_active_connection_get_connection:
* @connection: a #NMActiveConnection
*
* Gets the #NMRemoteConnection associated with @connection.
*
* Returns: (transfer none): the #NMRemoteConnection which this
* #NMActiveConnection is an active instance of.
**/
NMRemoteConnection *
nm_active_connection_get_connection (NMActiveConnection *connection)
{
    g_return_val_if_fail (NM_IS_ACTIVE_CONNECTION (connection), NULL);
return nml_dbus_property_o_get_obj (&NM_ACTIVE_CONNECTION_GET_PRIVATE (connection)->property_o[PROPERTY_O_IDX_CONNECTION]);
}
/**
* nm_active_connection_get_id:
* @connection: a #NMActiveConnection
*
* Gets the #NMConnection's ID.
*
* Returns: the ID of the #NMConnection that backs the #NMActiveConnection.
* This is the internal string used by the connection, and must not be modified.
**/
const char *
nm_active_connection_get_id (NMActiveConnection *connection)
{
g_return_val_if_fail (NM_IS_ACTIVE_CONNECTION (connection), NULL);
return _nml_coerce_property_str_not_empty (NM_ACTIVE_CONNECTION_GET_PRIVATE (connection)->id);
}
/**
* nm_active_connection_get_uuid:
* @connection: a #NMActiveConnection
*
* Gets the #NMConnection's UUID.
*
* Returns: the UUID of the #NMConnection that backs the #NMActiveConnection.
* This is the internal string used by the connection, and must not be modified.
**/
const char *
nm_active_connection_get_uuid (NMActiveConnection *connection)
{
g_return_val_if_fail (NM_IS_ACTIVE_CONNECTION (connection), NULL);
return _nml_coerce_property_str_not_empty (NM_ACTIVE_CONNECTION_GET_PRIVATE (connection)->uuid);
}
/**
* nm_active_connection_get_connection_type:
* @connection: a #NMActiveConnection
*
* Gets the #NMConnection's type.
*
* Returns: the type of the #NMConnection that backs the #NMActiveConnection.
* This is the internal string used by the connection, and must not be modified.
**/
const char *
nm_active_connection_get_connection_type (NMActiveConnection *connection)
{
g_return_val_if_fail (NM_IS_ACTIVE_CONNECTION (connection), NULL);
return _nml_coerce_property_str_not_empty (NM_ACTIVE_CONNECTION_GET_PRIVATE (connection)->type);
}
/**
* nm_active_connection_get_specific_object_path:
* @connection: a #NMActiveConnection
*
* Gets the path of the "specific object" used at activation.
*
* Currently there is no single method that will allow you to automatically turn
* this into an appropriate #NMObject; you need to know what kind of object it
 * is based on other information. (E.g., if @connection corresponds to a Wi-Fi
* connection, then the specific object will be an #NMAccessPoint, and you can
* resolve it with nm_device_wifi_get_access_point_by_path().)
*
* Returns: the specific object's D-Bus path. This is the internal string used
* by the connection, and must not be modified.
**/
const char *
nm_active_connection_get_specific_object_path (NMActiveConnection *connection)
{
g_return_val_if_fail (NM_IS_ACTIVE_CONNECTION (connection), NULL);
return _nml_coerce_property_object_path (NM_ACTIVE_CONNECTION_GET_PRIVATE (connection)->specific_object_path);
}
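/* Usage sketch (illustrative, not part of the original file): for a Wi-Fi
 * activation, the specific object path can be resolved against the matching
 * NMDeviceWifi, as the doc comment above suggests. "ac" and "wifi" are assumed
 * to be objects already obtained from the same NMClient; nm-device-wifi.h
 * would need to be included.
 *
 *   const char *path = nm_active_connection_get_specific_object_path (ac);
 *   NMAccessPoint *ap = path
 *                       ? nm_device_wifi_get_access_point_by_path (wifi, path)
 *                       : NULL;
 */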
/**
* nm_active_connection_get_devices:
* @connection: a #NMActiveConnection
*
 * Gets the #NMDevices used for the active connection.
*
* Returns: (element-type NMDevice): the #GPtrArray containing #NMDevices.
* This is the internal copy used by the connection, and must not be modified.
**/
const GPtrArray *
nm_active_connection_get_devices (NMActiveConnection *connection)
{
g_return_val_if_fail (NM_IS_ACTIVE_CONNECTION (connection), NULL);
return nml_dbus_property_ao_get_objs_as_ptrarray (&NM_ACTIVE_CONNECTION_GET_PRIVATE (connection)->devices);
}
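/* Usage sketch (illustrative, not part of the original file): iterating the
 * returned array. The array is owned by libnm and must not be modified;
 * "ac" is assumed to be a valid NMActiveConnection.
 *
 *   const GPtrArray *devices = nm_active_connection_get_devices (ac);
 *   guint i;
 *
 *   for (i = 0; devices && i < devices->len; i++) {
 *       NMDevice *device = g_ptr_array_index (devices, i);
 *
 *       g_print ("device: %s\n", nm_device_get_iface (device));
 *   }
 */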
/**
* nm_active_connection_get_state:
* @connection: a #NMActiveConnection
*
* Gets the active connection's state.
*
* Returns: the state
**/
NMActiveConnectionState
nm_active_connection_get_state (NMActiveConnection *connection)
{
g_return_val_if_fail (NM_IS_ACTIVE_CONNECTION (connection), NM_ACTIVE_CONNECTION_STATE_UNKNOWN);
return NM_ACTIVE_CONNECTION_GET_PRIVATE (connection)->state;
}
/**
* nm_active_connection_get_state_flags:
* @connection: a #NMActiveConnection
*
* Gets the active connection's state flags.
*
* Returns: the state flags
*
* Since: 1.10
**/
NMActivationStateFlags
nm_active_connection_get_state_flags (NMActiveConnection *connection)
{
g_return_val_if_fail (NM_IS_ACTIVE_CONNECTION (connection), NM_ACTIVATION_STATE_FLAG_NONE);
return NM_ACTIVE_CONNECTION_GET_PRIVATE (connection)->state_flags;
}
/**
* nm_active_connection_get_state_reason:
* @connection: a #NMActiveConnection
*
 * Gets the reason for the active connection's state.
*
* Returns: the reason
*
* Since: 1.8
**/
NMActiveConnectionStateReason
nm_active_connection_get_state_reason (NMActiveConnection *connection)
{
g_return_val_if_fail (NM_IS_ACTIVE_CONNECTION (connection), NM_ACTIVE_CONNECTION_STATE_REASON_UNKNOWN);
return NM_ACTIVE_CONNECTION_GET_PRIVATE (connection)->reason;
}
/**
* nm_active_connection_get_default:
* @connection: a #NMActiveConnection
*
* Whether the active connection is the default IPv4 one (that is, is used for
* the default IPv4 route and DNS information).
*
* Returns: %TRUE if the active connection is the default IPv4 connection
**/
gboolean
nm_active_connection_get_default (NMActiveConnection *connection)
{
g_return_val_if_fail (NM_IS_ACTIVE_CONNECTION (connection), FALSE);
return NM_ACTIVE_CONNECTION_GET_PRIVATE (connection)->is_default;
}
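/* Usage sketch (illustrative, not part of the original file): finding the
 * active connection that currently provides the IPv4 default route. "client"
 * is assumed to be an initialized NMClient.
 *
 *   const GPtrArray *acs = nm_client_get_active_connections (client);
 *   NMActiveConnection *default4 = NULL;
 *   guint i;
 *
 *   for (i = 0; acs && i < acs->len; i++) {
 *       if (nm_active_connection_get_default (g_ptr_array_index (acs, i))) {
 *           default4 = g_ptr_array_index (acs, i);
 *           break;
 *       }
 *   }
 */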
/**
* nm_active_connection_get_ip4_config:
* @connection: an #NMActiveConnection
*
* Gets the current IPv4 #NMIPConfig associated with the #NMActiveConnection.
*
* Returns: (transfer none): the IPv4 #NMIPConfig, or %NULL if the connection is
* not in the %NM_ACTIVE_CONNECTION_STATE_ACTIVATED state.
**/
NMIPConfig *
nm_active_connection_get_ip4_config (NMActiveConnection *connection)
{
g_return_val_if_fail (NM_IS_ACTIVE_CONNECTION (connection), NULL);
return nml_dbus_property_o_get_obj (&NM_ACTIVE_CONNECTION_GET_PRIVATE (connection)->property_o[PROPERTY_O_IDX_IP4_CONFIG]);
}
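/* Usage sketch (illustrative, not part of the original file): the IPv4 config
 * is only available once the activation reaches the ACTIVATED state. "ac" is
 * assumed to be a valid NMActiveConnection.
 *
 *   if (nm_active_connection_get_state (ac) == NM_ACTIVE_CONNECTION_STATE_ACTIVATED) {
 *       NMIPConfig *ip4 = nm_active_connection_get_ip4_config (ac);
 *
 *       if (ip4)
 *           g_print ("gateway: %s\n", nm_ip_config_get_gateway (ip4) ?: "none");
 *   }
 */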
/**
* nm_active_connection_get_dhcp4_config:
* @connection: an #NMActiveConnection
*
* Gets the current IPv4 #NMDhcpConfig (if any) associated with the
* #NMActiveConnection.
*
* Returns: (transfer none): the IPv4 #NMDhcpConfig, or %NULL if the connection
* does not use DHCP, or is not in the %NM_ACTIVE_CONNECTION_STATE_ACTIVATED
* state.
**/
NMDhcpConfig *
nm_active_connection_get_dhcp4_config (NMActiveConnection *connection)
{
g_return_val_if_fail (NM_IS_ACTIVE_CONNECTION (connection), NULL);
Involuntary context switches: 142 Swaps: 0 File system inputs: 0 File system outputs: 0 Socket messages sent: 0 Socket messages received: 0 Signals delivered: 0 Page size (bytes): 4096 Exit status: 0 [3] Command being timed: "nmcli" User time (seconds): 0.32 System time (seconds): 0.02 Percent of CPU this job got: 102% Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34 Average shared text size (kbytes): 0 Average unshared data size (kbytes): 0 Average stack size (kbytes): 0 Average total size (kbytes): 0 Maximum resident set size (kbytes): 29196 Average resident set size (kbytes): 0 Major (requiring I/O) page faults: 0 Minor (reclaiming a frame) page faults: 5059 Voluntary context switches: 919 Involuntary context switches: 685 Swaps: 0 File system inputs: 0 File system outputs: 0 Socket messages sent: 0 Socket messages received: 0 Signals delivered: 0 Page size (bytes): 4096 Exit status: 0 --- N6) same setup as N4), but run `nmcli monitor` and look at `ps aux` for the RSS size. USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND [1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor [2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor [3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
2019-10-30 11:42:58 +01:00
return nml_dbus_property_o_get_obj (&NM_ACTIVE_CONNECTION_GET_PRIVATE (connection)->property_o[PROPERTY_O_IDX_DHCP4_CONFIG]);
}
/**
* nm_active_connection_get_default6:
* @connection: a #NMActiveConnection
*
* Whether the active connection is the default IPv6 one (that is, is used for
* the default IPv6 route and DNS information).
*
* Returns: %TRUE if the active connection is the default IPv6 connection
**/
gboolean
nm_active_connection_get_default6 (NMActiveConnection *connection)
{
g_return_val_if_fail (NM_IS_ACTIVE_CONNECTION (connection), FALSE);
return NM_ACTIVE_CONNECTION_GET_PRIVATE (connection)->is_default6;
}
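/* Illustrative sketch (not part of libnm): a client might combine
 * nm_active_connection_get_default6() with the IPv6 getters below to check
 * whether this connection currently provides the machine's default IPv6
 * route. The "active" variable is assumed to be an NMActiveConnection
 * obtained elsewhere, e.g. from nm_client_get_active_connections().
 *
 *   if (nm_active_connection_get_default6 (active))
 *       g_print ("%s provides the default IPv6 route\n",
 *                nm_active_connection_get_id (active));
 */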
/**
* nm_active_connection_get_ip6_config:
* @connection: an #NMActiveConnection
*
* Gets the current IPv6 #NMIPConfig associated with the #NMActiveConnection.
*
* Returns: (transfer none): the IPv6 #NMIPConfig, or %NULL if the connection is
* not in the %NM_ACTIVE_CONNECTION_STATE_ACTIVATED state.
**/
NMIPConfig *
nm_active_connection_get_ip6_config (NMActiveConnection *connection)
{
g_return_val_if_fail (NM_IS_ACTIVE_CONNECTION (connection), NULL);
return nml_dbus_property_o_get_obj (&NM_ACTIVE_CONNECTION_GET_PRIVATE (connection)->property_o[PROPERTY_O_IDX_IP6_CONFIG]);
}
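/* Illustrative sketch (not part of libnm): once the connection has reached
 * the %NM_ACTIVE_CONNECTION_STATE_ACTIVATED state, the returned #NMIPConfig
 * can be inspected with the generic nm-ip-config accessors, for example to
 * list the assigned IPv6 addresses. "active" is assumed to be a valid
 * NMActiveConnection.
 *
 *   NMIPConfig *ip6 = nm_active_connection_get_ip6_config (active);
 *
 *   if (ip6) {
 *       GPtrArray *addrs = nm_ip_config_get_addresses (ip6);
 *       guint i;
 *
 *       for (i = 0; i < addrs->len; i++) {
 *           NMIPAddress *addr = g_ptr_array_index (addrs, i);
 *
 *           g_print ("IPv6 address: %s/%u\n",
 *                    nm_ip_address_get_address (addr),
 *                    nm_ip_address_get_prefix (addr));
 *       }
 *   }
 */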
/**
* nm_active_connection_get_dhcp6_config:
* @connection: an #NMActiveConnection
*
* Gets the current IPv6 #NMDhcpConfig (if any) associated with the
* #NMActiveConnection.
*
* Returns: (transfer none): the IPv6 #NMDhcpConfig, or %NULL if the connection
* does not use DHCPv6, or is not in the %NM_ACTIVE_CONNECTION_STATE_ACTIVATED
* state.
**/
NMDhcpConfig *
nm_active_connection_get_dhcp6_config (NMActiveConnection *connection)
{
g_return_val_if_fail (NM_IS_ACTIVE_CONNECTION (connection), NULL);
return nml_dbus_property_o_get_obj (&NM_ACTIVE_CONNECTION_GET_PRIVATE (connection)->property_o[PROPERTY_O_IDX_DHCP6_CONFIG]);
}
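/* Illustrative sketch (not part of libnm): when a DHCPv6 lease is present,
 * its options can be read through the #NMDhcpConfig accessors. Option names
 * depend on what the server handed out; "dhcp6_name_servers" is only an
 * example here.
 *
 *   NMDhcpConfig *dhcp6 = nm_active_connection_get_dhcp6_config (active);
 *
 *   if (dhcp6) {
 *       const char *dns = nm_dhcp_config_get_one_option (dhcp6,
 *                                                        "dhcp6_name_servers");
 *
 *       if (dns)
 *           g_print ("DHCPv6 name servers: %s\n", dns);
 *   }
 */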
/**
* nm_active_connection_get_vpn:
* @connection: a #NMActiveConnection
*
* Whether the active connection is a VPN connection.
*
* Returns: %TRUE if the active connection is a VPN connection
**/
gboolean
nm_active_connection_get_vpn (NMActiveConnection *connection)
{
g_return_val_if_fail (NM_IS_ACTIVE_CONNECTION (connection), FALSE);
return NM_ACTIVE_CONNECTION_GET_PRIVATE (connection)->is_vpn;
}
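/* Illustrative sketch (not part of libnm): when this returns %TRUE, the
 * active connection is normally exposed by NMClient as an #NMVpnConnection,
 * so the VPN specific accessors become available after a type check.
 *
 *   if (   nm_active_connection_get_vpn (active)
 *       && NM_IS_VPN_CONNECTION (active)) {
 *       const char *banner = nm_vpn_connection_get_banner (NM_VPN_CONNECTION (active));
 *
 *       if (banner)
 *           g_print ("VPN banner: %s\n", banner);
 *   }
 */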
/**
* nm_active_connection_get_master:
* @connection: a #NMActiveConnection
*
* Gets the master #NMDevice of the connection.
*
* Returns: (transfer none): the master #NMDevice of the #NMActiveConnection.
**/
NMDevice *
nm_active_connection_get_master (NMActiveConnection *connection)
{
g_return_val_if_fail (NM_IS_ACTIVE_CONNECTION (connection), NULL);
return nml_dbus_property_o_get_obj (&NM_ACTIVE_CONNECTION_GET_PRIVATE (connection)->property_o[PROPERTY_O_IDX_MASTER]);
}
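/* Illustrative sketch (not part of libnm): for a slave connection (for
 * example a bond port), the master device can be queried and identified by
 * its interface name.
 *
 *   NMDevice *master = nm_active_connection_get_master (active);
 *
 *   if (master)
 *       g_print ("master interface: %s\n", nm_device_get_iface (master));
 */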
/*****************************************************************************/
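/* Handler for the queued notify-event that carries a state change of an
 * active connection: it emits the "state-changed" signal using the values
 * cached on the instance. */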
static void
_notify_event_state_changed (NMClient *client,
NMClientNotifyEventWithPtr *notify_event)
{
gs_unref_object NMActiveConnection *self = notify_event->user_data;
NMActiveConnectionPrivate *priv = NM_ACTIVE_CONNECTION_GET_PRIVATE (self);
/* We emit the values cached in @priv. In practice, these are the same values
 * that were received with the signal. In the unexpected case where they
 * differ, the values cached on the current instance are still the more
 * correct ones to expose. */
g_signal_emit (self,
signals[STATE_CHANGED],
0,
(guint) priv->state,
(guint) priv->reason);
}
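
/* A minimal usage sketch of the signal emitted above, as hypothetical client
 * code (not part of this file; the callback and variable names are
 * illustrative only):
 *
 *   static void
 *   on_state_changed (NMActiveConnection *ac,
 *                     guint state,
 *                     guint reason,
 *                     gpointer user_data)
 *   {
 *       g_print ("active connection state=%u reason=%u\n", state, reason);
 *   }
 *
 *   g_signal_connect (ac, "state-changed",
 *                     G_CALLBACK (on_state_changed), NULL);
 */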
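/* Internal helper (leading underscore, not public API). Presumably invoked by
 * NMClient when it commits a cached D-Bus state change for this active
 * connection; the "state-changed" signal itself is then emitted via the
 * queued _notify_event_state_changed() handler above. */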
void
_nm_active_connection_state_changed_commit (NMActiveConnection *self,
guint32 state,
guint32 reason)
{
NMClient *client;
NMActiveConnectionPrivate *priv = NM_ACTIVE_CONNECTION_GET_PRIVATE (self);
client = _nm_object_get_client (self);
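/* Update the cached state. The "state" property notification is only
 * queued on the owning NMClient here; it is emitted later, together with
 * the other collected notifications, once the cache is consistent. */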
if (priv->state != state) {
priv->state = state;
_nm_client_queue_notify_object (client,
self,
obj_properties[PROP_STATE]);
}
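/* Record the reason associated with this state change. */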
priv->reason = reason;

/* Queue the state-changed notification so that it is emitted after the pending
 * property change notifications, once the cache update is committed. */
_nm_client_notify_event_queue_with_ptr (client,
                                        NM_CLIENT_NOTIFY_EVENT_PRIO_GPROP + 1,
                                        _notify_event_state_changed,
                                        g_object_ref (self));
}
libnm: hide NMActiveConnection until NMRemoteConnection is ready

Generally, libnm's NMClient cache only wants to expose NMObjects that are fully initialized. Most objects don't require anything special, except NMRemoteConnection, which waits for the GetSettings() call to complete.

NMObjects reference each other. For example, NMActiveConnection references NMDevice and NMRemoteConnection. There is a desire that an object is only ready if the objects it references are ready too. In practice that is not done, because usually every object references other objects, which would mean all objects stay non-ready as long as any of them is still initializing. That does not seem desirable. Instead, most objects (except NMRemoteConnection and now NMActiveConnection) are considered ready and visible once their first notification completes. If an object references another object that is not yet ready, that reference is just NULL (but the source object is visible already). This is also done to cope with cycles where objects reference each other. In practice, such cycles should not be exposed by NetworkManager; however, libnm should be robust against that.

This has the undesired effect that when you call AddAndActivate(), the NMActiveConnection might already be visible while its NMRemoteConnection isn't. That means ac.get_connection() would initially return NULL until the remote connection becomes ready.

Therefore, add special handling so that NMActiveConnection waits for its NMRemoteConnection to be ready before being ready itself.

Fixes: ce0e898fb476 ('libnm: refactor caching of D-Bus objects in NMClient')
2020-01-31 00:54:46 +01:00
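To make the effect of this change concrete, here is a minimal client-side sketch (not part of this file; it assumes an already-initialized NMClient, an NMDevice and a partial NMConnection, and uses only public libnm API). It illustrates the guarantee described above: by the time nm_client_add_and_activate_connection_async() completes, the returned NMActiveConnection is expected to already reference an initialized NMRemoteConnection.

```
#include <NetworkManager.h>

/* Callback passed to nm_client_add_and_activate_connection_async (client,
 * partial_connection, device, NULL, NULL, add_activate_cb, NULL). */
static void
add_activate_cb (GObject *source, GAsyncResult *result, gpointer user_data)
{
	NMClient *client = NM_CLIENT (source);
	NMActiveConnection *ac;
	NMRemoteConnection *remote;
	GError *error = NULL;

	ac = nm_client_add_and_activate_connection_finish (client, result, &error);
	if (!ac) {
		g_printerr ("add-and-activate failed: %s\n", error->message);
		g_clear_error (&error);
		return;
	}

	/* With this change, the NMActiveConnection only becomes visible once its
	 * NMRemoteConnection is initialized, so this is expected to be non-NULL. */
	remote = nm_active_connection_get_connection (ac);
	g_print ("activated \"%s\"\n",
	         remote ? nm_connection_get_id (NM_CONNECTION (remote)) : "(unknown)");

	g_object_unref (ac);
}
```

nm_client_add_and_activate_connection_finish() returns a full reference, hence the g_object_unref(); the returned NMRemoteConnection is owned by the cache and is not unreffed here.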
/*****************************************************************************/
static gboolean
is_ready (NMObject *nmobj)
{
NMActiveConnectionPrivate *priv = NM_ACTIVE_CONNECTION_GET_PRIVATE (nmobj);
/* Usually, we don't want to expose our NMObject instances until they are fully initialized.
* For NMRemoteConnection this means waiting until GetSettings() returns.
*
* Note that most object types reference each other (directly or indirectly). E.g. the
* NMActiveConnection refers to the NMRemoteConnection and the NMDevice instance. So,
* we don't want to hide them for too long, otherwise basically the entire set of objects
* would be hidden until they are all initialized. Hence, usually, when an NMObject references
* objects that are not yet initialized, that reference is just NULL but the object
* is considered ready already.
*
* For an NMActiveConnection referencing an NMRemoteConnection we don't do that. Here we wait
* for the NMRemoteConnection to be ready as well. This is somewhat arbitrary special casing, but
* the effect is that when nm_client_add_and_activate*() returns, the NMActiveConnection already
* references an initialized NMRemoteConnection.
*/
if (!nml_dbus_property_o_is_ready_fully (&priv->property_o[PROPERTY_O_IDX_CONNECTION]))
return FALSE;
return NM_OBJECT_CLASS (nm_active_connection_parent_class)->is_ready (nmobj);
}
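The general rule in the comment above, that a visible object may still hold NULL references to objects that are not yet initialized, matters for client code using references that are not special-cased like the connection. Below is a hedged client-side sketch (not part of this file; it uses only public libnm/GObject API, and the "master" reference plus the handler names are chosen purely for illustration) of coping with a reference that may still be NULL.

```
#include <NetworkManager.h>

static void
on_master_changed (GObject *object, GParamSpec *pspec, gpointer user_data)
{
	NMActiveConnection *ac = NM_ACTIVE_CONNECTION (object);
	NMDevice *master = nm_active_connection_get_master (ac);

	if (master)
		g_print ("master device is now %s\n", nm_device_get_iface (master));
}

static void
watch_master (NMActiveConnection *ac)
{
	/* The reference may be NULL while the referenced NMDevice is still
	 * initializing; watch the property notification instead of assuming
	 * it is already set. */
	if (!nm_active_connection_get_master (ac))
		g_signal_connect (ac, "notify::master", G_CALLBACK (on_master_changed), NULL);
	else
		on_master_changed (G_OBJECT (ac), NULL, NULL);
}
```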
/*****************************************************************************/
static void
nm_active_connection_init (NMActiveConnection *self)
{
NMActiveConnectionPrivate *priv;
priv = G_TYPE_INSTANCE_GET_PRIVATE (self, NM_TYPE_ACTIVE_CONNECTION, NMActiveConnectionPrivate);
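/* cache the private-data pointer on the instance for direct access later. */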
self->_priv = priv;
}
static void
finalize (GObject *object)
{
NMActiveConnectionPrivate *priv = NM_ACTIVE_CONNECTION_GET_PRIVATE (object);
g_free (priv->id);
g_free (priv->uuid);
g_free (priv->type);
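/* the specific object path is a ref-counted NMRefString; drop our reference instead of freeing it. */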
nm_ref_string_unref (priv->specific_object_path);
G_OBJECT_CLASS (nm_active_connection_parent_class)->finalize (object);
}
static void
get_property (GObject *object,
guint prop_id,
GValue *value,
GParamSpec *pspec)
{
NMActiveConnection *self = NM_ACTIVE_CONNECTION (object);
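/* each readable property simply forwards to the corresponding public accessor. */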
switch (prop_id) {
case PROP_CONNECTION:
g_value_set_object (value, nm_active_connection_get_connection (self));
break;
case PROP_ID:
g_value_set_string (value, nm_active_connection_get_id (self));
break;
case PROP_UUID:
g_value_set_string (value, nm_active_connection_get_uuid (self));
break;
case PROP_TYPE:
g_value_set_string (value, nm_active_connection_get_connection_type (self));
break;
case PROP_SPECIFIC_OBJECT_PATH:
g_value_set_string (value, nm_active_connection_get_specific_object_path (self));
break;
case PROP_DEVICES:
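/* hand a copy of the devices array to the GValue, which takes ownership of it. */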
g_value_take_boxed (value, _nm_utils_copy_object_array (nm_active_connection_get_devices (self)));
break;
case PROP_STATE:
g_value_set_enum (value, nm_active_connection_get_state (self));
break;
case PROP_STATE_FLAGS:
g_value_set_uint (value, nm_active_connection_get_state_flags (self));
break;
case PROP_DEFAULT:
g_value_set_boolean (value, nm_active_connection_get_default (self));
break;
case PROP_IP4_CONFIG:
g_value_set_object (value, nm_active_connection_get_ip4_config (self));
break;
case PROP_DHCP4_CONFIG:
g_value_set_object (value, nm_active_connection_get_dhcp4_config (self));
break;
case PROP_DEFAULT6:
g_value_set_boolean (value, nm_active_connection_get_default6 (self));
break;
case PROP_IP6_CONFIG:
g_value_set_object (value, nm_active_connection_get_ip6_config (self));
break;
case PROP_DHCP6_CONFIG:
g_value_set_object (value, nm_active_connection_get_dhcp6_config (self));
break;
case PROP_VPN:
g_value_set_boolean (value, nm_active_connection_get_vpn (self));
break;
case PROP_MASTER:
g_value_set_object (value, nm_active_connection_get_master (self));
break;
default:
G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec);
break;
}
}
Involuntary context switches: 142 Swaps: 0 File system inputs: 0 File system outputs: 0 Socket messages sent: 0 Socket messages received: 0 Signals delivered: 0 Page size (bytes): 4096 Exit status: 0 [3] Command being timed: "nmcli" User time (seconds): 0.32 System time (seconds): 0.02 Percent of CPU this job got: 102% Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34 Average shared text size (kbytes): 0 Average unshared data size (kbytes): 0 Average stack size (kbytes): 0 Average total size (kbytes): 0 Maximum resident set size (kbytes): 29196 Average resident set size (kbytes): 0 Major (requiring I/O) page faults: 0 Minor (reclaiming a frame) page faults: 5059 Voluntary context switches: 919 Involuntary context switches: 685 Swaps: 0 File system inputs: 0 File system outputs: 0 Socket messages sent: 0 Socket messages received: 0 Signals delivered: 0 Page size (bytes): 4096 Exit status: 0 --- N6) same setup as N4), but run `nmcli monitor` and look at `ps aux` for the RSS size. USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND [1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor [2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor [3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
2019-10-30 11:42:58 +01:00
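/* D-Bus property metadata for the org.freedesktop.NetworkManager.Connection.Active
 * interface: each entry maps a D-Bus property to its GObject property and to the
 * NMActiveConnectionPrivate field that caches the demarshalled value. */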
const NMLDBusMetaIface _nml_dbus_meta_iface_nm_connection_active = NML_DBUS_META_IFACE_INIT_PROP (
NM_DBUS_INTERFACE_ACTIVE_CONNECTION,
nm_active_connection_get_type,
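/* Low instantiate priority: presumably so that an object which also exposes
 * the VPN interface gets instantiated as NMVpnConnection instead. */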
NML_DBUS_META_INTERFACE_PRIO_INSTANTIATE_LOW,
NML_DBUS_META_IFACE_DBUS_PROPERTIES (
NML_DBUS_META_PROPERTY_INIT_O_PROP ("Connection", PROP_CONNECTION, NMActiveConnectionPrivate, property_o[PROPERTY_O_IDX_CONNECTION], nm_remote_connection_get_type ),
NML_DBUS_META_PROPERTY_INIT_B ("Default", PROP_DEFAULT, NMActiveConnectionPrivate, is_default ),
NML_DBUS_META_PROPERTY_INIT_B ("Default6", PROP_DEFAULT6, NMActiveConnectionPrivate, is_default6 ),
NML_DBUS_META_PROPERTY_INIT_AO_PROP ("Devices", PROP_DEVICES, NMActiveConnectionPrivate, devices, nm_device_get_type ),
NML_DBUS_META_PROPERTY_INIT_O_PROP ("Dhcp4Config", PROP_DHCP4_CONFIG, NMActiveConnectionPrivate, property_o[PROPERTY_O_IDX_DHCP4_CONFIG], nm_dhcp4_config_get_type ),
NML_DBUS_META_PROPERTY_INIT_O_PROP ("Dhcp6Config", PROP_DHCP6_CONFIG, NMActiveConnectionPrivate, property_o[PROPERTY_O_IDX_DHCP6_CONFIG], nm_dhcp6_config_get_type ),
NML_DBUS_META_PROPERTY_INIT_S ("Id", PROP_ID, NMActiveConnectionPrivate, id ),
NML_DBUS_META_PROPERTY_INIT_O_PROP ("Ip4Config", PROP_IP4_CONFIG, NMActiveConnectionPrivate, property_o[PROPERTY_O_IDX_IP4_CONFIG], nm_ip4_config_get_type ),
NML_DBUS_META_PROPERTY_INIT_O_PROP ("Ip6Config", PROP_IP6_CONFIG, NMActiveConnectionPrivate, property_o[PROPERTY_O_IDX_IP6_CONFIG], nm_ip6_config_get_type ),
NML_DBUS_META_PROPERTY_INIT_O_PROP ("Master", PROP_MASTER, NMActiveConnectionPrivate, property_o[PROPERTY_O_IDX_MASTER], nm_device_get_type ),
NML_DBUS_META_PROPERTY_INIT_O ("SpecificObject", PROP_SPECIFIC_OBJECT_PATH, NMActiveConnectionPrivate, specific_object_path ),
NML_DBUS_META_PROPERTY_INIT_U ("State", PROP_STATE, NMActiveConnectionPrivate, state ),
NML_DBUS_META_PROPERTY_INIT_U ("StateFlags", PROP_STATE_FLAGS, NMActiveConnectionPrivate, state_flags ),
NML_DBUS_META_PROPERTY_INIT_S ("Type", PROP_TYPE, NMActiveConnectionPrivate, type ),
NML_DBUS_META_PROPERTY_INIT_S ("Uuid", PROP_UUID, NMActiveConnectionPrivate, uuid ),
NML_DBUS_META_PROPERTY_INIT_B ("Vpn", PROP_VPN, NMActiveConnectionPrivate, is_vpn ),
),
.base_struct_offset = G_STRUCT_OFFSET (NMActiveConnection, _priv),
);
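
/* A minimal usage sketch (assuming only the public libnm API from
 * NetworkManager.h; not compiled as part of this file): how a client reads
 * back, through the public getters, the values that the meta-iface table
 * above demarshals from D-Bus. */
#if 0
#include <NetworkManager.h>

int
main (void)
{
    NMClient *client;
    const GPtrArray *acons;
    GError *error = NULL;
    guint i;

    /* Synchronously create and initialize the client cache. */
    client = nm_client_new (NULL, &error);
    if (!client) {
        g_printerr ("failed to create NMClient: %s\n", error->message);
        g_error_free (error);
        return 1;
    }

    acons = nm_client_get_active_connections (client);
    for (i = 0; i < acons->len; i++) {
        NMActiveConnection *ac = g_ptr_array_index (acons, i);

        /* Backed by the "Id", "Uuid", "Type" and "State" D-Bus properties. */
        g_print ("%s (%s): type=%s state=%u\n",
                 nm_active_connection_get_id (ac),
                 nm_active_connection_get_uuid (ac),
                 nm_active_connection_get_connection_type (ac),
                 (guint) nm_active_connection_get_state (ac));
    }

    g_object_unref (client);
    return 0;
}
#endif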
static void
nm_active_connection_class_init (NMActiveConnectionClass *klass)
{
GObjectClass *object_class = G_OBJECT_CLASS (klass);
NMObjectClass *nm_object_class = NM_OBJECT_CLASS (klass);
g_type_class_add_private (klass, sizeof (NMActiveConnectionPrivate));
object_class->get_property = get_property;
object_class->finalize = finalize;
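/* An NMActiveConnection is only considered ready once the NMRemoteConnection
 * it references is ready too; until then the object stays hidden from the
 * NMClient cache. */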
nm_object_class->is_ready = is_ready;
_NM_OBJECT_CLASS_INIT_PRIV_PTR_INDIRECT (nm_object_class, NMActiveConnection);
_NM_OBJECT_CLASS_INIT_PROPERTY_O_FIELDS_N (nm_object_class, NMActiveConnectionPrivate, property_o);
_NM_OBJECT_CLASS_INIT_PROPERTY_AO_FIELDS_1 (nm_object_class, NMActiveConnectionPrivate, devices);
/**
* NMActiveConnection:connection:
*
* The connection that this is an active instance of.
**/
obj_properties[PROP_CONNECTION] =
g_param_spec_object (NM_ACTIVE_CONNECTION_CONNECTION, "", "",
NM_TYPE_REMOTE_CONNECTION,
G_PARAM_READABLE |
G_PARAM_STATIC_STRINGS);
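/* A minimal usage sketch for the property above: a client holding an
 * NMActiveConnection (here assumed in a variable "ac") can map it back to
 * its settings profile; NMRemoteConnection implements NMConnection:
 *
 *   NMRemoteConnection *rc = nm_active_connection_get_connection (ac);
 *
 *   if (rc)
 *       g_print ("profile: %s\n", nm_connection_get_id (NM_CONNECTION (rc)));
 */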
/**
* NMActiveConnection:id:
*
* The active connection's ID
**/
obj_properties[PROP_ID] =
g_param_spec_string (NM_ACTIVE_CONNECTION_ID, "", "",
NULL,
G_PARAM_READABLE |
G_PARAM_STATIC_STRINGS);
/**
* NMActiveConnection:uuid:
*
* The active connection's UUID
**/
obj_properties[PROP_UUID] =
g_param_spec_string (NM_ACTIVE_CONNECTION_UUID, "", "",
NULL,
G_PARAM_READABLE |
G_PARAM_STATIC_STRINGS);
/**
* NMActiveConnection:type:
*
* The active connection's type
**/
obj_properties[PROP_TYPE] =
g_param_spec_string (NM_ACTIVE_CONNECTION_TYPE, "", "",
NULL,
G_PARAM_READABLE |
G_PARAM_STATIC_STRINGS);
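/* A minimal usage sketch for the id, uuid and type properties above, read
 * through their accessors (an NMActiveConnection "ac" is assumed):
 *
 *   g_print ("%s (%s): %s\n",
 *            nm_active_connection_get_id (ac),
 *            nm_active_connection_get_uuid (ac),
 *            nm_active_connection_get_connection_type (ac));
 */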
/**
* NMActiveConnection:specific-object-path:
*
* The path to the "specific object" of the active connection; see
* nm_active_connection_get_specific_object_path() for more details.
**/
obj_properties[PROP_SPECIFIC_OBJECT_PATH] =
g_param_spec_string (NM_ACTIVE_CONNECTION_SPECIFIC_OBJECT_PATH, "", "",
NULL,
G_PARAM_READABLE |
G_PARAM_STATIC_STRINGS);
/**
* NMActiveConnection:devices: (type GPtrArray(NMDevice))
*
* The devices of the active connection.
**/
obj_properties[PROP_DEVICES] =
g_param_spec_boxed (NM_ACTIVE_CONNECTION_DEVICES, "", "",
G_TYPE_PTR_ARRAY,
G_PARAM_READABLE |
G_PARAM_STATIC_STRINGS);
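/* A minimal usage sketch for the devices property above: the boxed GPtrArray
 * contains NMDevice instances and remains owned by the object (an
 * NMActiveConnection "ac" is assumed):
 *
 *   const GPtrArray *devices = nm_active_connection_get_devices (ac);
 *   guint i;
 *
 *   for (i = 0; i < devices->len; i++)
 *       g_print ("device: %s\n", nm_device_get_iface (g_ptr_array_index (devices, i)));
 */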
/**
* NMActiveConnection:state:
*
* The state of the active connection.
**/
obj_properties[PROP_STATE] =
g_param_spec_enum (NM_ACTIVE_CONNECTION_STATE, "", "",
NM_TYPE_ACTIVE_CONNECTION_STATE,
NM_ACTIVE_CONNECTION_STATE_UNKNOWN,
G_PARAM_READABLE |
G_PARAM_STATIC_STRINGS);
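/* A minimal usage sketch for the state property above (an NMActiveConnection
 * "ac" is assumed):
 *
 *   if (nm_active_connection_get_state (ac) == NM_ACTIVE_CONNECTION_STATE_ACTIVATED)
 *       g_print ("'%s' is fully activated\n", nm_active_connection_get_id (ac));
 */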
/**
* NMActiveConnection:state-flags:
*
* The state flags of the active connection.
*
* Since: 1.10
**/
obj_properties[PROP_STATE_FLAGS] =
g_param_spec_uint (NM_ACTIVE_CONNECTION_STATE_FLAGS, "", "",
0, G_MAXUINT32,
NM_ACTIVATION_STATE_FLAG_NONE,
G_PARAM_READABLE |
G_PARAM_STATIC_STRINGS);
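/* A minimal usage sketch for the state-flags property above: the value is a
 * bitfield of NMActivationStateFlags, so individual flags are tested with a
 * mask (an NMActiveConnection "ac" is assumed):
 *
 *   NMActivationStateFlags flags = nm_active_connection_get_state_flags (ac);
 *
 *   if (flags & NM_ACTIVATION_STATE_FLAG_IP4_READY)
 *       g_print ("IPv4 configuration is ready\n");
 */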
/**
* NMActiveConnection:default:
*
* Whether the active connection is the default IPv4 one.
**/
obj_properties[PROP_DEFAULT] =
g_param_spec_boolean (NM_ACTIVE_CONNECTION_DEFAULT, "", "",
FALSE,
G_PARAM_READABLE |
G_PARAM_STATIC_STRINGS);
/**
* NMActiveConnection:ip4-config:
*
* The IPv4 #NMIPConfig of the connection.
**/
obj_properties[PROP_IP4_CONFIG] =
g_param_spec_object (NM_ACTIVE_CONNECTION_IP4_CONFIG, "", "",
NM_TYPE_IP_CONFIG,
G_PARAM_READABLE |
G_PARAM_STATIC_STRINGS);
/**
* NMActiveConnection:dhcp4-config:
*
* The IPv4 #NMDhcpConfig of the connection.
**/
obj_properties[PROP_DHCP4_CONFIG] =
g_param_spec_object (NM_ACTIVE_CONNECTION_DHCP4_CONFIG, "", "",
NM_TYPE_DHCP_CONFIG,
G_PARAM_READABLE |
G_PARAM_STATIC_STRINGS);
/**
* NMActiveConnection:default6:
*
* Whether the active connection is the default IPv6 one.
**/
obj_properties[PROP_DEFAULT6] =
g_param_spec_boolean (NM_ACTIVE_CONNECTION_DEFAULT6, "", "",
FALSE,
G_PARAM_READABLE |
G_PARAM_STATIC_STRINGS);
/**
* NMActiveConnection:ip6-config:
*
* The IPv6 #NMIPConfig of the connection.
**/
obj_properties[PROP_IP6_CONFIG] =
g_param_spec_object (NM_ACTIVE_CONNECTION_IP6_CONFIG, "", "",
NM_TYPE_IP_CONFIG,
G_PARAM_READABLE |
G_PARAM_STATIC_STRINGS);
/**
* NMActiveConnection:dhcp6-config:
*
* The IPv6 #NMDhcpConfig of the connection.
**/
obj_properties[PROP_DHCP6_CONFIG] =
g_param_spec_object (NM_ACTIVE_CONNECTION_DHCP6_CONFIG, "", "",
NM_TYPE_DHCP_CONFIG,
G_PARAM_READABLE |
G_PARAM_STATIC_STRINGS);
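/* A minimal usage sketch for the per-address-family properties above
 * (default, ip4-config, dhcp4-config and their IPv6 counterparts): the
 * config objects may be NULL while activation is still in progress (an
 * NMActiveConnection "ac" is assumed; the IPv6 accessors are analogous):
 *
 *   NMIPConfig *ip4 = nm_active_connection_get_ip4_config (ac);
 *
 *   if (nm_active_connection_get_default (ac) && ip4 && nm_ip_config_get_gateway (ip4))
 *       g_print ("default IPv4 route via %s\n", nm_ip_config_get_gateway (ip4));
 */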
/**
* NMActiveConnection:vpn:
*
* Whether the active connection is a VPN connection.
**/
obj_properties[PROP_VPN] =
g_param_spec_boolean (NM_ACTIVE_CONNECTION_VPN, "", "",
FALSE,
G_PARAM_READABLE |
G_PARAM_STATIC_STRINGS);
/**
* NMActiveConnection:master:
*
* The master device if one exists.
**/
obj_properties[PROP_MASTER] =
g_param_spec_object (NM_ACTIVE_CONNECTION_MASTER, "", "",
NM_TYPE_DEVICE,
G_PARAM_READABLE |
G_PARAM_STATIC_STRINGS);
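/* A minimal usage sketch for the vpn and master properties above (an
 * NMActiveConnection "ac" is assumed):
 *
 *   if (nm_active_connection_get_vpn (ac))
 *       g_print ("'%s' is a VPN\n", nm_active_connection_get_id (ac));
 *
 *   if (nm_active_connection_get_master (ac))
 *       g_print ("master device: %s\n",
 *                nm_device_get_iface (nm_active_connection_get_master (ac)));
 */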
_nml_dbus_meta_class_init_with_properties (object_class, &_nml_dbus_meta_iface_nm_connection_active);
/* TODO: the state reason should also be exposed as a property in libnm's NMActiveConnection,
 * as is done for NMDevice's state reason. */
/* TODO: the D-Bus API should also expose the state reason as a property instead of only via
 * the "StateChanged" signal, as is done for Device's "StateReason". */
/**
* NMActiveConnection::state-changed:
* @active_connection: the source #NMActiveConnection
* @state: the new state number (#NMActiveConnectionState)
* @reason: the state change reason (#NMActiveConnectionStateReason)
*/
signals[STATE_CHANGED] =
g_signal_new ("state-changed",
G_OBJECT_CLASS_TYPE (object_class),
G_SIGNAL_RUN_FIRST,
0, NULL, NULL, NULL,
G_TYPE_NONE, 2,
G_TYPE_UINT, G_TYPE_UINT);
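/* A minimal usage sketch for the signal registered above: the handler
 * receives the two guint arguments, which correspond to the enums named in
 * the documentation (an NMActiveConnection "ac" and a hypothetical callback
 * name "on_ac_state_changed" are assumed):
 *
 *   static void
 *   on_ac_state_changed (NMActiveConnection *ac,
 *                        guint state,
 *                        guint reason,
 *                        gpointer user_data)
 *   {
 *       g_print ("state %u (reason %u)\n", state, reason);
 *   }
 *
 *   g_signal_connect (ac, "state-changed", G_CALLBACK (on_ac_state_changed), NULL);
 */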
}