// SPDX-License-Identifier: LGPL-2.1+
/*
* Copyright (C) 2017, 2018 Red Hat, Inc.
*/
#ifndef __NM_LIBNM_UTILS_H__
#define __NM_LIBNM_UTILS_H__
libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectManagerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data, along the lines sketched below.
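For orientation, receiving ObjectManager signals on a plain GDBusConnection
boils down to a subscription like the following (a minimal sketch, not the
actual NMClient code; the helper name is hypothetical and the ObjectManager
object path "/org/freedesktop" is stated here as an assumption):
```c
/* Requires <gio/gio.h>. Subscribe to all ObjectManager signals
 * (InterfacesAdded/InterfacesRemoved) of the NetworkManager service,
 * instead of letting GDBusObjectManagerClient do it. */
static guint
_subscribe_object_manager(GDBusConnection    *dbus_connection,
                          GDBusSignalCallback callback,
                          gpointer            user_data)
{
    return g_dbus_connection_signal_subscribe(dbus_connection,
                                              "org.freedesktop.NetworkManager",
                                              "org.freedesktop.DBus.ObjectManager",
                                              NULL /* all members */,
                                              "/org/freedesktop",
                                              NULL /* arg0 */,
                                              G_DBUS_SIGNAL_FLAGS_NONE,
                                              callback,
                                              user_data,
                                              NULL);
}
```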
CHANGES
-------
- This is a complete rework. I think the previous implementation was
  difficult to understand. There were unfixed bugs and nobody understood
  the code well enough to fix them. Maybe somebody out there understood the
  code, but I certainly did not; at least nobody provided patches to fix those
  issues. I do believe that this implementation is more straightforward and
  easier to understand, as it removes a lot of layers of code. Whether this
  claim of simplicity is true, each reader must decide for themselves. Note
  that it is still fairly complex.
- There was a lingering performance issue with a large number of D-Bus
  objects. The patch tries hard to make the implementation scale well. Of
  course, when we cache N objects that have N-to-M references to each other,
  we are still fundamentally O(N*M) in runtime and memory consumption (with
  M being the number of references between objects). But each part should
  behave efficiently.
- Play well with GMainContext. libnm code (NMClient) is generally not
  thread safe. However, it should work to use multiple instances in
  parallel, as long as each access to an NMClient happens through the caller's
  GMainContext. This follows glib's style and effectively allows using NMClient
  in a multi-threaded scenario. It implies sticking to the main context
  chosen upon construction and ensuring that callbacks are only invoked when
  iterating that context. Also, NMClient itself shall never iterate the
  caller's context. This also means libnm must never use g_idle_add() or
  g_timeout_add(), as those enqueue sources in the g_main_context_default()
  context; see the sketch after this item.
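A minimal sketch of that rule (the helper name is hypothetical; the GLib
calls are the standard replacement for g_idle_add()):
```c
/* Requires <glib.h>. Enqueue an idle callback on an explicit context,
 * never on g_main_context_default() as g_idle_add() would. */
static void
_schedule_idle(GMainContext *context, GSourceFunc func, gpointer user_data)
{
    GSource *source;

    source = g_idle_source_new();
    g_source_set_callback(source, func, user_data, NULL);
    g_source_attach(source, context);
    g_source_unref(source);
}
```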
- Get the ordering of messages right. All events are consistently enqueued
  in a GMainContext and processed strictly in order. For example,
  previously "nm-object.c" tried to combine signals and emit them from an
  idle handler. That is wrong: signals must be emitted in the right order
  and when they happen. Note that when using GInitable's synchronous
  initialization to initialize the NMClient instance, NMClient internally
  still operates fully asynchronously; in that case NMClient has an internal
  main context (sketched below).
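Schematically, the synchronous path looks like this (a sketch under the
assumption of a simple completion flag, not the actual implementation):
```c
/* Requires <glib.h>. Drive async initialization to completion on a
 * private context; the caller's context is never iterated. */
gboolean      done     = FALSE;
GMainContext *internal = g_main_context_new();

g_main_context_push_thread_default(internal);
/* ... start the async D-Bus initialization here; its completion
 * callback, dispatched on @internal, sets done = TRUE ... */
while (!done)
    g_main_context_iteration(internal, TRUE);
g_main_context_pop_thread_default(internal);
g_main_context_unref(internal);
```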
- NMClient takes over most of the functionality. When using D-Bus'
  ObjectManager interface, one needs to handle basically the entire state
  of the D-Bus interface. That cannot be separated well into distinct
  parts, and even if you try, you just end up with closely related code
  in different source files. Spreading related code around does not make it
  easier to understand; on the contrary. That means NMClient is
  inherently complex, as it contains most of the logic. I think that is
  not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps,
  as sketched below. First NMClient unpacks the message
  (e.g. _dbus_handle_properties_changed()) and keeps track of the changed
  data. Then we update the GObject instances (_dbus_handle_obj_changed_dbus())
  without emitting any signals yet. Finally, we emit all signals and
  notifications that were collected (_dbus_handle_changes_commit()). Note
  that, for example, during the initial GetManagedObjects() reply, NMClient
  receives a large amount of state at once. But we first apply all the
  changes to our GObject instances before emitting any signals. The result
  is that signals are always emitted at a moment when the cache is
  consistent. The unavoidable downside is that by the time you receive a
  property-changed signal, many other properties may have changed already
  and more signals are about to be emitted.
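Condensed to its skeleton (the argument lists are simplified assumptions;
the real functions take more parameters):
```c
/* Phase 1: unpack the D-Bus message and record what changed. */
_dbus_handle_properties_changed(self);

/* Phase 2: apply the recorded changes to the GObject instances,
 * emitting nothing yet. */
_dbus_handle_obj_changed_dbus(self);

/* Phase 3: with the cache now consistent, emit all collected
 * signals and notifications, in order. */
_dbus_handle_changes_commit(self);
```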
- NMDeviceWifi no longer modifies the content of the cache from the client
  side during poke_wireless_devices_with_rf_status(). The content of the cache
  should be determined by D-Bus alone and follow what the NetworkManager
  service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
  various subtle changes in behavior. Those should all be for the better, but
  the goal is not to break any existing clients. It does change internal
  (albeit externally visible) API, like dropping the NM_OBJECT_DBUS_OBJECT_MANAGER
  property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
  need more glue code than before. Previously, we constructed NMPropertiesInfo
  and had a large amount of code to propagate properties from NMDBus* to
  NMObject. That got completely reworked, but did not fundamentally change:
  you still need about the same effort to create the NMLDBusMetaIface. Not
  using generated bindings did not make anything worse (which says something
  about the usefulness of generated code, at least in the way it was used).
- NMLDBusMetaIface and other metadata are static and immutable. This
  avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
  have compile-time checks to ensure the property types match. It's pretty hard
  to misuse them, because misuse won't compile (illustrated below).
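For example, a field-type check can be folded into an offset computation
like this (an illustration of the technique only, not the actual
NML_DBUS_META_PROPERTY_INIT_U() definition):
```c
/* Evaluates to the field's offset, but only compiles if @field really
 * is a guint32: subtracting pointers of incompatible types is a compile
 * error, and the sizeof() operand is never evaluated at runtime. */
#define _PROPERTY_OFFSET_U(structure, field) \
    (G_STRUCT_OFFSET(structure, field) \
     + 0 * sizeof(&(((structure *) NULL)->field) - (guint32 *) NULL))
```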
- The metadata now explicitly encodes the expected D-Bus types and
  makes sure never to accept wrong data. That only matters when the
  server (accidentally or intentionally) exposes unexpected types on
  D-Bus. I don't think that was previously ensured in all cases.
  For example, demarshal_generic() only cared about the GObject property
  type; it didn't know the expected D-Bus type (see the sketch below).
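In its simplest form, such a check looks like this (a sketch; the variable
names are assumptions):
```c
/* Reject a property value whose wire type does not match the expected
 * D-Bus type, instead of trusting whatever the server sent. */
if (!g_variant_is_of_type(value, G_VARIANT_TYPE_UINT32)) {
    /* log and drop the bogus value */
    return;
}
priv->u32_property = g_variant_get_uint32(value);
```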
- Previously, GDBusObjectManager would sometimes emit warnings (g_log()). Those
  probably indicated real bugs. In any case, they prevented us from running CI
  with G_DEBUG=fatal-warnings, because there would be just too many
  unrelated crashes. Now we log debug messages that can be enabled with
  "LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
  into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error
  (see the sketch below). Together with G_DEBUG=fatal-warnings, this turns
  them into assertions. Note that such "assertion failures" might also happen
  because of a server bug (or change). Thus these are not common assertions
  that indicate a bug in libnm, and they are not armed unless explicitly
  requested. In our CI we should now always run with
  LIBNM_CLIENT_DEBUG=warning,error and G_DEBUG=fatal-warnings to catch bugs.
  Note that currently NetworkManager has bugs in this regard, so enabling
  this will result in assertion failures. Those should be fixed first.
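Wiring up such an environment switch can be as simple as the following
(a hypothetical sketch, not the actual libnm logging code):
```c
/* Requires <glib.h>. */
#include <string.h>

/* Returns TRUE if LIBNM_CLIENT_DEBUG mentions @token,
 * e.g. "trace" or "warning". */
static gboolean
_debug_has_token(const char *token)
{
    const char *env = g_getenv("LIBNM_CLIENT_DEBUG");

    return env && strstr(env, token);
}
```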
- Note that this changes the order in which we emit "notify:devices" and
  "device-added" signals. I think it makes the most sense to emit
  "device-removed" first, then "notify:devices", and finally "device-added"
  (see below).
  This changes behavior relative to commit 52ae28f6e5bf ('libnm: queue
  added/removed signals and suppress uninitialized notifications'),
  but I don't think users should actually rely on the order. Still,
  the new order makes the most sense to me.
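In GObject terms the new order amounts to (a schematic sketch; the signals[]
lookup table is the usual GObject idiom, not a quote from the patch):
```c
/* Devices leave first, then the list property notifies, and only then
 * do new devices arrive -- listeners never see an unannounced device. */
g_signal_emit(self, signals[DEVICE_REMOVED], 0, removed_device);
g_object_notify(G_OBJECT(self), "devices");
g_signal_emit(self, signals[DEVICE_ADDED], 0, added_device);
```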
- In NetworkManager, profiles can be made invisible to the user by setting
  "connection.permissions". Such profiles are hidden by NMClient's
  nm_client_get_connections() and its "connection-added"/"connection-removed"
  signals.
  Note that NMActiveConnection's nm_active_connection_get_connection()
  and NMDevice's nm_device_get_available_connections() still expose such
  hidden NMRemoteConnection instances. This behavior was preserved.
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were built with
  $ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that the RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be an issue
on Fedora 31 generating wrong ELF notes. Usually libnm is smaller, but
in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so it doesn't matter.
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With a checkout of [2], run:
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c642e && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return 1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and set up a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager can currently
handle. Of course, that is a different issue. Here we just check how
long a plain `nmcli` invocation takes on such a system.
```
do_cleanup() {
for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
nmcli connection delete uuid "$UUID"
done
for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
nmcli device delete "$DEVICE"
done
}
do_setup() {
do_cleanup
for i in {1..30}; do
nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
for j in $(seq $i 30); do
nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
done
done
systemctl restart NetworkManager.service
sleep 5
}
do_test() {
perf stat -r 50 -B nmcli 1>/dev/null
}
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) Same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) Same setup as N4), but run `nmcli monitor` and look at `ps aux` for
the RSS size.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
#include "c-list/src/c-list.h"
#include "nm-glib-aux/nm-ref-string.h"
#include "nm-glib-aux/nm-logging-fwd.h"
#include "nm-types.h"
#include "nm-object.h"
#include "nm-client.h"
/*****************************************************************************/
char *nm_utils_fixup_vendor_string(const char *desc);
char *nm_utils_fixup_product_string(const char *desc);
char *nm_utils_wincaps_to_dash(const char *caps);
gboolean nm_utils_g_param_spec_is_default(const GParamSpec *pspec);
/*****************************************************************************/
libnm: add logging NML_DBUS_LOG*() for debugging D-Bus for NMClient
Commonly, a library (like libnm) is not supposed to log anything.
Logging is not a suitable way to notify the calling application
about anything. When something of importance happens, the
application must be notified via the library's API.
However, logging can be very useful for debugging to see what is going
on. Add a logging macro that by default does nothing, but can be turned
on via an environment variable "LIBNM_CLIENT_DEBUG=debug".
Another point is that libnm relies on the server-side NetworkManager
D-Bus interface to behave in an expected manner. For example, we require a
D-Bus object "org.freedesktop.NetworkManager" to be present and certain
D-Bus interfaces to be implemented.
However, libnm should treat NetworkManager as an external and untrusted component.
That means, we cannot assert against the expectations we have. There are two
reasons for this:
- a bug in NetworkManager, dbus-daemon or elsewhere may cause such errors.
  This must not trigger an assertion failure in the client
  application, at least not unless requested.
- libnm must be forward and backward compatible with a different
  NetworkManager server version. That is only possible by ignoring
  anything that is unexpected. Asserting by default might prevent
  implementing API changes, both on the libnm and on the server side.
Note that we also don't notify the calling application via dedicated
API. On the one hand, these things *can* happen. On the other hand, what
would the calling application do about it anyway? libnm by default must
just behave gracefully and pretend all is good.
For testing, development and debugging that is however not useful. We
want the user to opt in to strict API validation. The user will be able
to do that by setting "LIBNM_CLIENT_DEBUG=warning", which causes API
violations to be logged with g_warning(). These are assertions when
running with G_DEBUG=fatal-warnings.
This is inspired by GDBus' G_DBUS_DEBUG variable.
Note that the LIBNM_CLIENT_DEBUG environment variable is undocumented, unstable
API. It's used for debugging and testing of the current libnm version at hand.
There is no guarantee that a different libnm version behaves the same way.
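For example, a test program can opt in to these strict checks before it creates
the first NMClient. This is a minimal sketch, assuming only the behavior
described above: the variable is read lazily on the first log-level check (see
nml_dbus_log_enabled_full() below), so setting it early in main() suffices.
```
#include <NetworkManager.h>

int
main(void)
{
    g_autoptr(GError) error = NULL;
    NMClient         *client;

    /* Opt in to strict API validation: violations are logged via
     * g_warning()/g_critical(), and running with G_DEBUG=fatal-warnings
     * turns them into assertion failures. */
    g_setenv("LIBNM_CLIENT_DEBUG", "warning,error", TRUE);

    client = nm_client_new(NULL, &error);
    if (!client) {
        g_printerr("failed to create NMClient: %s\n", error->message);
        return 1;
    }
    g_object_unref(client);
    return 0;
}
```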
typedef enum {
    _NML_DBUS_LOG_LEVEL_NONE = 0x00,

    _NML_DBUS_LOG_LEVEL_INITIALIZED = 0x01,

    _NML_DBUS_LOG_LEVEL_TRACE = 0x02,

    _NML_DBUS_LOG_LEVEL_DEBUG = 0x04,

    /* the difference between a warning and a critical is that it results in
     * g_warning() vs. g_critical() messages. Note that we want to use "warnings"
     * for unknown D-Bus API that could just result because we run against a
     * newer NetworkManager version (such warnings are more graceful, because
     * we want that libnm can be forward compatible against newer servers).
     * Critical warnings should be emitted when NetworkManager exposes something
     * on D-Bus that breaks the current expectations. Usually NetworkManager
     * should not break API, hence such issues are more severe. */
    _NML_DBUS_LOG_LEVEL_WARN  = 0x08,
    _NML_DBUS_LOG_LEVEL_ERROR = 0x10,

    /* ANY is only relevant for nml_dbus_log_enabled() to check whether any of the
     * options is on. */
    NML_DBUS_LOG_LEVEL_ANY = _NML_DBUS_LOG_LEVEL_INITIALIZED,

    NML_DBUS_LOG_LEVEL_TRACE = _NML_DBUS_LOG_LEVEL_TRACE,
    NML_DBUS_LOG_LEVEL_DEBUG = _NML_DBUS_LOG_LEVEL_DEBUG | NML_DBUS_LOG_LEVEL_TRACE,
    NML_DBUS_LOG_LEVEL_WARN  = _NML_DBUS_LOG_LEVEL_WARN | NML_DBUS_LOG_LEVEL_DEBUG,
    NML_DBUS_LOG_LEVEL_ERROR = _NML_DBUS_LOG_LEVEL_ERROR | NML_DBUS_LOG_LEVEL_WARN,

    NML_DBUS_LOG_STDOUT = 0x20,
} NMLDBusLogLevel;
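
/* An illustration of the encoding above: a message is logged with one of the
 * cumulative public levels, while the configured level keeps individual
 * severity bits. Assuming a configuration of "debug" sets the single bit
 * 0x04 (_NML_DBUS_LOG_LEVEL_DEBUG):
 *
 *   NML_DBUS_LOG_LEVEL_TRACE & 0x04 == 0    -> trace message suppressed
 *   NML_DBUS_LOG_LEVEL_DEBUG & 0x04 != 0    -> debug message emitted
 *   NML_DBUS_LOG_LEVEL_ERROR & 0x04 != 0    -> error message emitted
 *
 * Since every public level ORs in all less severe levels, a message of
 * higher severity always matches an enabled lower bit. */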

#undef _LOGL_TRACE
#undef _LOGL_DEBUG
#undef _LOGL_INFO
#undef _LOGL_WARN
#undef _LOGL_ERR

#define _LOGL_TRACE NML_DBUS_LOG_LEVEL_TRACE
#define _LOGL_DEBUG NML_DBUS_LOG_LEVEL_DEBUG
/* libnm's log levels have no INFO; coerce it to DEBUG so the alias refers
 * to a level that actually exists in NMLDBusLogLevel. */
#define _LOGL_INFO NML_DBUS_LOG_LEVEL_DEBUG
#define _LOGL_WARN NML_DBUS_LOG_LEVEL_WARN
#define _LOGL_ERR  NML_DBUS_LOG_LEVEL_ERROR
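/* These aliases presumably exist so that shared code written against the
 * generic _LOGL_* log-level tokens routes its messages through the
 * NML_DBUS_LOG*() machinery when built into libnm; this is an assumption
 * based on the naming, not something the header states. */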

extern volatile int _nml_dbus_log_level;

int _nml_dbus_log_level_init(void);

static inline gboolean
nml_dbus_log_enabled_full(NMLDBusLogLevel level, gboolean *out_use_stdout)
{
    int l;

    nm_assert(NM_IN_SET(level,
                        NML_DBUS_LOG_LEVEL_ANY,
                        _NML_DBUS_LOG_LEVEL_NONE,
                        NML_DBUS_LOG_LEVEL_TRACE,
                        NML_DBUS_LOG_LEVEL_DEBUG,
                        NML_DBUS_LOG_LEVEL_WARN,
                        NML_DBUS_LOG_LEVEL_ERROR));

    l = g_atomic_int_get(&_nml_dbus_log_level);
    if (G_UNLIKELY(l == 0))
        l = _nml_dbus_log_level_init();

    nm_assert(l & _NML_DBUS_LOG_LEVEL_INITIALIZED);
    NM_SET_OUT(out_use_stdout, NM_FLAGS_HAS(l, NML_DBUS_LOG_STDOUT));

    if (level == NML_DBUS_LOG_LEVEL_ANY)
        return l != _NML_DBUS_LOG_LEVEL_INITIALIZED;
    return !!(((NMLDBusLogLevel) l) & level);
}
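
/* Note: after initialization, _nml_dbus_log_level always contains the
 * _NML_DBUS_LOG_LEVEL_INITIALIZED bit. The NML_DBUS_LOG_LEVEL_ANY branch
 * above therefore returns TRUE exactly if some additional level bit is
 * set, that is, if any logging is enabled at all. */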

static inline gboolean
nml_dbus_log_enabled(NMLDBusLogLevel level)
{
    return nml_dbus_log_enabled_full(level, NULL);
}

void _nml_dbus_log(NMLDBusLogLevel level, gboolean use_stdout, const char *fmt, ...)
    _nm_printf(3, 4);

#define NML_DBUS_LOG(level, ...)                                                                \
    G_STMT_START                                                                                \
    {                                                                                           \
        gboolean _use_stdout;                                                                   \
                                                                                                \
        G_STATIC_ASSERT((level) == _NML_DBUS_LOG_LEVEL_NONE                                     \
                        || (level) == NML_DBUS_LOG_LEVEL_TRACE                                  \
                        || (level) == NML_DBUS_LOG_LEVEL_DEBUG                                  \
                        || (level) == NML_DBUS_LOG_LEVEL_WARN                                   \
                        || (level) == NML_DBUS_LOG_LEVEL_ERROR);                                \
                                                                                                \
        if (nml_dbus_log_enabled_full(level, &_use_stdout)) {                                   \
            _nml_dbus_log((level), _use_stdout, __VA_ARGS__);                                   \
        }                                                                                       \
    }                                                                                           \
    G_STMT_END

#define NML_DBUS_LOG_T(...) NML_DBUS_LOG(NML_DBUS_LOG_LEVEL_TRACE, __VA_ARGS__)
#define NML_DBUS_LOG_D(...) NML_DBUS_LOG(NML_DBUS_LOG_LEVEL_DEBUG, __VA_ARGS__)
#define NML_DBUS_LOG_W(...) NML_DBUS_LOG(NML_DBUS_LOG_LEVEL_WARN, __VA_ARGS__)
#define NML_DBUS_LOG_E(...) NML_DBUS_LOG(NML_DBUS_LOG_LEVEL_ERROR, __VA_ARGS__)
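
/* Usage sketch: the G_STATIC_ASSERT restricts the level to a compile-time
 * constant, and because nml_dbus_log_enabled_full() guards the call, the
 * format arguments are evaluated only when logging is enabled. A
 * hypothetical call site:
 *
 *   NML_DBUS_LOG_D("processed PropertiesChanged for %s", path->str);
 */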

/* _NML_NMCLIENT_LOG_LEVEL_COERCE is only for printf debugging. You can disable client logging by
 * mapping the requested log level to a different one (or disable it altogether).
 * That's useful for example if you are interested in *other* trace logging messages from
 * libnm and don't want to get flooded by NMClient's trace messages. */
#define _NML_NMCLIENT_LOG_LEVEL_COERCE(level)                                              \
    /* for example, change condition below to suppress <trace> messages from NMClient. */ \
    ((TRUE || ((level) != NML_DBUS_LOG_LEVEL_TRACE)) ? (level) : _NML_DBUS_LOG_LEVEL_NONE)

#define NML_NMCLIENT_LOG(level, self, ...)                                                       \
    NML_DBUS_LOG(_NML_NMCLIENT_LOG_LEVEL_COERCE(level),                                          \
                 "nmclient[" NM_HASH_OBFUSCATE_PTR_FMT "]: " _NM_UTILS_MACRO_FIRST(__VA_ARGS__), \
                 NM_HASH_OBFUSCATE_PTR(self) _NM_UTILS_MACRO_REST(__VA_ARGS__))

#define NML_NMCLIENT_LOG_T(self, ...) NML_NMCLIENT_LOG(NML_DBUS_LOG_LEVEL_TRACE, self, __VA_ARGS__)
#define NML_NMCLIENT_LOG_D(self, ...) NML_NMCLIENT_LOG(NML_DBUS_LOG_LEVEL_DEBUG, self, __VA_ARGS__)
#define NML_NMCLIENT_LOG_W(self, ...) NML_NMCLIENT_LOG(NML_DBUS_LOG_LEVEL_WARN, self, __VA_ARGS__)
#define NML_NMCLIENT_LOG_E(self, ...) NML_NMCLIENT_LOG(NML_DBUS_LOG_LEVEL_ERROR, self, __VA_ARGS__)
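
/* Usage sketch (hypothetical call site): the instance pointer is logged via
 * NM_HASH_OBFUSCATE_PTR(), so messages can be correlated per NMClient
 * without exposing raw pointer values:
 *
 *   NML_NMCLIENT_LOG_T(self, "name owner changed: %s", name_owner);
 */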
/*****************************************************************************/

static inline const char *
_nml_coerce_property_str_not_null(const char *str)
{
    return str ?: "";
}

static inline const char *
_nml_coerce_property_str_not_empty(const char *str)
{
    return str && str[0] ? str : NULL;
}

static inline const char *
_nml_coerce_property_object_path(NMRefString *path)
{
    if (!path)
        return NULL;
    return nm_dbus_path_not_empty(path->str);
}

static inline const char *const *
_nml_coerce_property_strv_not_null(char **strv)
{
    return ((const char *const *) strv) ?: NM_PTRARRAY_EMPTY(const char *);
}
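
/* Usage sketch: these helpers normalize values from the D-Bus cache before
 * they are handed out by public getters, e.g. (hypothetical):
 *
 *   return _nml_coerce_property_str_not_empty(priv->hw_address);
 *
 * so that an unset or empty D-Bus string is consistently exposed as NULL. */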
/*****************************************************************************/

GQuark nm_context_busy_watcher_quark(void);

void nm_context_busy_watcher_integrate_source(GMainContext *outer_context,
                                              GMainContext *inner_context,
                                              GObject      *context_busy_watcher);
/*****************************************************************************/

typedef struct {
    GCancellable *cancellable;
    GSource      *cancel_on_idle_source;
    gulong        cancelled_id;
    union {
        struct {
            GTask *task;
        } async;
        struct {
            GMainLoop *main_loop;
            GError   **error_location;
        } sync;
    } data;
    bool is_sync : 1;
} NMLInitData;
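
/* The is_sync flag discriminates the union: data.sync (main_loop,
 * error_location) is valid for instances from nml_init_data_new_sync(),
 * data.async.task for instances from nml_init_data_new_async(). */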
|
|
|
|
|
|
2020-09-28 16:03:33 +02:00
|
|
|
NMLInitData *
|
|
|
|
|
nml_init_data_new_sync(GCancellable *cancellable, GMainLoop *main_loop, GError **error_location);
|
2019-12-17 14:38:29 +01:00
|
|
|
|
2020-09-28 16:03:33 +02:00
|
|
|
NMLInitData *nml_init_data_new_async(GCancellable *cancellable, GTask *task_take);
|
2019-12-17 14:38:29 +01:00
|
|
|
|
2020-09-28 16:03:33 +02:00
|
|
|
void nml_init_data_return(NMLInitData *init_data, GError *error_take);
|
2019-12-18 17:12:45 +01:00
|
|
|
|
2020-09-28 16:03:33 +02:00
|
|
|
void nml_cleanup_context_busy_watcher_on_idle(GObject * context_busy_watcher_take,
|
|
|
|
|
GMainContext *context);
|
2020-01-20 11:09:32 +01:00
|
|
|
|
2019-12-17 14:38:29 +01:00
|
|
|
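/* Rough usage sketch (hypothetical call sites, for illustration only):
 *
 * synchronous initialization parks the caller on a main loop until the
 * result is ready:
 *
 *   init_data = nml_init_data_new_sync(cancellable, main_loop, &error);
 *   ...
 *   g_main_loop_run(main_loop);
 *
 * asynchronous initialization completes a GTask instead (ownership of the
 * task is transferred, as the "_take" parameter suffix suggests):
 *
 *   init_data = nml_init_data_new_async(cancellable, g_steal_pointer(&task));
 *   ...
 *   nml_init_data_return(init_data, g_steal_pointer(&error_take));
 */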
/*****************************************************************************/
typedef struct _NMLDBusObject     NMLDBusObject;
typedef struct _NMLDBusObjWatcher NMLDBusObjWatcher;
typedef struct _NMLDBusMetaIface  NMLDBusMetaIface;
typedef struct _NMLDBusPropertyO  NMLDBusPropertyO;
typedef struct _NMLDBusPropertyAO NMLDBusPropertyAO;

typedef enum {
    /* See comments below for NMLDBusMetaIface.interface_prio.
     *
     * Higher numbers mean the interface is more important for determining
     * the GObject type. */
    NML_DBUS_META_INTERFACE_PRIO_NONE             = 0,
    NML_DBUS_META_INTERFACE_PRIO_NMCLIENT         = 1,
    NML_DBUS_META_INTERFACE_PRIO_PARENT_TYPE      = 2,
    NML_DBUS_META_INTERFACE_PRIO_INSTANTIATE_LOW  = 3,
    NML_DBUS_META_INTERFACE_PRIO_INSTANTIATE_HIGH = 4,
} NMLDBusMetaInteracePrio;
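/* Illustrative sketch (not the actual lookup code): when one D-Bus object
 * exposes several interfaces, the GObject type is presumably chosen from the
 * meta interface with the highest interface_prio, along the lines of:
 *
 *   best = NULL;
 *   for (i = 0; i < n_ifaces; i++) {
 *       if (!best || ifaces[i]->interface_prio > best->interface_prio)
 *           best = ifaces[i];
 *   }
 *   gtype = best ? best->get_o_type_fcn() : G_TYPE_INVALID;
 */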

/*****************************************************************************/

#define NM_CLIENT_INSTANCE_FLAGS_ALL ((NMClientInstanceFlags) 0x1)
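/* Presumably the bitmask of all currently defined NMClientInstanceFlags;
 * only one flag (0x1) is defined so far, hence the plain literal. */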
typedef struct {
    GType (*get_o_type_fcn)(void);
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
    /* Ignore whether the referenced NMObject is ready or not. That means,
     * the property is always ready (and if the pointed object itself is
     * not yet ready, the property pretends to be %NULL for the moment). */
    bool is_always_ready : 1;
} NMLDBusPropertVTableO;

struct _NMLDBusPropertyO {
    NMLDBusObject *          owner_dbobj;
    NMLDBusObjWatcher *      obj_watcher;
    GObject *                nmobj;
    const NMLDBusMetaIface * meta_iface;
    guint                    dbus_property_idx;
    bool                     is_ready : 1;
    bool                     is_changed : 1;
    bool                     block_is_changed : 1;
};

gpointer nml_dbus_property_o_get_obj(NMLDBusPropertyO *pr_o);
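/* Illustration (an assumption, not the actual libnm code): a minimal sketch
 * of how a typed, object-valued property getter could wrap
 * nml_dbus_property_o_get_obj(). Only NMLDBusPropertyO and
 * nml_dbus_property_o_get_obj() come from this header; MyDevicePrivate and
 * my_device_get_active_connection() are hypothetical. */
typedef struct {
    NMLDBusPropertyO active_connection; /* hypothetical object-valued property */
} MyDevicePrivate;

static NMActiveConnection *
my_device_get_active_connection(MyDevicePrivate *priv)
{
    /* Returns the referenced object from the client-side cache, or NULL
     * while the referenced object is not yet ready (an "always ready"
     * property likewise pretends to be NULL for the moment). */
    return nml_dbus_property_o_get_obj(&priv->active_connection);
}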
gboolean nml_dbus_property_o_is_ready(const NMLDBusPropertyO *pr_o);
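/* Illustration (an assumption, not the actual libnm code): checking readiness
 * before consuming the value. Only the two declarations above are from this
 * header; the control flow is a sketch. */
static gpointer
example_peek_property(NMLDBusPropertyO *pr_o)
{
    if (!nml_dbus_property_o_is_ready(pr_o)) {
        /* The property is still initializing. Note that a property marked
         * is_always_ready reports ready immediately and instead yields NULL
         * until the pointed object becomes usable. */
        return NULL;
    }

    /* Ready: the value may still be NULL if no object is referenced. */
    return nml_dbus_property_o_get_obj(pr_o);
}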
libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectMangaerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.
- There was a lingering performance issue with large number of D-Bus
objects. The patch tries hard that the implementation scales well. Of
course, when we cache N objects that have N-to-M references to other,
we still are fundamentally O(N*M) for runtime and memory consumption (with
M being the number of references between objects). But each part should behave
efficiently and well.
- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to a NMClient is through the caller's
GMainContext. This follows glib's style and effectively allows to use NMClient
in a multi threaded scenario. This implies to stick to a main context
upon construction and ensure that callbacks are only invoked when
iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means, libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context.
- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong, signals must be emitted in the right order
and when they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that for example during the initial
GetManagedObjects() reply, NMClient receive a large amount of state at once.
But we first apply all the changes to our GObject instances before
emitting any signals. The result is that signals are always emitted in a moment
when the cache is consistent. The unavoidable downside is that when you receive
a property changed signal, possibly many other properties changed
already and more signals are about to be emitted.
- NMDeviceWifi no longer modifies the content of the cache from client side
during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what NetworkManager
service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should be all for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Also before we constructed NMPropertiesInfo and
a had large amount of code to propagate properties from NMDBus* to NMObject.
That got completely reworked, but did not fundamentally change. You still need
about the same effort to create the NMLDBusMetaIface. Not using
generated bindings did not make anything worse (which tells about the
usefulness of generated code, at least in the way it was used).
- NMLDBusMetaIface and other meta data is static and immutable. This
avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile time checks to ensure the property types matches. It's pretty hard
to misuse them because it won't compile.
- The meta data now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type, it didn't know the expected D-Bus type.
- Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Thus these are not common assertions that indicate a bug
in libnm and are thus not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings and to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. That should be fixed first.
- Note that this changes the order in which we emit "notify:devices" and
"device-added" signals. I think it makes the most sense to emit first
"device-removed", then "notify:devices", and finally "device-added"
signals.
This changes behavior for commit 52ae28f6e5bf ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
- In NetworkManager, profiles can be invisible to the user by setting
"connection.permissions". Such profiles would be hidden by NMClient's
nm_client_get_connections() and their "connection-added"/"connection-removed"
signals.
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still exposes such
hidden NMRemoteConnection instances. This behavior was preserved.
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were built with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that the RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be an issue
on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller, but
in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so it doesn't matter.
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With a checkout of [2], run
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c642e && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return 1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and set up a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager can currently
handle. Of course, that is a different issue. Here we just check how
long a plain `nmcli` invocation takes on that system.
```
do_cleanup() {
for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
nmcli connection delete uuid "$UUID"
done
for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
nmcli device delete "$DEVICE"
done
}
do_setup() {
do_cleanup
for i in {1..30}; do
nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
for j in $(seq $i 30); do
nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
done
done
systemctl restart NetworkManager.service
sleep 5
}
do_test() {
perf stat -r 50 -B nmcli 1>/dev/null
}
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) same setup as N4), but run `nmcli monitor` and look at `ps aux` for
the RSS size.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
2019-10-30 11:42:58 +01:00
|
|
|
|
2020-09-28 16:03:33 +02:00
|
|
|
gboolean nml_dbus_property_o_is_ready_fully(const NMLDBusPropertyO *pr_o);
|
libnm: hide NMActiveConnection until NMRemoteConnection is ready
Generally, libnm's NMClient cache only wants to expose NMObjects that
are fully initialized. Most objects don't require anything special,
except NMRemoteConnection, which waits for the GetSettings() call to complete.
NMObjects reference each other. For example, NMActiveConnection
references NMDevice and NMRemoteConnection. Ideally, an object would
only be ready once all the objects it references are ready too.
In practice that is not done: usually every object references
other objects, so all objects would be declared non-ready for as long
as any of them is still initializing. That does not seem
desirable. Instead, most objects (except NMRemoteConnection and now
NMActiveConnection) are considered ready and visible once their first
notification completes. If an object references another object that is
not yet ready, the reference is NULL (but the source object is visible
already). This is also done to cope with cycles where
objects reference each other. In practice, such cycles should not be
exposed by NetworkManager, but libnm should be robust against them.
This has the undesired effect that after you call AddAndActivate(),
the NMActiveConnection might already be visible while its
NMRemoteConnection isn't. That means ac.get_connection() will
initially return NULL until the remote connection becomes ready.
So add special handling: NMActiveConnection now waits for its
NMRemoteConnection to be ready before becoming ready itself (sketched below).
Fixes: ce0e898fb476 ('libnm: refactor caching of D-Bus objects in NMClient')
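A minimal sketch of this readiness rule, using hypothetical types (these are
not libnm's internal structures): most objects count as ready after their
first notification, while the active connection additionally waits for the
remote connection it references.
```
#include <glib.h>

typedef struct {
    gboolean is_ready; /* set once GetSettings() completed */
} SketchRemoteConnection;

typedef struct {
    gboolean                got_first_notify;
    SketchRemoteConnection *connection; /* NULL until resolved */
} SketchActiveConnection;

static gboolean
sketch_active_connection_is_ready (const SketchActiveConnection *ac)
{
    /* Not ready before the first property notification completed. */
    if (!ac->got_first_notify)
        return FALSE;
    /* Additionally wait for the referenced remote connection, so a
     * visible active connection never reports a NULL connection. */
    if (!ac->connection || !ac->connection->is_ready)
        return FALSE;
    return TRUE;
}
```
Until such a predicate returns TRUE, the cache would keep the
NMActiveConnection invisible, which closes the AddAndActivate() window
described above.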
2020-01-31 00:54:46 +01:00
|
|
|
|
2020-09-28 16:03:33 +02:00
|
|
|
void nml_dbus_property_o_clear(NMLDBusPropertyO *pr_o, NMClient *client);
|
libnm: refactor caching of D-Bus objects in NMClient
2019-10-30 11:42:58 +01:00
|
|
|
|
2020-09-28 16:03:33 +02:00
|
|
|
void nml_dbus_property_o_clear_many(NMLDBusPropertyO *pr_o, guint len, NMClient *self);
|
libnm: refactor caching of D-Bus objects in NMClient
2019-10-30 11:42:58 +01:00
|
|
|
|
2020-09-28 16:03:33 +02:00
|
|
|
void nml_dbus_property_o_notify_changed_many(NMLDBusPropertyO *ptr, guint len, NMClient *self);
|
libnm: refactor caching of D-Bus objects in NMClient
2019-10-30 11:42:58 +01:00
|
|
|
|
|
|
|
|
/*****************************************************************************/
|
|
|
|
|
|
|
|
|
|
typedef struct {
|
2020-09-28 16:03:33 +02:00
|
|
|
GType (*get_o_type_fcn)(void);
|
    void (*notify_changed_ao)(NMLDBusPropertyAO *pr_ao,
                              NMClient *         self,
                              NMObject *         nmobj,
                              gboolean           is_added /* or else removed */);
    gboolean (*check_nmobj_visible_fcn)(GObject *nmobj);
    /* Ignore whether the referenced NMObject is ready or not. That means,
     * the property is always ready (and if the pointed object itself is
     * not yet ready, the property pretends to be %NULL for the moment). */
    bool is_always_ready : 1;
} NMLDBusPropertVTableAO;
struct _NMLDBusPropertyAOData;
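
/* Cache state for one D-Bus property of type "ao" (an array of object
 * paths), as tracked by NMClient. The following reading of the fields is
 * inferred from their names and types, not from the implementation:
 * data_lst_head and hash hold one _NMLDBusPropertyAOData entry per
 * referenced object path; owner_dbobj, meta_iface and dbus_property_idx
 * identify which property of which cached object this is; arr caches the
 * resolved object pointers handed out to callers; changed_head, n_not_ready
 * and is_changed track pending change notifications and readiness. */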
struct _NMLDBusPropertyAO {
    CList                          data_lst_head;
    GHashTable *                   hash;
    NMLDBusObject *                owner_dbobj;
    const NMLDBusMetaIface *       meta_iface;
    GPtrArray *                    arr;
    struct _NMLDBusPropertyAOData *changed_head;
    guint                          dbus_property_idx;
    guint                          n_not_ready;
    bool                           is_changed : 1;
};
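
/* Presumably returns the property's cached array of resolved objects (the
 * "arr" member above), owned by the property and not to be modified by the
 * caller. Inferred from the declaration; check the implementation for the
 * exact ownership and laziness semantics. */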
const GPtrArray *nml_dbus_property_ao_get_objs_as_ptrarray(NMLDBusPropertyAO *pr_ao);
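
/* Whether the property is ready, i.e. -- judging by the n_not_ready counter
 * above -- whether all objects referenced by the object paths are themselves
 * initialized. Inferred from the name and signature. */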
gboolean nml_dbus_property_ao_is_ready(const NMLDBusPropertyAO *pr_ao);
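
/* Drops the property's cached state. The NMClient argument suggests that
 * clearing also has to detach the entries from the client-wide cache; this
 * is an assumption based on the signature alone. */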
void nml_dbus_property_ao_clear(NMLDBusPropertyAO *pr_ao, NMClient *client);
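
/* Apparently a convenience variant that clears "len" consecutive
 * NMLDBusPropertyAO fields starting at "pr_ao"; an assumption based on the
 * signature alone. */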
|
libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectMangaerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.
- There was a lingering performance issue with large number of D-Bus
objects. The patch tries hard that the implementation scales well. Of
course, when we cache N objects that have N-to-M references to other,
we still are fundamentally O(N*M) for runtime and memory consumption (with
M being the number of references between objects). But each part should behave
efficiently and well.
- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to a NMClient is through the caller's
GMainContext. This follows glib's style and effectively allows to use NMClient
in a multi threaded scenario. This implies to stick to a main context
upon construction and ensure that callbacks are only invoked when
iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means, libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context.
- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong, signals must be emitted in the right order
and when they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that for example during the initial
GetManagedObjects() reply, NMClient receive a large amount of state at once.
But we first apply all the changes to our GObject instances before
emitting any signals. The result is that signals are always emitted in a moment
when the cache is consistent. The unavoidable downside is that when you receive
a property changed signal, possibly many other properties changed
already and more signals are about to be emitted.
- NMDeviceWifi no longer modifies the content of the cache from client side
during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what NetworkManager
service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should be all for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Also before we constructed NMPropertiesInfo and
a had large amount of code to propagate properties from NMDBus* to NMObject.
That got completely reworked, but did not fundamentally change. You still need
about the same effort to create the NMLDBusMetaIface. Not using
generated bindings did not make anything worse (which tells about the
usefulness of generated code, at least in the way it was used).
- NMLDBusMetaIface and other meta data is static and immutable. This
avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile time checks to ensure the property types matches. It's pretty hard
to misuse them because it won't compile.
- The meta data now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type, it didn't know the expected D-Bus type.
- Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Thus these are not common assertions that indicate a bug
in libnm and are thus not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings and to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. That should be fixed first.
- Note that this changes the order in which we emit "notify:devices" and
"device-added" signals. I think it makes the most sense to emit first
"device-removed", then "notify:devices", and finally "device-added"
signals.
This changes behavior for commit 52ae28f6e5bf ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
- In NetworkManager, profiles can be invisible to the user by setting
"connection.permissions". Such profiles would be hidden by NMClient's
nm_client_get_connections() and their "connection-added"/"connection-removed"
signals.
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still exposes such
hidden NMRemoteConnection instances. This behavior was preserved.
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were build with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be a issue
on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller but
in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so it doesn't matter).
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With checkout of [2], run
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c642e && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return -1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and setup a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager currently can
handle. Of course, that is a different issue. Here we just check how
long plain `nmcli` takes on the system.
```
do_cleanup() {
for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
nmcli connection delete uuid "$UUID"
done
for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
nmcli device delete "$DEVICE"
done
}
do_setup() {
do_cleanup
for i in {1..30}; do
nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
for j in $(seq $i 30); do
nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
done
done
systemctl restart NetworkManager.service
sleep 5
}
do_test() {
perf stat -r 50 -B nmcli 1>/dev/null
}
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) Same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
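(The notable number here is the maximum resident set size: [3] peaks at
29196 kB versus 34452 kB for [2], roughly 15% less memory, and voluntary
context switches roughly halve, 1849 -> 919.)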
---
N6) Same setup as N4), but run `nmcli monitor` and check the RSS column
in `ps aux`.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
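/* Bulk helpers for NMLDBusPropertyAO ("array of objects") property state.
 * Judging by the signatures alone, they clear or emit change notifications
 * for @len consecutive entries in one call (an inference, not documented). */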
void nml_dbus_property_ao_clear_many(NMLDBusPropertyAO *pr_ao, guint len, NMClient *self);
void nml_dbus_property_ao_notify_changed_many(NMLDBusPropertyAO *ptr, guint len, NMClient *self);
/*****************************************************************************/
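/* Flags returned by property-update handlers: NOTIFY presumably requests a
 * GObject property notification for the updated property, while NONE keeps
 * the update silent (an inference from the names). */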
typedef enum {
    NML_DBUS_NOTIFY_UPDATE_PROP_FLAGS_NONE   = 0,
    NML_DBUS_NOTIFY_UPDATE_PROP_FLAGS_NOTIFY = 0x1,
} NMLDBusNotifyUpdatePropFlags;
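/* Update handler that ignores a D-Bus property value. The intended use, an
 * assumption based on the name, is as a notify hook for properties that are
 * tracked on D-Bus but not mapped to a GObject property. */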
NMLDBusNotifyUpdatePropFlags _nml_dbus_notify_update_prop_ignore(NMClient                *client,
                                                                 NMLDBusObject           *dbobj,
                                                                 const NMLDBusMetaIface  *meta_iface,
                                                                 guint                    dbus_property_idx,
                                                                 GVariant                *value);
libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectMangaerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.
- There was a lingering performance issue with large number of D-Bus
objects. The patch tries hard that the implementation scales well. Of
course, when we cache N objects that have N-to-M references to other,
we still are fundamentally O(N*M) for runtime and memory consumption (with
M being the number of references between objects). But each part should behave
efficiently and well.
- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to a NMClient is through the caller's
GMainContext. This follows glib's style and effectively allows to use NMClient
in a multi threaded scenario. This implies to stick to a main context
upon construction and ensure that callbacks are only invoked when
iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means, libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context.
- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong, signals must be emitted in the right order
and when they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that for example during the initial
GetManagedObjects() reply, NMClient receive a large amount of state at once.
But we first apply all the changes to our GObject instances before
emitting any signals. The result is that signals are always emitted in a moment
when the cache is consistent. The unavoidable downside is that when you receive
a property changed signal, possibly many other properties changed
already and more signals are about to be emitted.
- NMDeviceWifi no longer modifies the content of the cache from client side
during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what NetworkManager
service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should be all for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Also before we constructed NMPropertiesInfo and
a had large amount of code to propagate properties from NMDBus* to NMObject.
That got completely reworked, but did not fundamentally change. You still need
about the same effort to create the NMLDBusMetaIface. Not using
generated bindings did not make anything worse (which tells about the
usefulness of generated code, at least in the way it was used).
- NMLDBusMetaIface and other meta data is static and immutable. This
avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile time checks to ensure the property types matches. It's pretty hard
to misuse them because it won't compile.
- The meta data now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type, it didn't know the expected D-Bus type.
- Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Thus these are not common assertions that indicate a bug
in libnm and are thus not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings and to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. That should be fixed first.
- Note that this changes the order in which we emit "notify:devices" and
"device-added" signals. I think it makes the most sense to emit first
"device-removed", then "notify:devices", and finally "device-added"
signals.
This changes behavior for commit 52ae28f6e5bf ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
- In NetworkManager, profiles can be invisible to the user by setting
"connection.permissions". Such profiles would be hidden by NMClient's
nm_client_get_connections() and their "connection-added"/"connection-removed"
signals.
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still exposes such
hidden NMRemoteConnection instances. This behavior was preserved.
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were build with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be a issue
on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller but
in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so it doesn't matter).
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With a checkout of [2], run
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c642e && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return 1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and set up a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager currently can
handle. Of course, that is a different issue. Here we just check how
long plain `nmcli` takes on the system.
```
do_cleanup() {
for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
nmcli connection delete uuid "$UUID"
done
for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
nmcli device delete "$DEVICE"
done
}
do_setup() {
do_cleanup
for i in {1..30}; do
nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
for j in $(seq $i 30); do
nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
done
done
systemctl restart NetworkManager.service
sleep 5
}
do_test() {
perf stat -r 50 -B nmcli 1>/dev/null
}
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) Same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) Same setup as N4), but run `nmcli monitor` and look at `ps aux` for
the RSS size.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
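/* Assumption, inferred from the name and the signature below: one of the
 * per-D-Bus-type property update handlers, where the "_o" suffix suggests
 * the object-path type "o"; it receives the meta data entry identifying
 * the property and the raw GVariant received from D-Bus. */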
NMLDBusNotifyUpdatePropFlags _nml_dbus_notify_update_prop_o(NMClient *               client,
                                                            NMLDBusObject *          dbobj,
                                                            const NMLDBusMetaIface * meta_iface,
                                                            guint                    dbus_property_idx,
                                                            GVariant *               value);
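/* Assumption, based on the member names and the commit message above: one
 * entry of the static, immutable property meta data -- the D-Bus property
 * name, the expected D-Bus type, the offset of the C field to update, and
 * the index of the corresponding GObject property. */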
typedef struct {
    const char *         dbus_property_name;
    const GVariantType * dbus_type;

    guint16 prop_struct_offset;

    guint8 obj_properties_idx;
libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectMangaerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.
- There was a lingering performance issue with large number of D-Bus
objects. The patch tries hard that the implementation scales well. Of
course, when we cache N objects that have N-to-M references to other,
we still are fundamentally O(N*M) for runtime and memory consumption (with
M being the number of references between objects). But each part should behave
efficiently and well.
- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to a NMClient is through the caller's
GMainContext. This follows glib's style and effectively allows to use NMClient
in a multi threaded scenario. This implies to stick to a main context
upon construction and ensure that callbacks are only invoked when
iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means, libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context.
- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong, signals must be emitted in the right order
and when they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that for example during the initial
GetManagedObjects() reply, NMClient receive a large amount of state at once.
But we first apply all the changes to our GObject instances before
emitting any signals. The result is that signals are always emitted in a moment
when the cache is consistent. The unavoidable downside is that when you receive
a property changed signal, possibly many other properties changed
already and more signals are about to be emitted.
- NMDeviceWifi no longer modifies the content of the cache from client side
during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what NetworkManager
service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should be all for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Also before we constructed NMPropertiesInfo and
a had large amount of code to propagate properties from NMDBus* to NMObject.
That got completely reworked, but did not fundamentally change. You still need
about the same effort to create the NMLDBusMetaIface. Not using
generated bindings did not make anything worse (which tells about the
usefulness of generated code, at least in the way it was used).
- NMLDBusMetaIface and other meta data is static and immutable. This
avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile time checks to ensure the property types matches. It's pretty hard
to misuse them because it won't compile.
- The metadata now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type; it didn't know the expected D-Bus type (see the sketch below).
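  The type check itself is plain GVariant API; something along these lines
  (a sketch, not the patch's code):
  ```
  #include <glib.h>

  /* Sketch: only accept a property value whose D-Bus type matches what the
   * metadata declares, e.g. G_VARIANT_TYPE_UINT32 for a "u" property. */
  static gboolean
  accept_property_value(GVariant *value, const GVariantType *expected_type)
  {
      return g_variant_is_of_type(value, expected_type);
  }

  /* usage: accept_property_value(value, G_VARIANT_TYPE_UINT32) */
  ```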
- Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). These are therefore not ordinary assertions that indicate a bug
in libnm, and they are not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. That should be fixed first.
- Note that this changes the order in which we emit "notify:devices" and
"device-added" signals. I think it makes the most sense to emit first
"device-removed", then "notify:devices", and finally "device-added"
signals.
This changes the behavior introduced by commit 52ae28f6e5bf ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
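  For a consumer, these are the ordinary NMClient signals; an illustrative
  handler (standard libnm API):
  ```
  #include <NetworkManager.h>

  static void
  on_device_added(NMClient *client, NMDevice *device, gpointer user_data)
  {
      /* with the ordering above, "device-removed" and "notify::devices"
       * for the same change have already been emitted at this point */
      g_print("device added: %s\n", nm_device_get_iface(device));
  }

  static void
  watch_devices(NMClient *client)
  {
      g_signal_connect(client, "device-added", G_CALLBACK(on_device_added), NULL);
  }
  ```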
- In NetworkManager, profiles can be made invisible to the user by setting
"connection.permissions". Such profiles are hidden from NMClient's
nm_client_get_connections() and from the "connection-added"/"connection-removed"
signals.
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still expose such
hidden NMRemoteConnection instances. This behavior was preserved.
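  A plain enumeration like the following therefore only sees visible profiles,
  while a connection reached through an active connection may still be a
  hidden one (standard libnm API):
  ```
  #include <NetworkManager.h>

  static void
  list_visible_connections(NMClient *client)
  {
      const GPtrArray *connections = nm_client_get_connections(client);
      guint            i;

      /* hidden profiles (connection.permissions) are not in this array */
      for (i = 0; i < connections->len; i++) {
          NMRemoteConnection *remote = connections->pdata[i];

          g_print("%s\n", nm_connection_get_id(NM_CONNECTION(remote)));
      }
  }
  ```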
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were built with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that the RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be an issue
on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller, but
in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so it doesn't matter.
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With a checkout of [2], run
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c642e && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return 1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and set up a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager currently can
handle. Of course, that is a different issue. Here we just check how
long plain `nmcli` takes on the system.
```
do_cleanup() {
for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
nmcli connection delete uuid "$UUID"
done
for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
nmcli device delete "$DEVICE"
done
}
do_setup() {
do_cleanup
for i in {1..30}; do
nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
for j in $(seq $i 30); do
nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
done
done
systemctl restart NetworkManager.service
sleep 5
}
do_test() {
perf stat -r 50 -B nmcli 1>/dev/null
}
do_setup
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) same setup as N4), but run `nmcli monitor` and look at `ps aux` for
the RSS size.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
2019-10-30 11:42:58 +01:00
|
|
|
|
2020-09-28 16:03:33 +02:00
|
|
|
bool use_notify_update_prop : 1;
|
libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectMangaerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.
- There was a lingering performance issue with large number of D-Bus
objects. The patch tries hard that the implementation scales well. Of
course, when we cache N objects that have N-to-M references to other,
we still are fundamentally O(N*M) for runtime and memory consumption (with
M being the number of references between objects). But each part should behave
efficiently and well.
- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to a NMClient is through the caller's
GMainContext. This follows glib's style and effectively allows to use NMClient
in a multi threaded scenario. This implies to stick to a main context
upon construction and ensure that callbacks are only invoked when
iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means, libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context.
- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong, signals must be emitted in the right order
and when they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that for example during the initial
GetManagedObjects() reply, NMClient receive a large amount of state at once.
But we first apply all the changes to our GObject instances before
emitting any signals. The result is that signals are always emitted in a moment
when the cache is consistent. The unavoidable downside is that when you receive
a property changed signal, possibly many other properties changed
already and more signals are about to be emitted.
- NMDeviceWifi no longer modifies the content of the cache from client side
during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what NetworkManager
service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should be all for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Also before we constructed NMPropertiesInfo and
a had large amount of code to propagate properties from NMDBus* to NMObject.
That got completely reworked, but did not fundamentally change. You still need
about the same effort to create the NMLDBusMetaIface. Not using
generated bindings did not make anything worse (which tells about the
usefulness of generated code, at least in the way it was used).
- NMLDBusMetaIface and other meta data is static and immutable. This
avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile time checks to ensure the property types matches. It's pretty hard
to misuse them because it won't compile.
- The meta data now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type, it didn't know the expected D-Bus type.
- Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Thus these are not common assertions that indicate a bug
in libnm and are thus not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings and to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. That should be fixed first.
- Note that this changes the order in which we emit "notify:devices" and
"device-added" signals. I think it makes the most sense to emit first
"device-removed", then "notify:devices", and finally "device-added"
signals.
This changes behavior for commit 52ae28f6e5bf ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
- In NetworkManager, profiles can be invisible to the user by setting
"connection.permissions". Such profiles would be hidden by NMClient's
nm_client_get_connections() and their "connection-added"/"connection-removed"
signals.
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still exposes such
hidden NMRemoteConnection instances. This behavior was preserved.
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were build with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be a issue
on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller but
in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so it doesn't matter).
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With checkout of [2], run
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c642e && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return -1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and setup a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager currently can
handle. Of course, that is a different issue. Here we just check how
long plain `nmcli` takes on the system.
```
do_cleanup() {
for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
nmcli connection delete uuid "$UUID"
done
for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
nmcli device delete "$DEVICE"
done
}
do_setup() {
do_cleanup
for i in {1..30}; do
nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
for j in $(seq $i 30); do
nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
done
done
systemctl restart NetworkManager.service
sleep 5
}
do_test() {
perf stat -r 50 -B nmcli 1>/dev/null
}
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) same setup as N4), but run `nmcli monitor` and look at `ps aux` for
the RSS size.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
2019-10-30 11:42:58 +01:00
|
|
|
|
2020-09-28 16:03:33 +02:00
|
|
|
bool obj_property_no_reverse_idx : 1;
|
libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectMangaerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.
- There was a lingering performance issue with large number of D-Bus
objects. The patch tries hard that the implementation scales well. Of
course, when we cache N objects that have N-to-M references to other,
we still are fundamentally O(N*M) for runtime and memory consumption (with
M being the number of references between objects). But each part should behave
efficiently and well.
- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to a NMClient is through the caller's
GMainContext. This follows glib's style and effectively allows to use NMClient
in a multi threaded scenario. This implies to stick to a main context
upon construction and ensure that callbacks are only invoked when
iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means, libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context.
- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong, signals must be emitted in the right order
and when they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that for example during the initial
GetManagedObjects() reply, NMClient receive a large amount of state at once.
But we first apply all the changes to our GObject instances before
emitting any signals. The result is that signals are always emitted in a moment
when the cache is consistent. The unavoidable downside is that when you receive
a property changed signal, possibly many other properties changed
already and more signals are about to be emitted.
- NMDeviceWifi no longer modifies the content of the cache from client side
during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what NetworkManager
service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should be all for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Also before we constructed NMPropertiesInfo and
a had large amount of code to propagate properties from NMDBus* to NMObject.
That got completely reworked, but did not fundamentally change. You still need
about the same effort to create the NMLDBusMetaIface. Not using
generated bindings did not make anything worse (which tells about the
usefulness of generated code, at least in the way it was used).
- NMLDBusMetaIface and other meta data is static and immutable. This
avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile time checks to ensure the property types matches. It's pretty hard
to misuse them because it won't compile.
- The meta data now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type, it didn't know the expected D-Bus type.
- Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Thus these are not common assertions that indicate a bug
in libnm and are thus not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings and to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. That should be fixed first.
- Note that this changes the order in which we emit "notify:devices" and
"device-added" signals. I think it makes the most sense to emit first
"device-removed", then "notify:devices", and finally "device-added"
signals.
This changes behavior for commit 52ae28f6e5bf ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
- In NetworkManager, profiles can be invisible to the user by setting
"connection.permissions". Such profiles would be hidden by NMClient's
nm_client_get_connections() and their "connection-added"/"connection-removed"
signals.
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still exposes such
hidden NMRemoteConnection instances. This behavior was preserved.
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were build with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be a issue
on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller but
in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so it doesn't matter).
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With checkout of [2], run
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c642e && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return -1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and setup a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager currently can
handle. Of course, that is a different issue. Here we just check how
long plain `nmcli` takes on the system.
```
do_cleanup() {
for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
nmcli connection delete uuid "$UUID"
done
for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
nmcli device delete "$DEVICE"
done
}
do_setup() {
do_cleanup
for i in {1..30}; do
nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
for j in $(seq $i 30); do
nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
done
done
systemctl restart NetworkManager.service
sleep 5
}
do_test() {
perf stat -r 50 -B nmcli 1>/dev/null
}
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) same setup as N4), but run `nmcli monitor` and look at `ps aux` for
the RSS size.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
2019-10-30 11:42:58 +01:00
|
|
|
|
2020-09-28 16:03:33 +02:00
|
|
|
union {
|
|
|
|
|
union {
|
|
|
|
|
const NMLDBusPropertVTableO * property_vtable_o;
|
|
|
|
|
const NMLDBusPropertVTableAO *property_vtable_ao;
|
|
|
|
|
} extra;
|
libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectMangaerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.
- There was a lingering performance issue with large number of D-Bus
objects. The patch tries hard that the implementation scales well. Of
course, when we cache N objects that have N-to-M references to other,
we still are fundamentally O(N*M) for runtime and memory consumption (with
M being the number of references between objects). But each part should behave
efficiently and well.
- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to a NMClient is through the caller's
GMainContext. This follows glib's style and effectively allows to use NMClient
in a multi threaded scenario. This implies to stick to a main context
upon construction and ensure that callbacks are only invoked when
iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means, libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context.
- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong, signals must be emitted in the right order
and when they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that for example during the initial
GetManagedObjects() reply, NMClient receive a large amount of state at once.
But we first apply all the changes to our GObject instances before
emitting any signals. The result is that signals are always emitted in a moment
when the cache is consistent. The unavoidable downside is that when you receive
a property changed signal, possibly many other properties changed
already and more signals are about to be emitted.
- NMDeviceWifi no longer modifies the content of the cache from client side
during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what NetworkManager
service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should be all for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Also before we constructed NMPropertiesInfo and
a had large amount of code to propagate properties from NMDBus* to NMObject.
That got completely reworked, but did not fundamentally change. You still need
about the same effort to create the NMLDBusMetaIface. Not using
generated bindings did not make anything worse (which tells about the
usefulness of generated code, at least in the way it was used).
- NMLDBusMetaIface and other meta data is static and immutable. This
avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile time checks to ensure the property types matches. It's pretty hard
to misuse them because it won't compile.
- The meta data now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type, it didn't know the expected D-Bus type.
- Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Thus these are not common assertions that indicate a bug
in libnm and are thus not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings and to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. That should be fixed first.
- Note that this changes the order in which we emit "notify:devices" and
"device-added" signals. I think it makes the most sense to emit first
"device-removed", then "notify:devices", and finally "device-added"
signals.
This changes behavior for commit 52ae28f6e5bf ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
- In NetworkManager, profiles can be invisible to the user by setting
"connection.permissions". Such profiles would be hidden by NMClient's
nm_client_get_connections() and their "connection-added"/"connection-removed"
signals.
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still exposes such
hidden NMRemoteConnection instances. This behavior was preserved.
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were build with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be a issue
on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller but
in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so it doesn't matter).
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With checkout of [2], run
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c642e && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return -1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and setup a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager currently can
handle. Of course, that is a different issue. Here we just check how
long plain `nmcli` takes on the system.
```
do_cleanup() {
for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
nmcli connection delete uuid "$UUID"
done
for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
nmcli device delete "$DEVICE"
done
}
do_setup() {
do_cleanup
for i in {1..30}; do
nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
for j in $(seq $i 30); do
nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
done
done
systemctl restart NetworkManager.service
sleep 5
}
do_test() {
perf stat -r 50 -B nmcli 1>/dev/null
}
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) same setup as N4), but run `nmcli monitor` and look at `ps aux` for
the RSS size.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
2019-10-30 11:42:58 +01:00
|
|
|
|
2020-09-28 16:03:33 +02:00
|
|
|
NMLDBusNotifyUpdatePropFlags (*notify_update_prop)(NMClient * client,
|
|
|
|
|
NMLDBusObject * dbobj,
|
|
|
|
|
const NMLDBusMetaIface *meta_iface,
|
|
|
|
|
guint dbus_property_idx,
|
|
|
|
|
GVariant *value);
|
|
|
|
|
};
|
} NMLDBusMetaProperty;

#define NML_DBUS_META_PROPERTY_INIT(v_dbus_property_name, v_dbus_type, v_obj_properties_idx, ...) \
    {                                                                                             \
        .dbus_property_name = "" v_dbus_property_name "",                                         \
        .dbus_type          = NM_G_VARIANT_TYPE("" v_dbus_type ""),                               \
        .obj_properties_idx = v_obj_properties_idx, ##__VA_ARGS__                                 \
    }
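/* A minimal usage sketch (the "u" type and PROP_FLAGS index are illustrative,
 * not taken from this header). Because the macro pastes its arguments between
 * empty string literals ("" v_dbus_property_name ""), it only compiles when
 * string literals are passed; a stray char* variable is rejected:
 *
 *     static const NMLDBusMetaProperty prop =
 *         NML_DBUS_META_PROPERTY_INIT ("Flags", "u", PROP_FLAGS);
 */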
#define _NML_DBUS_META_PROPERTY_INIT_DEFAULT(v_dbus_type,          \
                                             v_exp_type,           \
                                             v_dbus_property_name, \
                                             v_obj_properties_idx, \
                                             v_container,          \
                                             v_field)              \
    NML_DBUS_META_PROPERTY_INIT(                                   \
        v_dbus_property_name,                                      \
        v_dbus_type,                                               \
        v_obj_properties_idx,                                      \
        .prop_struct_offset = NM_STRUCT_OFFSET_ENSURE_TYPE(v_exp_type, v_container, v_field))
#define NML_DBUS_META_PROPERTY_INIT_B(...) \
    _NML_DBUS_META_PROPERTY_INIT_DEFAULT("b", bool, __VA_ARGS__)

#define NML_DBUS_META_PROPERTY_INIT_Y(...) \
    _NML_DBUS_META_PROPERTY_INIT_DEFAULT("y", guint8, __VA_ARGS__)

#define NML_DBUS_META_PROPERTY_INIT_Q(...) \
    _NML_DBUS_META_PROPERTY_INIT_DEFAULT("q", guint16, __VA_ARGS__)

#define NML_DBUS_META_PROPERTY_INIT_I(...) \
    _NML_DBUS_META_PROPERTY_INIT_DEFAULT("i", gint32, __VA_ARGS__)

#define NML_DBUS_META_PROPERTY_INIT_U(...) \
    _NML_DBUS_META_PROPERTY_INIT_DEFAULT("u", guint32, __VA_ARGS__)

#define NML_DBUS_META_PROPERTY_INIT_X(...) \
    _NML_DBUS_META_PROPERTY_INIT_DEFAULT("x", gint64, __VA_ARGS__)

#define NML_DBUS_META_PROPERTY_INIT_T(...) \
    _NML_DBUS_META_PROPERTY_INIT_DEFAULT("t", guint64, __VA_ARGS__)

#define NML_DBUS_META_PROPERTY_INIT_S(...) \
    _NML_DBUS_META_PROPERTY_INIT_DEFAULT("s", char *, __VA_ARGS__)

#define NML_DBUS_META_PROPERTY_INIT_AS(...) \
    _NML_DBUS_META_PROPERTY_INIT_DEFAULT("as", char **, __VA_ARGS__)

#define NML_DBUS_META_PROPERTY_INIT_AY(...) \
    _NML_DBUS_META_PROPERTY_INIT_DEFAULT("ay", GBytes *, __VA_ARGS__)
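/* Sketch of a property table built from the typed initializers above (the
 * container struct, fields, and property indexes are hypothetical; only the
 * macro shapes are taken from this header):
 *
 *     typedef struct {
 *         guint32   version;
 *         char     *hostname;
 *         char    **capabilities;
 *     } ExamplePrivate;
 *
 *     static const NMLDBusMetaProperty example_properties[] = {
 *         NML_DBUS_META_PROPERTY_INIT_AS ("Capabilities", PROP_CAPABILITIES, ExamplePrivate, capabilities),
 *         NML_DBUS_META_PROPERTY_INIT_S ("Hostname", PROP_HOSTNAME, ExamplePrivate, hostname),
 *         NML_DBUS_META_PROPERTY_INIT_U ("Version", PROP_VERSION, ExamplePrivate, version),
 *     };
 *
 * NM_STRUCT_OFFSET_ENSURE_TYPE() makes an entry fail to compile if, say,
 * "version" were not a guint32 field of ExamplePrivate.
 */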
#define NML_DBUS_META_PROPERTY_INIT_O(v_dbus_property_name, \
                                      v_obj_properties_idx, \
                                      v_container,          \
                                      v_field)              \
    NML_DBUS_META_PROPERTY_INIT(                            \
        v_dbus_property_name,                               \
        "o",                                                \
        v_obj_properties_idx,                               \
        .prop_struct_offset = NM_STRUCT_OFFSET_ENSURE_TYPE(NMRefString *, v_container, v_field), \
        .use_notify_update_prop = TRUE,                     \
        .notify_update_prop     = _nml_dbus_notify_update_prop_o)
#define NML_DBUS_META_PROPERTY_INIT_O_PROP(v_dbus_property_name, \
                                           v_obj_properties_idx, \
                                           v_container,          \
                                           v_field,              \
                                           v_get_o_type_fcn,     \
                                           ...)                  \
    NML_DBUS_META_PROPERTY_INIT(                                 \
        v_dbus_property_name,                                    \
        "o",                                                     \
        v_obj_properties_idx,                                    \
        .prop_struct_offset =                                    \
            NM_STRUCT_OFFSET_ENSURE_TYPE(NMLDBusPropertyO, v_container, v_field), \
        .extra.property_vtable_o =                               \
            &((const NMLDBusPropertVTableO){.get_o_type_fcn = (v_get_o_type_fcn), ##__VA_ARGS__}))
#define NML_DBUS_META_PROPERTY_INIT_AO_PROP(v_dbus_property_name, \
                                            v_obj_properties_idx, \
                                            v_container,          \
                                            v_field,              \
                                            v_get_o_type_fcn,     \
                                            ...)                  \
    NML_DBUS_META_PROPERTY_INIT(                                  \
        v_dbus_property_name,                                     \
        "ao",                                                     \
        v_obj_properties_idx,                                     \
        .prop_struct_offset =                                     \
            NM_STRUCT_OFFSET_ENSURE_TYPE(NMLDBusPropertyAO, v_container, v_field), \
        .extra.property_vtable_ao = &(                            \
            (const NMLDBusPropertVTableAO){.get_o_type_fcn = (v_get_o_type_fcn), ##__VA_ARGS__}))
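/* Sketch (hypothetical fields and indexes): object-path properties are backed
 * by NMLDBusPropertyO/NMLDBusPropertyAO fields in the container plus a GType
 * getter, so the cache can resolve the "o"/"ao" paths to object instances:
 *
 *     typedef struct {
 *         NMLDBusPropertyO  active_connection;
 *         NMLDBusPropertyAO devices;
 *     } ExamplePrivate;
 *
 *     NML_DBUS_META_PROPERTY_INIT_O_PROP ("ActiveConnection",
 *                                         PROP_ACTIVE_CONNECTION,
 *                                         ExamplePrivate,
 *                                         active_connection,
 *                                         nm_active_connection_get_type),
 *     NML_DBUS_META_PROPERTY_INIT_AO_PROP ("Devices",
 *                                          PROP_DEVICES,
 *                                          ExamplePrivate,
 *                                          devices,
 *                                          nm_device_get_type),
 */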
#define NML_DBUS_META_PROPERTY_INIT_FCN(v_dbus_property_name,                 \
                                        v_obj_properties_idx,                 \
                                        v_dbus_type,                          \
                                        v_notify_update_prop,                 \
                                        ...)                                  \
    NML_DBUS_META_PROPERTY_INIT(v_dbus_property_name,                         \
                                v_dbus_type,                                  \
                                v_obj_properties_idx,                         \
                                .use_notify_update_prop = TRUE,               \
                                .notify_update_prop = (v_notify_update_prop), \
                                ##__VA_ARGS__)
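/* Sketch (hypothetical callback): _FCN() installs a custom hook whose
 * signature must match notify_update_prop above, for properties that need
 * more than the default offset-based update:
 *
 *     static NMLDBusNotifyUpdatePropFlags
 *     _notify_update_prop_example (NMClient                *client,
 *                                  NMLDBusObject           *dbobj,
 *                                  const NMLDBusMetaIface  *meta_iface,
 *                                  guint                    dbus_property_idx,
 *                                  GVariant                *value)
 *     {
 *         ... update the cached state from @value ...
 *     }
 *
 *     NML_DBUS_META_PROPERTY_INIT_FCN ("Flags",
 *                                      PROP_FLAGS,
 *                                      "u",
 *                                      _notify_update_prop_example),
 */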
#define NML_DBUS_META_PROPERTY_INIT_IGNORE(v_dbus_property_name, v_dbus_type) \
    NML_DBUS_META_PROPERTY_INIT(v_dbus_property_name,                         \
                                v_dbus_type,                                  \
                                0,                                            \
                                .use_notify_update_prop = TRUE,               \
                                .notify_update_prop = _nml_dbus_notify_update_prop_ignore)
/* "TODO" is like "IGNORE". The difference is that we don't plan to ever implement "IGNORE", but
|
|
|
|
|
* "TODO" is something we should add support for. */
|
2020-09-28 16:03:33 +02:00
|
|
|
#define NML_DBUS_META_PROPERTY_INIT_TODO(...) NML_DBUS_META_PROPERTY_INIT_IGNORE(__VA_ARGS__)
|
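/* E.g. a property we have not implemented yet could be listed as
 * NML_DBUS_META_PROPERTY_INIT_TODO ("SomeProperty", "u") (name and type
 * illustrative); it behaves like IGNORE but flags the gap as pending work. */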
libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectMangaerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.
- There was a lingering performance issue with large number of D-Bus
objects. The patch tries hard that the implementation scales well. Of
course, when we cache N objects that have N-to-M references to other,
we still are fundamentally O(N*M) for runtime and memory consumption (with
M being the number of references between objects). But each part should behave
efficiently and well.
- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to a NMClient is through the caller's
GMainContext. This follows glib's style and effectively allows to use NMClient
in a multi threaded scenario. This implies to stick to a main context
upon construction and ensure that callbacks are only invoked when
iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means, libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context.
- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong, signals must be emitted in the right order
and when they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that for example during the initial
GetManagedObjects() reply, NMClient receive a large amount of state at once.
But we first apply all the changes to our GObject instances before
emitting any signals. The result is that signals are always emitted in a moment
when the cache is consistent. The unavoidable downside is that when you receive
a property changed signal, possibly many other properties changed
already and more signals are about to be emitted.
- NMDeviceWifi no longer modifies the content of the cache from client side
during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what NetworkManager
service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should be all for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Also before we constructed NMPropertiesInfo and
a had large amount of code to propagate properties from NMDBus* to NMObject.
That got completely reworked, but did not fundamentally change. You still need
about the same effort to create the NMLDBusMetaIface. Not using
generated bindings did not make anything worse (which tells about the
usefulness of generated code, at least in the way it was used).
- NMLDBusMetaIface and other meta data is static and immutable. This
avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile time checks to ensure the property types matches. It's pretty hard
to misuse them because it won't compile.
- The meta data now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type, it didn't know the expected D-Bus type.
- Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Thus these are not common assertions that indicate a bug
in libnm and are thus not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings and to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. That should be fixed first.
- Note that this changes the order in which we emit "notify:devices" and
"device-added" signals. I think it makes the most sense to emit first
"device-removed", then "notify:devices", and finally "device-added"
signals.
This changes behavior for commit 52ae28f6e5bf ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
- In NetworkManager, profiles can be invisible to the user by setting
"connection.permissions". Such profiles would be hidden by NMClient's
nm_client_get_connections() and their "connection-added"/"connection-removed"
signals.
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still exposes such
hidden NMRemoteConnection instances. This behavior was preserved.
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were build with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be a issue
on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller but
in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so it doesn't matter).
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With checkout of [2], run
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c642e && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return -1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and setup a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager currently can
handle. Of course, that is a different issue. Here we just check how
long plain `nmcli` takes on the system.
```
do_cleanup() {
    for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
        nmcli connection delete uuid "$UUID"
    done
    for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
        nmcli device delete "$DEVICE"
    done
}
do_setup() {
    do_cleanup
    for i in {1..30}; do
        nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
        for j in $(seq $i 30); do
            nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
        done
    done
    systemctl restart NetworkManager.service
    sleep 5
}
do_test() {
    perf stat -r 50 -B nmcli 1>/dev/null
}
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) Same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) Same setup as N4), but run `nmcli monitor` and check `ps aux` for
the RSS size.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
struct _NMLDBusMetaIface {
    const char *dbus_iface_name;

    GType (*get_type_fcn)(void);
    /* Usually there is a one-to-one correspondence between the properties
     * on D-Bus (dbus_properties) and the GObject properties (obj_properties).
     *
     * With:
     *   meta_iface->obj_properties[o_idx]    (o_idx < n_obj_properties)
     *   &meta_iface->dbus_properties[d_idx]  (d_idx < n_dbus_properties)
     * it follows that
     *   assert (meta_iface->obj_properties_reverse_idx[o_idx] == d_idx)
     *   assert (meta_iface->dbus_properties[d_idx].obj_properties_idx == o_idx)
     * if (and only if) two properties correspond.
     */
    const GParamSpec *const *obj_properties;

    const NMLDBusMetaProperty *dbus_properties;

    const guint8 *obj_properties_reverse_idx;
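    /* Illustrative sketch (for exposition only, not part of this header):
     * given the invariants documented above, a GObject property index o_idx
     * maps back to its D-Bus property via
     *
     *   guint8 d_idx = meta_iface->obj_properties_reverse_idx[o_idx];
     *   const NMLDBusMetaProperty *dbus_prop = &meta_iface->dbus_properties[d_idx];
     *
     * and, if (and only if) the two properties correspond,
     * dbus_prop->obj_properties_idx is again o_idx.
     */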
    guint8 n_dbus_properties;

    guint8 n_obj_properties;
    /* The offsets "prop_struct_offset" in NMLDBusMetaProperty are relative to some
     * base struct. If "base_struct_offset" is 0, then the base struct is the GObject
     * pointer itself.
     * If it is non-zero, then at that location inside the object we expect a pointer
     * to the base struct, and we first need to find the base pointer via
     *   *((gpointer *) ((char *) nmobj + meta_iface->base_struct_offset)).
     *
     * This covers NMDeviceBridge._priv vs. NMDevice._priv. In the second case,
     * _priv is a pointer that we first need to follow.
     */
    guint8 base_struct_offset;
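    /* A minimal sketch of the lookup described above (the helper name and the
     * "nmobj" variable are assumptions for exposition, not part of this header):
     *
     *   static gpointer
     *   _get_base_struct (gpointer nmobj, const NMLDBusMetaIface *meta_iface)
     *   {
     *       if (meta_iface->base_struct_offset == 0)
     *           return nmobj;
     *       return *((gpointer *) (((char *) nmobj) + meta_iface->base_struct_offset));
     *   }
     *
     * The per-property offsets (prop_struct_offset) are then applied relative
     * to the returned base pointer.
     */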
libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectMangaerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.
- There was a lingering performance issue with large number of D-Bus
objects. The patch tries hard that the implementation scales well. Of
course, when we cache N objects that have N-to-M references to other,
we still are fundamentally O(N*M) for runtime and memory consumption (with
M being the number of references between objects). But each part should behave
efficiently and well.
- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to a NMClient is through the caller's
GMainContext. This follows glib's style and effectively allows to use NMClient
in a multi threaded scenario. This implies to stick to a main context
upon construction and ensure that callbacks are only invoked when
iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means, libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context.
- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong, signals must be emitted in the right order
and when they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that for example during the initial
GetManagedObjects() reply, NMClient receive a large amount of state at once.
But we first apply all the changes to our GObject instances before
emitting any signals. The result is that signals are always emitted in a moment
when the cache is consistent. The unavoidable downside is that when you receive
a property changed signal, possibly many other properties changed
already and more signals are about to be emitted.
- NMDeviceWifi no longer modifies the content of the cache from client side
during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what NetworkManager
service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should be all for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Also before we constructed NMPropertiesInfo and
a had large amount of code to propagate properties from NMDBus* to NMObject.
That got completely reworked, but did not fundamentally change. You still need
about the same effort to create the NMLDBusMetaIface. Not using
generated bindings did not make anything worse (which tells about the
usefulness of generated code, at least in the way it was used).
- NMLDBusMetaIface and other meta data is static and immutable. This
avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile time checks to ensure the property types matches. It's pretty hard
to misuse them because it won't compile.
- The meta data now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type, it didn't know the expected D-Bus type.
- Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Thus these are not ordinary assertions that indicate a bug
in libnm, and they are not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. Those should be fixed first.
- Note that this changes the order in which we emit "notify::devices" and
"device-added" signals. I think it makes the most sense to emit
"device-removed" first, then "notify::devices", and finally "device-added"
(see the sketch below).
This changes behavior relative to commit 52ae28f6e5bf ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
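A consumer-side sketch of the resulting order (on_device_removed(),
on_devices_changed() and on_device_added() are hypothetical callbacks,
assumed to be defined elsewhere):
```
static void
watch_device_changes(NMClient *client)
{
    /* For a single change set, these handlers now run in exactly this
     * order: on_device_removed(), on_devices_changed(), on_device_added(). */
    g_signal_connect(client, "device-removed", G_CALLBACK(on_device_removed), NULL);
    g_signal_connect(client, "notify::devices", G_CALLBACK(on_devices_changed), NULL);
    g_signal_connect(client, "device-added", G_CALLBACK(on_device_added), NULL);
}
```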
- In NetworkManager, profiles can be made invisible to the user by setting
"connection.permissions". Such profiles are hidden from NMClient's
nm_client_get_connections() and from the "connection-added"/"connection-removed"
signals.
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still expose such
hidden NMRemoteConnection instances. This behavior was preserved.
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were built with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that the RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be an issue
on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller, but
in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so it doesn't matter.
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With checkout of [2], run
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c642e && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return 1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and set up a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager currently can
handle. Of course, that is a different issue. Here we just check how
long plain `nmcli` takes on the system.
```
do_cleanup() {
for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
nmcli connection delete uuid "$UUID"
done
for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
nmcli device delete "$DEVICE"
done
}
do_setup() {
do_cleanup
for i in {1..30}; do
nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
for j in $(seq $i 30); do
nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
done
done
systemctl restart NetworkManager.service
sleep 5
}
do_test() {
perf stat -r 50 -B nmcli 1>/dev/null
}
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) Same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) Same setup as N4), but run `nmcli monitor` and look at `ps aux` for
the RSS size.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
    /* We create the appropriate NMObject GType based on the D-Bus interfaces that
     * are present. For example, if we see a "org.freedesktop.NetworkManager.Device.Bridge"
     * interface, we create a NMDeviceBridge. Basically, if it looks like a certain
     * object (based on the D-Bus interface), we assume it is.
     *
     * Some interfaces are purely additional ("org.freedesktop.NetworkManager.Device.Statistics")
     * and don't determine the NMObject type (%NML_DBUS_META_INTERFACE_PRIO_NONE).
     *
     * Some interfaces are of a parent type ("org.freedesktop.NetworkManager.Device" for
     * NMDevice), and don't determine the type either (%NML_DBUS_META_INTERFACE_PRIO_PARENT_TYPE).
     *
     * Some interfaces ("org.freedesktop.NetworkManager.AgentManager") belong to NMClient
     * itself. Those have priority %NML_DBUS_META_INTERFACE_PRIO_NMCLIENT.
     *
     * In most cases, each D-Bus object is expected to have exactly one D-Bus interface
     * that determines the type. While theoretically an object
     * "/org/freedesktop/NetworkManager/Devices/3" could have the interfaces "org.freedesktop.NetworkManager.Device.Bridge"
     * and "org.freedesktop.NetworkManager.Device.Bond" at the same time, in practice it doesn't.
     * Note that we also assume that once a D-Bus object gets a NMObject, the type cannot change (*).
     * NetworkManager's API does not add/remove interfaces after exporting the object the
     * first time, so in practice each D-Bus object is expected to have one suitable D-Bus
     * interface (its only determining interface, which doesn't change). Those interfaces have
     * priority %NML_DBUS_META_INTERFACE_PRIO_INSTANTIATE_HIGH.
     *
     * (*) Note that nothing fatal would happen if a faulty NetworkManager violated that
     * assumption. Things simply would not work correctly, because the D-Bus interface we
     * find would be unexpected and wrong.
     *
     * One exception is "org.freedesktop.NetworkManager.Connection.Active". This can either
     * be a NMActiveConnection or a NMVpnConnection. Hence, this interface has priority
     * %NML_DBUS_META_INTERFACE_PRIO_INSTANTIATE_LOW, and depending on whether there also is
     * a "org.freedesktop.NetworkManager.VPN.Connection" interface (with high priority), we create
     * one or the other type.
     */
    NMLDBusMetaInteracePrio interface_prio : 3;
};

#define NML_DBUS_META_IFACE_OBJ_PROPERTIES() \
    .obj_properties = (const GParamSpec *const *) (obj_properties), \
    .n_obj_properties = _PROPERTY_ENUMS_LAST, \
    .obj_properties_reverse_idx = ((guint8[_PROPERTY_ENUMS_LAST]){})

#define NML_DBUS_META_IFACE_DBUS_PROPERTIES(...) \
    .dbus_properties = ((const NMLDBusMetaProperty[]){__VA_ARGS__}), \
    .n_dbus_properties = \
        (sizeof((const NMLDBusMetaProperty[]){__VA_ARGS__}) / sizeof(NMLDBusMetaProperty))

#define NML_DBUS_META_IFACE_INIT(v_dbus_iface_name, v_get_type_fcn, v_interface_prio, ...) \
    { \
        .dbus_iface_name = "" v_dbus_iface_name "", .get_type_fcn = v_get_type_fcn, \
        .interface_prio = v_interface_prio, ##__VA_ARGS__ \
    }

#define NML_DBUS_META_IFACE_INIT_PROP(v_dbus_iface_name, v_get_type_fcn, v_interface_prio, ...) \
    NML_DBUS_META_IFACE_INIT(v_dbus_iface_name, \
                             v_get_type_fcn, \
                             v_interface_prio, \
                             NML_DBUS_META_IFACE_OBJ_PROPERTIES(), \
                             ##__VA_ARGS__)
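
/* For illustration only: a hypothetical meta interface (the
 * "org.freedesktop.NetworkManager.Device.Foo" interface and
 * nm_device_foo_get_type() are made up, not part of libnm) whose presence
 * alone determines the NMObject type would be registered roughly as
 *
 *   const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_foo =
 *       NML_DBUS_META_IFACE_INIT("org.freedesktop.NetworkManager.Device.Foo",
 *                                nm_device_foo_get_type,
 *                                NML_DBUS_META_INTERFACE_PRIO_INSTANTIATE_HIGH);
 */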
extern const NMLDBusMetaIface *const _nml_dbus_meta_ifaces[44];
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_accesspoint;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_agentmanager;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_checkpoint;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_connection_active;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_adsl;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_bluetooth;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_bond;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_bridge;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_dummy;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_generic;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_infiniband;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_iptunnel;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_lowpan;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_macsec;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_macvlan;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_modem;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_olpcmesh;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_ovsbridge;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_ovsinterface;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_ovsport;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_ppp;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_statistics;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_team;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_tun;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_veth;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_vlan;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_vrf;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_vxlan;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_wifip2p;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_wired;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_wireguard;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_wireless;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_device_wpan;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_dhcp4config;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_dhcp6config;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_dnsmanager;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_ip4config;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_ip6config;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_settings;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_settings_connection;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_vpn_connection;
extern const NMLDBusMetaIface _nml_dbus_meta_iface_nm_wifip2ppeer;

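/*
 * Each _nml_dbus_meta_iface_* table above describes one D-Bus interface
 * exposed by NetworkManager; the suffix appears to mirror the interface
 * name (e.g. _nml_dbus_meta_iface_nm_device_wireguard for
 * "org.freedesktop.NetworkManager.Device.WireGuard"). Per the refactoring
 * commit, this meta data is static and immutable. The helpers declared
 * below look up an interface, and one of its properties, by their D-Bus
 * names.
 */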
const NMLDBusMetaIface *nml_dbus_meta_iface_get(const char *dbus_iface_name);

const NMLDBusMetaProperty *nml_dbus_meta_property_get(const NMLDBusMetaIface *meta_iface,
                                                      const char *            dbus_property_name,
                                                      guint *                 out_idx);

void _nml_dbus_meta_class_init_with_properties_impl(GObjectClass *                 object_class,
                                                    const NMLDBusMetaIface *const *meta_iface);
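
/*
 * Minimal usage sketch (illustrative only, not part of the original
 * header): resolve the static meta data for a D-Bus interface by name,
 * then look up one of its properties. It assumes both lookups return
 * NULL on a miss and that out_idx receives the property's index in the
 * table; "org.freedesktop.NetworkManager" and "Version" are example
 * names, not guaranteed entries.
 */
static void
_example_meta_lookup(void)
{
    const NMLDBusMetaIface *   meta_iface;
    const NMLDBusMetaProperty *meta_property;
    guint                      idx = 0;

    /* Look up the meta data for the main NetworkManager interface. */
    meta_iface = nml_dbus_meta_iface_get("org.freedesktop.NetworkManager");
    if (!meta_iface)
        return;

    /* Look up one property of that interface by its D-Bus name. */
    meta_property = nml_dbus_meta_property_get(meta_iface, "Version", &idx);
    if (meta_property)
        g_print("\"Version\" found at property index %u\n", idx);
}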
|
libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectMangaerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.
- There was a lingering performance issue with large number of D-Bus
objects. The patch tries hard that the implementation scales well. Of
course, when we cache N objects that have N-to-M references to other,
we still are fundamentally O(N*M) for runtime and memory consumption (with
M being the number of references between objects). But each part should behave
efficiently and well.
- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to a NMClient is through the caller's
GMainContext. This follows glib's style and effectively allows to use NMClient
in a multi threaded scenario. This implies to stick to a main context
upon construction and ensure that callbacks are only invoked when
iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means, libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context.
- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong, signals must be emitted in the right order
and when they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that for example during the initial
GetManagedObjects() reply, NMClient receive a large amount of state at once.
But we first apply all the changes to our GObject instances before
emitting any signals. The result is that signals are always emitted in a moment
when the cache is consistent. The unavoidable downside is that when you receive
a property changed signal, possibly many other properties changed
already and more signals are about to be emitted.
- NMDeviceWifi no longer modifies the content of the cache from client side
during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what NetworkManager
service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should be all for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Also before we constructed NMPropertiesInfo and
a had large amount of code to propagate properties from NMDBus* to NMObject.
That got completely reworked, but did not fundamentally change. You still need
about the same effort to create the NMLDBusMetaIface. Not using
generated bindings did not make anything worse (which tells about the
usefulness of generated code, at least in the way it was used).
- NMLDBusMetaIface and other meta data is static and immutable. This
avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile time checks to ensure the property types matches. It's pretty hard
to misuse them because it won't compile.
- The meta data now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type, it didn't know the expected D-Bus type.
- Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Thus these are not common assertions that indicate a bug
in libnm and are thus not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings and to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. That should be fixed first.
- Note that this changes the order in which we emit "notify:devices" and
"device-added" signals. I think it makes the most sense to emit first
"device-removed", then "notify:devices", and finally "device-added"
signals.
This changes behavior for commit 52ae28f6e5bf ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
- In NetworkManager, profiles can be invisible to the user by setting
"connection.permissions". Such profiles would be hidden by NMClient's
nm_client_get_connections() and their "connection-added"/"connection-removed"
signals.
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still exposes such
hidden NMRemoteConnection instances. This behavior was preserved.
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were built with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that the RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be an issue
on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller, but
in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so it doesn't matter.
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With a checkout of [2], run:
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c642e && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return 1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and set up a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager can currently
handle. Of course, that is a different issue. Here we just check how
long a plain `nmcli` invocation takes on the system.
```
do_cleanup() {
for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
nmcli connection delete uuid "$UUID"
done
for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
nmcli device delete "$DEVICE"
done
}
do_setup() {
do_cleanup
for i in {1..30}; do
nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
for j in $(seq $i 30); do
nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
done
done
systemctl restart NetworkManager.service
sleep 5
}
do_test() {
perf stat -r 50 -B nmcli 1>/dev/null
}
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) Same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) Same setup as N4), but run `nmcli monitor` and look at `ps aux` for
the RSS size.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
#define _nml_dbus_meta_class_init_with_properties(object_class, ...) \
    _nml_dbus_meta_class_init_with_properties_impl( \
        (object_class), ((const NMLDBusMetaIface *const[]){__VA_ARGS__, NULL}))
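As a hedged usage sketch (the type and variable names below are assumed, not
taken from the source), a class_init implementation would pass its static
meta-interface(s) to this macro, which collects them into the NULL-terminated
array seen above:
```
/* Hypothetical usage of the macro above; "ExampleClass" and
 * "_example_meta_iface" are assumed identifiers for illustration. */
extern const NMLDBusMetaIface _example_meta_iface;

static void
example_class_init (ExampleClass *klass)
{
    GObjectClass *object_class = G_OBJECT_CLASS (klass);

    _nml_dbus_meta_class_init_with_properties (object_class,
                                               &_example_meta_iface);
}
```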
/*****************************************************************************/

typedef enum {
    NML_DBUS_OBJ_STATE_UNLINKED = 0,
    NML_DBUS_OBJ_STATE_WATCHED_ONLY,
    NML_DBUS_OBJ_STATE_ON_DBUS,
    NML_DBUS_OBJ_STATE_WITH_NMOBJ_NOT_READY,
    NML_DBUS_OBJ_STATE_WITH_NMOBJ_READY,
} NMLDBusObjState;

typedef enum {
    NML_DBUS_OBJ_CHANGED_TYPE_NONE  = 0,
    NML_DBUS_OBJ_CHANGED_TYPE_DBUS  = (1LL << 0),
    NML_DBUS_OBJ_CHANGED_TYPE_NMOBJ = (1LL << 1),
} NMLDBusObjChangedType;

struct _NMLDBusObject {
    NMRefString *dbus_path;
libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectMangaerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.
- There was a lingering performance issue with large number of D-Bus
objects. The patch tries hard that the implementation scales well. Of
course, when we cache N objects that have N-to-M references to other,
we still are fundamentally O(N*M) for runtime and memory consumption (with
M being the number of references between objects). But each part should behave
efficiently and well.
- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to a NMClient is through the caller's
GMainContext. This follows glib's style and effectively allows to use NMClient
in a multi threaded scenario. This implies to stick to a main context
upon construction and ensure that callbacks are only invoked when
iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means, libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context.
- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong, signals must be emitted in the right order
and when they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that for example during the initial
GetManagedObjects() reply, NMClient receive a large amount of state at once.
But we first apply all the changes to our GObject instances before
emitting any signals. The result is that signals are always emitted in a moment
when the cache is consistent. The unavoidable downside is that when you receive
a property changed signal, possibly many other properties changed
already and more signals are about to be emitted.
- NMDeviceWifi no longer modifies the content of the cache from client side
during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what NetworkManager
service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should be all for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Previously, we constructed NMPropertiesInfo and
had a large amount of code to propagate properties from NMDBus* to NMObject.
That got completely reworked, but did not fundamentally change: you still need
about the same effort to create the NMLDBusMetaIface. Not using
generated bindings did not make anything worse (which says something about the
usefulness of generated code, at least in the way it was used here).
- NMLDBusMetaIface and other metadata are static and immutable. This
avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile-time checks to ensure that the property types match. It's pretty hard
to misuse them, because misuse won't compile.
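A hypothetical sketch of such a compile-time check (the real
NML_DBUS_META_PROPERTY_INIT_U() is more elaborate): the negative array size
makes the initializer fail to compile when the annotated field does not
have the size of the expected C type.
```
#include <glib.h>

typedef struct {
    const char *dbus_property_name;
    const char *dbus_type;
    guint16     prop_struct_offset;
} MetaProperty;

#define META_PROPERTY_INIT_U(name, container, field)                 \
    {                                                                \
        .dbus_property_name = (name),                                \
        .dbus_type          = "u",                                   \
        .prop_struct_offset = G_STRUCT_OFFSET (container, field)     \
            + (sizeof (char[sizeof (((container *) NULL)->field)    \
                            == sizeof (guint32) ? 1 : -1]) - 1),     \
    }

typedef struct {
    guint32 version_id;
} ExampleData;

/* Compiles; changing version_id to guint64 would break the build. */
static const MetaProperty example_props[] = {
    META_PROPERTY_INIT_U ("VersionId", ExampleData, version_id),
};
```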
- The metadata now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus; I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type; it didn't know the expected D-Bus type.
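For illustration, a minimal hypothetical demarshal step that verifies the
D-Bus type before accepting a value, rather than trusting the GObject
property type alone:
```
#include <glib.h>

static gboolean
demarshal_u (GVariant *value, guint32 *out_val)
{
    /* Reject values whose D-Bus type does not match the expected "u". */
    if (!g_variant_is_of_type (value, G_VARIANT_TYPE_UINT32))
        return FALSE;
    *out_val = g_variant_get_uint32 (value);
    return TRUE;
}
```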
- Previously, GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). They are therefore not ordinary assertions that indicate a bug
in libnm, and they are not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. Those should be fixed first.
- Note that this changes the order in which we emit "notify::devices" and
"device-added" signals. I think it makes the most sense to emit
"device-removed" first, then "notify::devices", and finally "device-added".
This changes behavior relative to commit 52ae28f6e5bf ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
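A hypothetical sketch of observing that order from a client: by the time
"device-added" runs, "notify::devices" was already emitted and the device
is already in the cache.
```
#include <NetworkManager.h>

static void
on_device_added (NMClient *client, NMDevice *device, gpointer user_data)
{
    /* The cache already contains the new device at this point. */
    g_print ("added: %s (now %u devices)\n",
             nm_device_get_iface (device),
             nm_client_get_devices (client)->len);
}

static void
watch_devices (NMClient *client)
{
    g_signal_connect (client, "device-added",
                      G_CALLBACK (on_device_added), NULL);
}
```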
- In NetworkManager, profiles can be made invisible to the user by setting
"connection.permissions". Such profiles are hidden from NMClient's
nm_client_get_connections() and from its "connection-added"/"connection-removed"
signals.
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still expose such
hidden NMRemoteConnection instances. This behavior was preserved.
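As a usage sketch (hypothetical helper): profiles hidden via
"connection.permissions" do not appear in this list, although they may
still be reachable through nm_active_connection_get_connection().
```
#include <NetworkManager.h>

static void
print_visible_connections (NMClient *client)
{
    const GPtrArray *connections = nm_client_get_connections (client);
    guint            i;

    for (i = 0; i < connections->len; i++) {
        NMRemoteConnection *remote = connections->pdata[i];

        g_print ("%s\n", nm_connection_get_id (NM_CONNECTION (remote)));
    }
}
```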
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were built with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that the RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be an issue
on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller, but
in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so it doesn't matter.
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With a checkout of [2], run
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c642e && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return -1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and set up a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager can currently
handle. Of course, that is a different issue. Here we just check how
long a plain `nmcli` invocation takes on the system.
```
do_cleanup() {
for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
nmcli connection delete uuid "$UUID"
done
for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
nmcli device delete "$DEVICE"
done
}
do_setup() {
do_cleanup
for i in {1..30}; do
nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
for j in $(seq $i 30); do
nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
done
done
systemctl restart NetworkManager.service
sleep 5
}
do_test() {
perf stat -r 50 -B nmcli 1>/dev/null
}
do_setup
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) Same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) Same setup as N4), but run `nmcli monitor` and look at `ps aux` for
the RSS size.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
/* While the object is tracked by NMClient, it is linked with this list.
 * The lists are partitioned based on the NMLDBusObjState. */
CList dbus_objects_lst;

/* The list of D-Bus interface NMLDBusObjIfaceData.
 *
 * Some may be about to be removed (iface_removed) or
 * unknown (!dbus_iface_is_wellknown). */
CList iface_lst_head;

/* The list of registered NMLDBusObjWatcher. */
CList watcher_lst_head;

/* When an object changes (e.g. because of new information on D-Bus), we often
 * don't process the changes right away, but enqueue the object in a changed
 * list. This list goes together with obj_changed_type property below, which
 * tracks what changed. */
CList obj_changed_lst;
a had large amount of code to propagate properties from NMDBus* to NMObject.
That got completely reworked, but did not fundamentally change. You still need
about the same effort to create the NMLDBusMetaIface. Not using
generated bindings did not make anything worse (which tells about the
usefulness of generated code, at least in the way it was used).
- NMLDBusMetaIface and other meta data is static and immutable. This
avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile time checks to ensure the property types matches. It's pretty hard
to misuse them because it won't compile.
- The meta data now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type, it didn't know the expected D-Bus type.
- Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Thus these are not common assertions that indicate a bug
in libnm and are thus not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings and to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. That should be fixed first.
- Note that this changes the order in which we emit "notify:devices" and
"device-added" signals. I think it makes the most sense to emit first
"device-removed", then "notify:devices", and finally "device-added"
signals.
This changes behavior for commit 52ae28f6e5bf ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
- In NetworkManager, profiles can be invisible to the user by setting
"connection.permissions". Such profiles would be hidden by NMClient's
nm_client_get_connections() and their "connection-added"/"connection-removed"
signals.
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still exposes such
hidden NMRemoteConnection instances. This behavior was preserved.
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were build with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be a issue
on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller but
in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so it doesn't matter).
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With checkout of [2], run
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c642e && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return -1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and setup a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager currently can
handle. Of course, that is a different issue. Here we just check how
long plain `nmcli` takes on the system.
```
do_cleanup() {
for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
nmcli connection delete uuid "$UUID"
done
for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
nmcli device delete "$DEVICE"
done
}
do_setup() {
do_cleanup
for i in {1..30}; do
nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
for j in $(seq $i 30); do
nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
done
done
systemctl restart NetworkManager.service
sleep 5
}
do_test() {
perf stat -r 50 -B nmcli 1>/dev/null
}
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) same setup as N4), but run `nmcli monitor` and look at `ps aux` for
the RSS size.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
2019-10-30 11:42:58 +01:00
|
|
|
|
2020-09-28 16:03:33 +02:00
|
|
|
GObject *nmobj;
int ref_count;
NMLDBusObjState obj_state : 4;
|
libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectMangaerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.
- There was a lingering performance issue with large number of D-Bus
objects. The patch tries hard that the implementation scales well. Of
course, when we cache N objects that have N-to-M references to other,
we still are fundamentally O(N*M) for runtime and memory consumption (with
M being the number of references between objects). But each part should behave
efficiently and well.
- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to a NMClient is through the caller's
GMainContext. This follows glib's style and effectively allows to use NMClient
in a multi threaded scenario. This implies to stick to a main context
upon construction and ensure that callbacks are only invoked when
iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means, libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context.
- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong, signals must be emitted in the right order
and when they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that for example during the initial
GetManagedObjects() reply, NMClient receive a large amount of state at once.
But we first apply all the changes to our GObject instances before
emitting any signals. The result is that signals are always emitted in a moment
when the cache is consistent. The unavoidable downside is that when you receive
a property changed signal, possibly many other properties changed
already and more signals are about to be emitted.
- NMDeviceWifi no longer modifies the content of the cache from client side
during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what NetworkManager
service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should be all for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Also before we constructed NMPropertiesInfo and
a had large amount of code to propagate properties from NMDBus* to NMObject.
That got completely reworked, but did not fundamentally change. You still need
about the same effort to create the NMLDBusMetaIface. Not using
generated bindings did not make anything worse (which tells about the
usefulness of generated code, at least in the way it was used).
- NMLDBusMetaIface and other meta data is static and immutable. This
avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile time checks to ensure the property types matches. It's pretty hard
to misuse them because it won't compile.
- The meta data now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type, it didn't know the expected D-Bus type.
- Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Thus these are not common assertions that indicate a bug
in libnm and are thus not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings and to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. That should be fixed first.
- Note that this changes the order in which we emit "notify:devices" and
"device-added" signals. I think it makes the most sense to emit first
"device-removed", then "notify:devices", and finally "device-added"
signals.
This changes behavior for commit 52ae28f6e5bf ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
- In NetworkManager, profiles can be invisible to the user by setting
"connection.permissions". Such profiles would be hidden by NMClient's
nm_client_get_connections() and their "connection-added"/"connection-removed"
signals.
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still exposes such
hidden NMRemoteConnection instances. This behavior was preserved.
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were build with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be a issue
on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller but
in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so it doesn't matter).
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With checkout of [2], run
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c642e && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return -1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and setup a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager currently can
handle. Of course, that is a different issue. Here we just check how
long plain `nmcli` takes on the system.
```
do_cleanup() {
for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
nmcli connection delete uuid "$UUID"
done
for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
nmcli device delete "$DEVICE"
done
}
do_setup() {
do_cleanup
for i in {1..30}; do
nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
for j in $(seq $i 30); do
nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
done
done
systemctl restart NetworkManager.service
sleep 5
}
do_test() {
perf stat -r 50 -B nmcli 1>/dev/null
}
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) same setup as N4), but run `nmcli monitor` and look at `ps aux` for
the RSS size.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
    /* presumably 3 bits suffice to hold all NMLDBusObjChangedType flag combinations */
    NMLDBusObjChangedType obj_changed_type : 3;
};
static inline gboolean
NML_IS_DBUS_OBJECT(NMLDBusObject *dbobj)
{
    nm_assert(!dbobj || (NM_IS_REF_STRING(dbobj->dbus_path) && dbobj->ref_count > 0));
    nm_assert(!dbobj || !dbobj->nmobj || NM_IS_OBJECT(dbobj->nmobj) || NM_IS_CLIENT(dbobj->nmobj));
    return !!dbobj;
}
NMLDBusObject *nml_dbus_object_ref(NMLDBusObject *dbobj);
|
libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectMangaerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.
- There was a lingering performance issue with large number of D-Bus
objects. The patch tries hard that the implementation scales well. Of
course, when we cache N objects that have N-to-M references to other,
we still are fundamentally O(N*M) for runtime and memory consumption (with
M being the number of references between objects). But each part should behave
efficiently and well.
- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to a NMClient is through the caller's
GMainContext. This follows glib's style and effectively allows to use NMClient
in a multi threaded scenario. This implies to stick to a main context
upon construction and ensure that callbacks are only invoked when
iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means, libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context.
- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong, signals must be emitted in the right order
and when they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that for example during the initial
GetManagedObjects() reply, NMClient receive a large amount of state at once.
But we first apply all the changes to our GObject instances before
emitting any signals. The result is that signals are always emitted in a moment
when the cache is consistent. The unavoidable downside is that when you receive
a property changed signal, possibly many other properties changed
already and more signals are about to be emitted.
- NMDeviceWifi no longer modifies the content of the cache from client side
during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what NetworkManager
service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should be all for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Also before we constructed NMPropertiesInfo and
a had large amount of code to propagate properties from NMDBus* to NMObject.
That got completely reworked, but did not fundamentally change. You still need
about the same effort to create the NMLDBusMetaIface. Not using
generated bindings did not make anything worse (which tells about the
usefulness of generated code, at least in the way it was used).
- NMLDBusMetaIface and other meta data is static and immutable. This
avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile time checks to ensure the property types matches. It's pretty hard
to misuse them because it won't compile.
- The meta data now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type, it didn't know the expected D-Bus type.
- Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Thus these are not common assertions that indicate a bug
in libnm and are thus not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings and to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. That should be fixed first.
- Note that this changes the order in which we emit "notify:devices" and
"device-added" signals. I think it makes the most sense to emit first
"device-removed", then "notify:devices", and finally "device-added"
signals.
This changes behavior for commit 52ae28f6e5bf ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
- In NetworkManager, profiles can be invisible to the user by setting
"connection.permissions". Such profiles would be hidden by NMClient's
nm_client_get_connections() and their "connection-added"/"connection-removed"
signals.
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still exposes such
hidden NMRemoteConnection instances. This behavior was preserved.
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were build with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be a issue
on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller but
in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so it doesn't matter).
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With checkout of [2], run
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c642e && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return -1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and setup a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager currently can
handle. Of course, that is a different issue. Here we just check how
long plain `nmcli` takes on the system.
```
do_cleanup() {
for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
nmcli connection delete uuid "$UUID"
done
for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
nmcli device delete "$DEVICE"
done
}
do_setup() {
do_cleanup
for i in {1..30}; do
nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
for j in $(seq $i 30); do
nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
done
done
systemctl restart NetworkManager.service
sleep 5
}
do_test() {
perf stat -r 50 -B nmcli 1>/dev/null
}
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) Same setup as in N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) Same setup as in N4), but run `nmcli monitor` and check `ps aux` for
the RSS size.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
void nml_dbus_object_unref(NMLDBusObject *dbobj);
NM_AUTO_DEFINE_FCN0(NMLDBusObject *, _nm_auto_unref_nml_dbusobj, nml_dbus_object_unref);
#define nm_auto_unref_nml_dbusobj nm_auto(_nm_auto_unref_nml_dbusobj)
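A short usage sketch for this cleanup macro (hypothetical, and assuming a
nml_dbus_object_ref() counterpart exists): the reference is dropped
automatically when the variable goes out of scope, like with the other
nm_auto helpers.
```
/* illustration only */
static void
_example_auto_unref(NMLDBusObject *dbobj)
{
    nm_auto_unref_nml_dbusobj NMLDBusObject *dbobj2 = nml_dbus_object_ref(dbobj);

    /* ... use dbobj2; no explicit nml_dbus_object_unref() call is needed,
     * the cleanup attribute releases the reference on scope exit. */
}
```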
gpointer nml_dbus_object_get_property_location(NMLDBusObject *            dbobj,
                                               const NMLDBusMetaIface *   meta_iface,
                                               const NMLDBusMetaProperty *meta_property);
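The declaration alone does not show how this accessor is meant to be used.
The following is a hypothetical call site; the semantics are an assumption
based on the name and signature, namely that it returns a pointer to where
the property value is stored inside the native object:
```
/* hypothetical: write a freshly demarshalled guint32 into the object's
 * property storage. Assumes a "u" (guint32) typed property. */
static void
_example_set_u32(NMLDBusObject *            dbobj,
                 const NMLDBusMetaIface *   meta_iface,
                 const NMLDBusMetaProperty *meta_property,
                 guint32                    value)
{
    guint32 *p_value = nml_dbus_object_get_property_location(dbobj, meta_iface, meta_property);

    *p_value = value;
}
```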
/*****************************************************************************/

/* NMClient is not an NMObject, but in some aspects we want to track it like
 * an NMObject. For that, both NMClient and NMObject "implement" NMObjectBase,
 * despite not actually implementing such a GObject type. */
typedef struct {
    GObject parent;

    CList queue_notify_lst;

    bool is_disposing : 1;
libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectMangaerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.
- There was a lingering performance issue with large number of D-Bus
objects. The patch tries hard that the implementation scales well. Of
course, when we cache N objects that have N-to-M references to other,
we still are fundamentally O(N*M) for runtime and memory consumption (with
M being the number of references between objects). But each part should behave
efficiently and well.
- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to a NMClient is through the caller's
GMainContext. This follows glib's style and effectively allows to use NMClient
in a multi threaded scenario. This implies to stick to a main context
upon construction and ensure that callbacks are only invoked when
iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means, libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context.
- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong, signals must be emitted in the right order
and when they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that for example during the initial
GetManagedObjects() reply, NMClient receive a large amount of state at once.
But we first apply all the changes to our GObject instances before
emitting any signals. The result is that signals are always emitted in a moment
when the cache is consistent. The unavoidable downside is that when you receive
a property changed signal, possibly many other properties changed
already and more signals are about to be emitted.
- NMDeviceWifi no longer modifies the content of the cache from client side
during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what NetworkManager
service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should be all for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Also before we constructed NMPropertiesInfo and
a had large amount of code to propagate properties from NMDBus* to NMObject.
That got completely reworked, but did not fundamentally change. You still need
about the same effort to create the NMLDBusMetaIface. Not using
generated bindings did not make anything worse (which tells about the
usefulness of generated code, at least in the way it was used).
- NMLDBusMetaIface and other meta data is static and immutable. This
avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile time checks to ensure the property types matches. It's pretty hard
to misuse them because it won't compile.
- The meta data now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type, it didn't know the expected D-Bus type.
- Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Thus these are not common assertions that indicate a bug
in libnm and are thus not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings and to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. That should be fixed first.
- Note that this changes the order in which we emit "notify:devices" and
"device-added" signals. I think it makes the most sense to emit first
"device-removed", then "notify:devices", and finally "device-added"
signals (see the observer sketch after this list).
This changes the behavior introduced by commit 52ae28f6e5bf ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
- In NetworkManager, profiles can be made invisible to the user by setting
"connection.permissions". Such profiles are hidden from NMClient's
nm_client_get_connections() and from its "connection-added"/"connection-removed"
signals (see the last sketch after this list).
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still expose such
hidden NMRemoteConnection instances. This behavior was preserved.
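To illustrate the consistency guarantee above, here is a minimal sketch
(hypothetical handler name; it assumes a device whose "state" property is
being watched): by the time any notify handler runs, all changes from the
same D-Bus message were already applied, so reading related properties
observes one consistent snapshot.
```c
#include <NetworkManager.h>

/* Hypothetical callback: reads a second property from within a
 * property-changed handler. Both values come from the same,
 * already-committed cache state. */
static void
on_notify_state(GObject *obj, GParamSpec *pspec, gpointer user_data)
{
    NMDevice *device = NM_DEVICE(obj);

    g_print("device %s is now in state %d\n",
            nm_device_get_iface(device),
            (int) nm_device_get_state(device));
}

/* Usage: g_signal_connect(device, "notify::state",
 *                         G_CALLBACK(on_notify_state), NULL); */
```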
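The actual check in NML_DBUS_META_PROPERTY_INIT_U() is not reproduced here;
the following is only a rough, hypothetical sketch of the general technique
(rejecting a mismatched field size at compile time via a negative array
size), not the real libnm macro:
```c
#include <glib.h>

/* Hypothetical macro: evaluates to 0, but fails to compile (negative
 * array size) when the struct field is not 32 bits wide, as expected
 * for a D-Bus "u" property. */
#define EXAMPLE_CHECK_FIELD_IS_U(structtype, field) \
    (0 * sizeof(char[sizeof(((structtype *) NULL)->field) == sizeof(guint32) ? 1 : -1]))

typedef struct {
    guint32 flags;
} ExampleProps;

/* Compiles; changing "flags" to e.g. guint8 would break the build. */
static const gsize example_ok = EXAMPLE_CHECK_FIELD_IS_U(ExampleProps, flags);
```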
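As a sketch of the CI setup described above, a test program could arm the
checks itself (hypothetical; normally one would simply export the variables
in the CI environment before the process starts, which is the safer way to
make sure the libraries see them):
```c
#include <NetworkManager.h>

int
main(int argc, char *argv[])
{
    GError   *error = NULL;
    NMClient *client;

    /* Same effect as running with:
     *   LIBNM_CLIENT_DEBUG=warning,error G_DEBUG=fatal-warnings ./prog
     * This must happen before glib/libnm read the variables. */
    g_setenv("LIBNM_CLIENT_DEBUG", "warning,error", TRUE);
    g_setenv("G_DEBUG", "fatal-warnings", TRUE);

    client = nm_client_new(NULL, &error);
    if (!client) {
        g_printerr("failed to create NMClient: %s\n", error->message);
        g_error_free(error);
        return 1;
    }

    g_object_unref(client);
    return 0;
}
```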
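The new emission order can be observed with a small sketch like the
following (hypothetical callback names; "device-added"/"device-removed" are
NMClient signals and "notify::devices" is the corresponding property
notification). For a replaced device one would now see "device-removed",
then "notify::devices", then "device-added":
```c
#include <NetworkManager.h>

static void
on_device_added(NMClient *client, NMDevice *device, gpointer user_data)
{
    g_print("device-added: %s\n", nm_device_get_iface(device));
}

static void
on_device_removed(NMClient *client, NMDevice *device, gpointer user_data)
{
    g_print("device-removed: %s\n", nm_device_get_iface(device));
}

static void
on_notify_devices(GObject *client, GParamSpec *pspec, gpointer user_data)
{
    g_print("notify::devices\n");
}

static void
watch_devices(NMClient *client)
{
    g_signal_connect(client, "device-added", G_CALLBACK(on_device_added), NULL);
    g_signal_connect(client, "device-removed", G_CALLBACK(on_device_removed), NULL);
    g_signal_connect(client, "notify::devices", G_CALLBACK(on_notify_devices), NULL);
}
```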
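Finally, a sketch of listing visible profiles (assuming an initialized
NMClient *client); profiles hidden via "connection.permissions" simply do
not appear here:
```c
#include <NetworkManager.h>

static void
print_visible_connections(NMClient *client)
{
    const GPtrArray *connections = nm_client_get_connections(client);
    guint            i;

    /* Profiles that are invisible to the user are already filtered out. */
    for (i = 0; i < connections->len; i++) {
        NMConnection *connection = NM_CONNECTION(connections->pdata[i]);

        g_print("%s (%s)\n",
                nm_connection_get_id(connection),
                nm_connection_get_uuid(connection));
    }
}
```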
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were built with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that the RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be an issue
on Fedora 31 generating wrong ELF notes. Usually libnm is smaller, but
in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so this doesn't matter.
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With a checkout of [2], run
```
prepare_checkout() {
    rm -rf /tmp/nm-test && \
    git checkout -B test 4fad8c7c642e && \
    git clean -fdx && \
    ./autogen.sh --prefix=/tmp/nm-test && \
    make -j 5 install && \
    make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
    NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
    for i in {1..10}; do
        NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return 1
    done
    echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and set up a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager currently can
handle. Of course, that is a different issue. Here we just check how
long plain `nmcli` takes on the system.
```
do_cleanup() {
    for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
        nmcli connection delete uuid "$UUID"
    done
    for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
        nmcli device delete "$DEVICE"
    done
}
do_setup() {
    do_cleanup
    for i in {1..30}; do
        nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
        for j in $(seq $i 30); do
            nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
        done
    done
    systemctl restart NetworkManager.service
    sleep 5
}
do_test() {
    perf stat -r 50 -B nmcli 1>/dev/null
}
do_setup
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) Same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) Same setup as N4), but run `nmcli monitor` and check `ps aux` for
the RSS size.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
} NMObjectBase;

typedef struct {
    GObjectClass parent;
} NMObjectBaseClass;

struct _NMObjectPrivate;

struct _NMObject {
    union {
        GObject      parent;
        NMObjectBase obj_base;
    };

    struct _NMObjectPrivate *_priv;
};

typedef struct _NMObjectClassFieldInfo {
    const struct _NMObjectClassFieldInfo *parent;
    NMObjectClass                        *klass;
    guint16                               offset;
    guint16                               num;
} _NMObjectClassFieldInfo;

struct _NMObjectClass {
    union {
        GObjectClass      parent;
        NMObjectBaseClass obj_base;
    };

    void (*register_client)(NMObject *self, NMClient *client, NMLDBusObject *dbobj);

    void (*unregister_client)(NMObject *self, NMClient *client, NMLDBusObject *dbobj);
|
libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectMangaerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.
- There was a lingering performance issue with large number of D-Bus
objects. The patch tries hard that the implementation scales well. Of
course, when we cache N objects that have N-to-M references to other,
we still are fundamentally O(N*M) for runtime and memory consumption (with
M being the number of references between objects). But each part should behave
efficiently and well.
- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to a NMClient is through the caller's
GMainContext. This follows glib's style and effectively allows to use NMClient
in a multi threaded scenario. This implies to stick to a main context
upon construction and ensure that callbacks are only invoked when
iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means, libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context.
- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong, signals must be emitted in the right order
and when they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps
(see the second sketch after this list). First NMClient unpacks the
message (e.g. _dbus_handle_properties_changed()) and keeps track of the
changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet.
Finally, we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that, for example, during the
initial GetManagedObjects() reply, NMClient receives a large amount of
state at once, but we still first apply all the changes to our GObject
instances before emitting any signals. The result is that signals are
always emitted at a moment when the cache is consistent. The unavoidable
downside is that when you receive a property changed signal, possibly
many other properties have already changed and more signals are about to
be emitted.
- NMDeviceWifi no longer modifies the content of the cache from the
client side during poke_wireless_devices_with_rf_status(). The content
of the cache
should be determined by D-Bus alone and follow what NetworkManager
service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should all be for the better,
but the goal is not to break any existing clients. It does change
internal (albeit externally visible) API, like dropping the
NM_OBJECT_DBUS_OBJECT_MANAGER property and NMObject no longer
implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Previously we constructed
NMPropertiesInfo and had a large amount of code to propagate properties
from NMDBus* to NMObject. That got completely reworked, but did not
fundamentally change: you still need about the same effort to create the
NMLDBusMetaIface. Not using generated bindings did not make anything
worse (which says something about the usefulness of generated code, at
least in the way it was used).
- NMLDBusMetaIface and other meta data are static and immutable. This
avoids copying them around. Also, macros like
NML_DBUS_META_PROPERTY_INIT_U() have compile-time checks to ensure that
the property types match (see the third sketch after this list). It's
pretty hard to misuse them, because misuse won't compile.
- The meta data now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type, it didn't know the expected D-Bus type.
- Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Thus these are not common assertions that indicate a bug
in libnm and are thus not armed unless explicitly requested. In our CI
we should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. That should be fixed first.
- Note that this changes the order in which we emit "notify::devices"
and "device-added" signals. I think it makes the most sense to emit
"device-removed" first, then "notify::devices", and finally
"device-added" (see the fourth sketch after this list).
This changes behavior relative to commit 52ae28f6e5bf ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
- In NetworkManager, profiles can be made invisible to the user by
setting "connection.permissions". Such profiles are hidden from
NMClient's nm_client_get_connections() and from the
"connection-added"/"connection-removed" signals (see the last sketch
after this list).
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still expose such
hidden NMRemoteConnection instances. This behavior was preserved.
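As a minimal sketch of the GMainContext rule above (this is not code
from the patch; my_idle_cb() and schedule_on_context() are made-up
names), an idle callback can be bound to an explicitly chosen context
instead of relying on g_idle_add()'s implicit default context:
```
#include <glib.h>

static gboolean
my_idle_cb (gpointer user_data)
{
    g_print ("invoked while iterating the chosen context\n");
    return G_SOURCE_REMOVE;
}

/* Unlike g_idle_add(), which always targets g_main_context_default(),
 * attaching the source manually lets the caller's context be used. */
static void
schedule_on_context (GMainContext *context)
{
    GSource *source;

    source = g_idle_source_new ();
    g_source_set_callback (source, my_idle_cb, NULL, NULL);
    g_source_attach (source, context);
    g_source_unref (source);
}
```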
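The unpack/update/commit split can be illustrated roughly with GObject's
freeze/thaw mechanism. This is only an approximation, not the patch's
actual code; apply_changes_then_notify() and the two property names are
hypothetical. The point is that every change is applied before any
notification goes out:
```
#include <glib-object.h>

/* `obj` is assumed to be an instance whose class really defines the two
 * example properties. */
static void
apply_changes_then_notify (GObject *obj)
{
    /* Queue "notify" emissions instead of emitting them immediately. */
    g_object_freeze_notify (obj);

    /* Steps 1+2: record and apply all changes; nothing is emitted yet. */
    g_object_set (obj,
                  "example-count", 42,
                  "example-name", "eth0",
                  NULL);

    /* Step 3: the queued "notify" signals fire now, with the object in
     * a fully consistent state. */
    g_object_thaw_notify (obj);
}
```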
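The compile-time check mentioned for NML_DBUS_META_PROPERTY_INIT_U() can
be illustrated with a deliberately simplified macro (hypothetical; the
real macros check more than just the size):
```
#include <glib.h>

/* Fail the build if `field` of `structtype` does not have the size of a
 * guint32, the C counterpart of the D-Bus "u" type. */
#define ASSERT_PROPERTY_TYPE_U(structtype, field) \
    G_STATIC_ASSERT (sizeof (((structtype *) NULL)->field) == sizeof (guint32))

typedef struct {
    guint32 version_id;
    char   *path;
} ExampleProps;

ASSERT_PROPERTY_TYPE_U (ExampleProps, version_id); /* compiles */
/* ASSERT_PROPERTY_TYPE_U (ExampleProps, path);       fails to compile on
 * common 64-bit platforms, where a pointer is not 4 bytes. */
```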
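For the signal ordering point, a small consumer sketch (the handler
names are made up; the signals and functions are real libnm API).
Handlers may observe the order described above, but should not depend on
it:
```
#include <NetworkManager.h>

static void
on_device_added (NMClient *client, NMDevice *device, gpointer user_data)
{
    g_print ("device added: %s\n", nm_device_get_iface (device));
}

static void
on_device_removed (NMClient *client, NMDevice *device, gpointer user_data)
{
    g_print ("device removed: %s\n", nm_device_get_iface (device));
}

static void
on_devices_changed (GObject *object, GParamSpec *pspec, gpointer user_data)
{
    g_print ("the \"devices\" property changed\n");
}

int
main (void)
{
    GError *error = NULL;
    NMClient *client = nm_client_new (NULL, &error);
    GMainLoop *loop;

    if (!client) {
        g_printerr ("could not create NMClient: %s\n", error->message);
        g_error_free (error);
        return 1;
    }

    g_signal_connect (client, "device-added",
                      G_CALLBACK (on_device_added), NULL);
    g_signal_connect (client, "device-removed",
                      G_CALLBACK (on_device_removed), NULL);
    g_signal_connect (client, "notify::devices",
                      G_CALLBACK (on_devices_changed), NULL);

    loop = g_main_loop_new (NULL, FALSE);
    g_main_loop_run (loop);
    return 0;
}
```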
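And for the hidden-profiles point, a usage sketch
(print_visible_connections() is a made-up helper; the libnm calls are
real): the array returned by nm_client_get_connections() simply never
contains the permission-hidden profiles.
```
#include <NetworkManager.h>

/* List the profiles that NMClient exposes. Profiles hidden via
 * "connection.permissions" do not show up here, even though they may
 * still be reachable via nm_active_connection_get_connection(). */
static void
print_visible_connections (NMClient *client)
{
    const GPtrArray *connections = nm_client_get_connections (client);
    guint i;

    for (i = 0; i < connections->len; i++) {
        NMConnection *connection = NM_CONNECTION (connections->pdata[i]);

        g_print ("profile: %s (%s)\n",
                 nm_connection_get_id (connection),
                 nm_connection_get_uuid (connection));
    }
}
```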
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were built with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that the RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be an
issue on Fedora 31 generating wrong ELF notes. Usually libnm is smaller,
but in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so it doesn't matter.
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With a checkout of [2], run
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c642e && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return 1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and set up a large
number of profiles (551 profiles and 515 devices, mostly unrealized).
This setup is already at the edge of what NetworkManager can currently
handle. Of course, that is a different issue. Here we just check how
long a plain `nmcli` call takes on the system.
```
do_cleanup() {
for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
nmcli connection delete uuid "$UUID"
done
for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
nmcli device delete "$DEVICE"
done
}
do_setup() {
do_cleanup
for i in {1..30}; do
nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
for j in $(seq $i 30); do
nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
done
done
systemctl restart NetworkManager.service
sleep 5
}
do_test() {
perf stat -r 50 -B nmcli 1>/dev/null
}
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) Same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) Same setup as N4), but run `nmcli monitor` and look at `ps aux` for
the RSS size.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
gboolean (*is_ready)(NMObject *self);
void (*obj_changed_notify)(NMObject *self);
const _NMObjectClassFieldInfo *property_o_info;
const _NMObjectClassFieldInfo *property_ao_info;
guint16 priv_ptr_offset;
|
    bool priv_ptr_indirect : 1;
};

#define _NM_OBJECT_CLASS_INIT_PRIV_PTR_DIRECT(nm_object_class, type_name) \
    G_STMT_START \
    { \
        (nm_object_class)->priv_ptr_offset = \
            NM_STRUCT_OFFSET_ENSURE_TYPE(type_name##Private, type_name, _priv); \
        (nm_object_class)->priv_ptr_indirect = FALSE; \
    } \
    G_STMT_END

#define _NM_OBJECT_CLASS_INIT_PRIV_PTR_INDIRECT(nm_object_class, type_name) \
    G_STMT_START \
    { \
        (nm_object_class)->priv_ptr_offset = \
            NM_STRUCT_OFFSET_ENSURE_TYPE(type_name##Private *, type_name, _priv); \
        (nm_object_class)->priv_ptr_indirect = TRUE; \
    } \
    G_STMT_END
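These two initializers differ only in whether a type's instance struct embeds its private data directly or merely holds a pointer to it. A minimal usage sketch of the direct variant, with entirely made-up names (no NMFoo exists in libnm):

/* Hypothetical sketch; "NMFoo" and its structs are invented for
 * illustration. The instance struct embeds NMFooPrivate directly in
 * a "_priv" member, so the direct variant applies. */
typedef struct {
    int some_state;
} NMFooPrivate;

typedef struct {
    NMObject parent;
    NMFooPrivate _priv;
} NMFoo;

typedef struct {
    NMObjectClass parent;
} NMFooClass;

static void
nm_foo_class_init (NMFooClass *klass)
{
    NMObjectClass *nm_object_class = NM_OBJECT_CLASS (klass);

    /* Stores offsetof (NMFoo, _priv) in priv_ptr_offset and sets
     * priv_ptr_indirect to FALSE. */
    _NM_OBJECT_CLASS_INIT_PRIV_PTR_DIRECT (nm_object_class, NMFoo);
}

Had NMFoo declared "NMFooPrivate *_priv;" instead, the _INDIRECT variant would be the right one and priv_ptr_indirect would end up TRUE.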
#define _NM_OBJECT_CLASS_INIT_FIELD_INFO(_nm_object_class, _field_name, _offset, _num) \
    G_STMT_START \
    { \
        (_nm_object_class)->_field_name = ({ \
            static _NMObjectClassFieldInfo _f; \
            \
            _f = (_NMObjectClassFieldInfo){ \
                .parent = (_nm_object_class)->_field_name, \
                .klass  = (_nm_object_class), \
                .offset = _offset, \
                .num    = _num, \
            }; \
            &_f; \
        }); \
    } \
    G_STMT_END
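Each invocation of this macro prepends a record to a per-class chain: ".parent" captures the field's previous value, so a subclass extends rather than replaces what its ancestors registered. A sketch of how such a chain could be walked, assuming only the four members visible in the initializer above and that property_o_info (the field used by the helper macros below) is a class member of this pointer type:

/* Sketch: walk the chained field-info records from the most-derived
 * class up through its ancestors. Only .parent, .klass, .offset and
 * .num are assumed to exist. */
static void
visit_property_o_fields (NMObjectClass *nm_object_class)
{
    const _NMObjectClassFieldInfo *fi;

    for (fi = nm_object_class->property_o_info; fi; fi = fi->parent) {
        /* fi->offset locates the first field inside the private struct
         * of the class that registered it (fi->klass); fi->num is the
         * count of consecutive fields starting there. */
    }
}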
#define _NM_OBJECT_CLASS_INIT_PROPERTY_O_FIELDS_1(nm_object_class, type_name, field_name) \
    _NM_OBJECT_CLASS_INIT_FIELD_INFO( \
        nm_object_class, \
        property_o_info, \
        NM_STRUCT_OFFSET_ENSURE_TYPE(NMLDBusPropertyO, type_name, field_name), \
        1)
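A companion sketch for this single-field helper, again with invented names (NMBar, active_connection). Here type_name is the private struct that contains the NMLDBusPropertyO field, which presumably mirrors a D-Bus object-path ("o") property:

/* Hypothetical private struct with exactly one object-path property;
 * "NMBar" and "active_connection" are made-up names. */
typedef struct {
    NMLDBusPropertyO active_connection;
} NMBarPrivate;

typedef struct {
    NMObjectClass parent;
} NMBarClass;

static void
nm_bar_class_init (NMBarClass *klass)
{
    NMObjectClass *nm_object_class = NM_OBJECT_CLASS (klass);

    /* Chains a field-info record for the offset of "active_connection"
     * inside NMBarPrivate, with num = 1. */
    _NM_OBJECT_CLASS_INIT_PROPERTY_O_FIELDS_1 (nm_object_class, NMBarPrivate, active_connection);
}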
#define _NM_OBJECT_CLASS_INIT_PROPERTY_O_FIELDS_N(nm_object_class, type_name, field_name) \
    _NM_OBJECT_CLASS_INIT_FIELD_INFO( \
        nm_object_class, \
        property_o_info, \
        NM_STRUCT_OFFSET_ENSURE_TYPE(NMLDBusPropertyO *, type_name, field_name), \
        G_N_ELEMENTS(((type_name *) NULL)->field_name))
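The `_N` variant derives the array length from the struct type alone: `G_N_ELEMENTS(((type_name *) NULL)->field_name)` is evaluated entirely under `sizeof`, so the cast NULL pointer is never dereferenced at runtime and the count is a compile-time constant. A minimal sketch of the trick, using an invented `DemoPrivate` struct (`G_N_ELEMENTS` and `G_STATIC_ASSERT` are the real GLib macros):

```c
#include <glib.h>

typedef struct {
    int groups[3];
} DemoPrivate;

/* G_N_ELEMENTS expands to sizeof(arr) / sizeof(arr[0]); sizeof never
 * evaluates its operand, so the NULL "dereference" is a type-level
 * construct only, and the expression folds to 3 at compile time
 * without a DemoPrivate instance ever existing. */
G_STATIC_ASSERT(G_N_ELEMENTS(((DemoPrivate *) NULL)->groups) == 3);
```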
#define _NM_OBJECT_CLASS_INIT_PROPERTY_AO_FIELDS_1(nm_object_class, type_name, field_name) \
    _NM_OBJECT_CLASS_INIT_FIELD_INFO( \
        nm_object_class, \
        property_ao_info, \
        NM_STRUCT_OFFSET_ENSURE_TYPE(NMLDBusPropertyAO, type_name, field_name), \
        1)
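Note how the `_1` variant type-checks the field against plain `NMLDBusPropertyAO` and hard-codes the count as 1, while the `_N` variants check against a pointer type (an array field decays to a pointer in that context) and derive the count. For the type check itself, `NM_STRUCT_OFFSET_ENSURE_TYPE()` must yield the field offset while refusing to compile for a mismatched field type. A hypothetical re-creation of that idea with C11 `_Generic` (not the actual libnm definition):

```c
#include <stddef.h>

/* Hypothetical stand-in for NM_STRUCT_OFFSET_ENSURE_TYPE: evaluates to
 * offsetof(container, field), but a _Generic selection without a default
 * association makes a wrong field type a hard compile error. Multiplying
 * by 0 keeps the runtime value a plain offset. */
#define STRUCT_OFFSET_ENSURE_TYPE(type, container, field) \
    (offsetof(container, field) \
     + 0 * sizeof(_Generic(((container *) NULL)->field, type: 1)))

typedef struct {
    int         a;
    const char *name;
} Demo;

/* Compiles: the field really is a "const char *". */
const size_t off_name = STRUCT_OFFSET_ENSURE_TYPE(const char *, Demo, name);

/* STRUCT_OFFSET_ENSURE_TYPE(int, Demo, name) would not compile, which is
 * exactly the misuse this macro family is designed to catch. */
```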
#define _NM_OBJECT_CLASS_INIT_PROPERTY_AO_FIELDS_N(nm_object_class, type_name, field_name) \
    _NM_OBJECT_CLASS_INIT_FIELD_INFO( \
        nm_object_class, \
        property_ao_info, \
        NM_STRUCT_OFFSET_ENSURE_TYPE(NMLDBusPropertyAO *, type_name, field_name), \
        G_N_ELEMENTS(((type_name *) NULL)->field_name))
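Putting the family together, here is a hypothetical class-init call site showing how the `O`/`AO` and `_1`/`_N` variants pair single fields and fixed-size arrays with the matching property-info kind. The private struct, field names, and init function are invented for illustration; only the macro names come from this header:

```c
/* Invented private struct: an array of object-path properties plus one
 * array-of-object-paths property. */
typedef struct {
    NMLDBusPropertyO  ports[2];  /* registered via ..._O_FIELDS_N  */
    NMLDBusPropertyAO devices;   /* registered via ..._AO_FIELDS_1 */
} NMExamplePrivate;

static void
nm_example_class_init(NMObjectClass *nm_object_class)
{
    /* Each call records the field's compile-time-checked offset and its
     * element count in the class's field-info table. */
    _NM_OBJECT_CLASS_INIT_PROPERTY_O_FIELDS_N(nm_object_class, NMExamplePrivate, ports);
    _NM_OBJECT_CLASS_INIT_PROPERTY_AO_FIELDS_1(nm_object_class, NMExamplePrivate, devices);
}
```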
/*****************************************************************************/

struct _NMDevicePrivate;

struct _NMDevice {
    NMObject parent;

    struct _NMDevicePrivate *_priv;
};

struct _NMDeviceClass {
    struct _NMObjectClass parent;

    gboolean (*connection_compatible)(NMDevice *device, NMConnection *connection, GError **error);

    const char *(*get_type_description)(NMDevice *device);
};
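These two vfuncs are what a concrete device type overrides. A sketch with a hypothetical NMDeviceFoo subtype; `nm_connection_is_type()`, `NM_DEVICE_ERROR`, and `NM_DEVICE_ERROR_INCOMPATIBLE_CONNECTION` are existing libnm API, while everything `foo`-related is invented:

```c
static gboolean
foo_connection_compatible(NMDevice *device, NMConnection *connection, GError **error)
{
    /* A real implementation would typically chain up to the parent
     * class's check first, then apply device-specific constraints. */
    if (!nm_connection_is_type(connection, "foo")) {
        g_set_error_literal(error,
                            NM_DEVICE_ERROR,
                            NM_DEVICE_ERROR_INCOMPATIBLE_CONNECTION,
                            "the profile is not a foo connection");
        return FALSE;
    }
    return TRUE;
}

static const char *
foo_get_type_description(NMDevice *device)
{
    return "foo";
}

static void
nm_device_foo_class_init(NMDeviceFooClass *klass)
{
    NMDeviceClass *device_class = (NMDeviceClass *) klass;

    device_class->connection_compatible = foo_connection_compatible;
    device_class->get_type_description  = foo_get_type_description;
}
```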
libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectMangaerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.
- There was a lingering performance issue with large number of D-Bus
objects. The patch tries hard that the implementation scales well. Of
course, when we cache N objects that have N-to-M references to other,
we still are fundamentally O(N*M) for runtime and memory consumption (with
M being the number of references between objects). But each part should behave
efficiently and well.
- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to a NMClient is through the caller's
GMainContext. This follows glib's style and effectively allows to use NMClient
in a multi threaded scenario. This implies to stick to a main context
upon construction and ensure that callbacks are only invoked when
iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means, libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context.
- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong, signals must be emitted in the right order
and when they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that for example during the initial
GetManagedObjects() reply, NMClient receive a large amount of state at once.
But we first apply all the changes to our GObject instances before
emitting any signals. The result is that signals are always emitted in a moment
when the cache is consistent. The unavoidable downside is that when you receive
a property changed signal, possibly many other properties changed
already and more signals are about to be emitted.
- NMDeviceWifi no longer modifies the content of the cache from client side
during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what NetworkManager
service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should be all for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Also before we constructed NMPropertiesInfo and
a had large amount of code to propagate properties from NMDBus* to NMObject.
That got completely reworked, but did not fundamentally change. You still need
about the same effort to create the NMLDBusMetaIface. Not using
generated bindings did not make anything worse (which tells about the
usefulness of generated code, at least in the way it was used).
- NMLDBusMetaIface and other meta data is static and immutable. This
avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile time checks to ensure the property types matches. It's pretty hard
to misuse them because it won't compile.
- The meta data now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type, it didn't know the expected D-Bus type.
- Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Thus these are not common assertions that indicate a bug
in libnm and are thus not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings and to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. That should be fixed first.
- Note that this changes the order in which we emit "notify:devices" and
"device-added" signals. I think it makes the most sense to emit first
"device-removed", then "notify:devices", and finally "device-added"
signals.
This changes behavior for commit 52ae28f6e5bf ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
- In NetworkManager, profiles can be made invisible to the user by setting
"connection.permissions". Such profiles are hidden by NMClient: they are
omitted from nm_client_get_connections() and from the
"connection-added"/"connection-removed" signals.
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still expose such
hidden NMRemoteConnection instances. This behavior was preserved; the
sketch below shows where a hidden profile can still surface.
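A hypothetical sketch of where a hidden profile still surfaces (not code
from this patch; it assumes a hidden profile is currently active):
```
#include <NetworkManager.h>

static void
dump_connections(NMClient *client)
{
    const GPtrArray *conns = nm_client_get_connections(client);
    const GPtrArray *acts  = nm_client_get_active_connections(client);
    guint            i;

    /* hidden profiles are omitted here... */
    for (i = 0; i < conns->len; i++)
        g_print("visible: %s\n",
                nm_connection_get_id(NM_CONNECTION(conns->pdata[i])));

    /* ...but an active connection still exposes its NMRemoteConnection,
     * which may be a profile absent from the list above. */
    for (i = 0; i < acts->len; i++) {
        NMActiveConnection *ac = acts->pdata[i];
        NMRemoteConnection *rc = nm_active_connection_get_connection(ac);

        if (rc)
            g_print("active: %s\n", nm_connection_get_id(NM_CONNECTION(rc)));
    }
}
```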
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were built with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that the RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be an issue
on Fedora 31 generating wrong ELF notes. Usually libnm is smaller, but
in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so the absolute size doesn't matter.
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With a checkout of [2], run
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c642e && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return 1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and set up a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager currently can
handle. Of course, that is a different issue. Here we just check how
long plain `nmcli` takes on the system.
```
do_cleanup() {
for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
nmcli connection delete uuid "$UUID"
done
for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
nmcli device delete "$DEVICE"
done
}
do_setup() {
do_cleanup
for i in {1..30}; do
nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
for j in $(seq $i 30); do
nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
done
done
systemctl restart NetworkManager.service
sleep 5
}
do_test() {
perf stat -r 50 -B nmcli 1>/dev/null
}
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) Same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) Same setup as N4), but run `nmcli monitor` and look at `ps aux` for
the RSS size.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
    GType (*get_setting_type)(NMDevice *device);
};

/*****************************************************************************/

struct _NMActiveConnectionPrivate;

struct _NMActiveConnection {
    NMObject parent;
    struct _NMActiveConnectionPrivate *_priv;
};

struct _NMActiveConnectionClass {
    struct _NMObjectClass parent;
};

/*****************************************************************************/

struct _NMDhcpConfigPrivate;

struct _NMDhcpConfig {
    NMObject parent;
    struct _NMDhcpConfigPrivate *_priv;
};

struct _NMDhcpConfigClass {
    struct _NMObjectClass parent;
};

/*****************************************************************************/

struct _NMIPConfigPrivate;

struct _NMIPConfig {
    NMObject parent;
    struct _NMIPConfigPrivate *_priv;
};

struct _NMIPConfigClass {
    struct _NMObjectClass parent;
};

/*****************************************************************************/

NMLDBusObject *_nm_object_get_dbobj(gpointer self);
const char *_nm_object_get_path(gpointer self);
NMClient *_nm_object_get_client(gpointer self);
GDBusConnection *_nm_client_get_dbus_connection(NMClient *client);
|
libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectMangaerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.
- There was a lingering performance issue with large number of D-Bus
objects. The patch tries hard that the implementation scales well. Of
course, when we cache N objects that have N-to-M references to other,
we still are fundamentally O(N*M) for runtime and memory consumption (with
M being the number of references between objects). But each part should behave
efficiently and well.
- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to a NMClient is through the caller's
GMainContext. This follows glib's style and effectively allows to use NMClient
in a multi threaded scenario. This implies to stick to a main context
upon construction and ensure that callbacks are only invoked when
iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means, libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context.
- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong, signals must be emitted in the right order
and when they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle essentially the entire state
of the D-Bus interface. That cannot be separated well into distinct parts,
and even if you try, you just end up with closely related code spread
across different source files, which does not make it easier to understand;
on the contrary. That means NMClient is inherently complex, as it contains
most of the logic. I think that is not avoidable, but it's not as bad as it
sounds.
- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed())
and keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet.
Finally, we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that, for example, during the initial
GetManagedObjects() reply, NMClient receives a large amount of state at
once, but we first apply all the changes to our GObject instances before
emitting any signals. The result is that signals are always emitted at a
moment when the cache is consistent. The unavoidable downside is that when
you receive a property-changed signal, possibly many other properties have
already changed and more signals are about to be emitted (a generic GObject
analogue of this batching is sketched after this list).
- NMDeviceWifi no longer modifies the content of the cache from the client
side during poke_wireless_devices_with_rf_status(). The content of the
cache should be determined by D-Bus alone and follow what the
NetworkManager service exposes. Local modifications should be avoided.
- This aims to introduce no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should all be for the better, and
the goal is not to break any existing clients. It does change internal
(albeit externally visible) API, like dropping the
NM_OBJECT_DBUS_OBJECT_MANAGER property and NMObject no longer implementing
GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Previously we constructed NMPropertiesInfo
and had a large amount of code to propagate properties from NMDBus* to
NMObject. That got completely reworked but did not fundamentally change:
you still need about the same effort to create the NMLDBusMetaIface. Not
using generated bindings did not make anything worse (which says something
about the usefulness of the generated code, at least in the way it was
used).
- NMLDBusMetaIface and other metadata are static and immutable. This
avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile-time checks to ensure the property types match; they are hard
to misuse, because misuse won't compile (see the compile-time-check sketch
after this list).
- The metadata now explicitly encodes the expected D-Bus types and makes
sure never to accept mismatched data. That only matters when the server
(accidentally or intentionally) exposes unexpected types on D-Bus, and I
don't think it was previously ensured in all cases. For example,
demarshal_generic() only cared about the GObject property type; it didn't
know the expected D-Bus type (see the type-check sketch after this list).
- Previously, GDBusObjectManager would sometimes emit warnings (g_log()).
Those probably indicated real bugs. In any case, it prevented us from
running CI with G_DEBUG=fatal-warnings, because there would be just too
many unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned into
g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions. Note
that such "assertion failures" might also happen because of a server bug
(or change). They are thus not ordinary assertions that indicate a bug in
libnm, and they are not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings to catch bugs. Note that NetworkManager currently
has bugs in this regard, so enabling this will result in assertion
failures; that should be fixed first.
- Note that this changes the order in which we emit "notify:devices" and
"device-added" signals. I think it makes the most sense to emit
"device-removed" first, then "notify:devices", and finally "device-added".
This changes behavior relative to commit 52ae28f6e5bf ('libnm: queue
added/removed signals and suppress uninitialized notifications'), but I
don't think users should actually rely on the order. Still, the new order
makes the most sense to me.
- In NetworkManager, profiles can be made invisible to a user by setting
"connection.permissions". Such profiles are hidden from NMClient's
nm_client_get_connections() and from its
"connection-added"/"connection-removed" signals. Note that
NMActiveConnection's nm_active_connection_get_connection() and NMDevice's
nm_device_get_available_connections() still expose such hidden
NMRemoteConnection instances. This behavior is preserved (see the
connection-listing sketch after this list).
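GMainContext sketch (referenced from the list above): a hedged example of the
dispatch pattern the GMainContext item describes. Instead of g_idle_add(),
which always targets the default context, create a source and attach it to
the context the client was bound to at construction. The helper name is
hypothetical.
```
#include <glib.h>

/* Sketch: dispatch @func on an explicitly chosen context, never on
 * g_main_context_default(). */
static void
example_invoke_in_context (GMainContext *context,
                           GSourceFunc   func,
                           gpointer      user_data)
{
    GSource *source;

    source = g_idle_source_new ();
    g_source_set_callback (source, func, user_data, NULL);
    g_source_attach (source, context);
    g_source_unref (source);
}
```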
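Synchronous-initialization sketch (referenced from the list above): an
assumption-laden illustration of how a synchronous entry point can be layered
over a fully asynchronous core by iterating a private context until the
asynchronous part completes. None of these names are libnm's.
```
#include <glib.h>

typedef struct {
    GMainContext *context; /* private context all async sources attach to */
    gboolean      done;    /* set from the async completion callback */
} ExampleInitData;

/* Sketch: block by spinning the private context, not the caller's. */
static void
example_sync_init (ExampleInitData *data)
{
    while (!data->done)
        g_main_context_iteration (data->context, TRUE);
}
```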
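Batching sketch (referenced from the list above): libnm's collect-then-commit
steps are its own mechanism, but plain GObject offers an analogous primitive,
freezing and thawing notifications, which shows the same idea of applying all
changes before any signal fires. The property names below are hypothetical.
```
#include <glib-object.h>

/* Sketch: queued "notify" signals are emitted only at thaw time, once the
 * object is in a consistent state. */
static void
example_apply_many_properties (GObject *obj)
{
    g_object_freeze_notify (obj);
    g_object_set (obj,
                  "first-property", 1,
                  "second-property", 2,
                  NULL);
    g_object_thaw_notify (obj);
}
```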
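Compile-time-check sketch (referenced from the list above): not the actual
NML_DBUS_META_PROPERTY_INIT_U() macro, but one way such a macro can tie a
D-Bus "u" property to a struct field and fail the build on a type mismatch,
using the classic negative-array-size trick inside a constant expression.
```
#include <glib.h>

typedef struct {
    const char *dbus_type;
    gsize       offset;
} ExampleMetaProperty;

/* Evaluates to 0 if @cond holds; otherwise char[-1] fails to compile. */
#define EXAMPLE_ASSERT0(cond) (0 * sizeof (char[(cond) ? 1 : -1]))

#define EXAMPLE_PROPERTY_INIT_U(structtype, field)                          \
    {                                                                       \
        .dbus_type = "u",                                                   \
        .offset    = EXAMPLE_ASSERT0 (sizeof (((structtype *) NULL)->field) \
                                      == sizeof (guint32))                  \
                     + G_STRUCT_OFFSET (structtype, field),                 \
    }

typedef struct {
    guint32 flags;
} ExampleObject;

/* Compiles only because "flags" really is a 32-bit unsigned field. */
static const ExampleMetaProperty example_properties[] = {
    EXAMPLE_PROPERTY_INIT_U (ExampleObject, flags),
};
```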
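Type-check sketch (referenced from the list above): a minimal example, with a
hypothetical helper name, of rejecting a property value whose D-Bus type does
not match the expected signature instead of coercing it into the GObject
property type.
```
#include <glib.h>

/* Sketch: only accept the value if it really is a D-Bus "u" (guint32). */
static gboolean
example_demarshal_u (GVariant *value, guint32 *out_val)
{
    if (!g_variant_is_of_type (value, G_VARIANT_TYPE_UINT32))
        return FALSE; /* unexpected type from the server: refuse it */
    *out_val = g_variant_get_uint32 (value);
    return TRUE;
}
```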
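Connection-listing sketch (referenced from the list above):
nm_client_get_connections() is public libnm API; this minimal loop shows
where hidden profiles would simply be absent from the returned array.
```
#include <NetworkManager.h>

/* Sketch: profiles hidden via "connection.permissions" never show up in
 * this array, even though other getters may still hand them out. */
static void
example_list_visible_connections (NMClient *client)
{
    const GPtrArray *connections = nm_client_get_connections (client);
    guint            i;

    for (i = 0; i < connections->len; i++) {
        NMConnection *connection = NM_CONNECTION (connections->pdata[i]);

        g_print ("%s\n", nm_connection_get_id (connection));
    }
}
```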
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were built with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that the RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be an
issue on Fedora 31 generating wrong ELF notes. Usually libnm is smaller,
but in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so it doesn't matter.
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With a checkout of [2], run
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c642e && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return 1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and set up a large
number of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager can currently handle;
of course, that is a separate issue. Here we just check how long a plain
`nmcli` call takes on such a system.
```
do_cleanup() {
for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
nmcli connection delete uuid "$UUID"
done
for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
nmcli device delete "$DEVICE"
done
}
do_setup() {
do_cleanup
for i in {1..30}; do
nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
for j in $(seq $i 30); do
nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
done
done
systemctl restart NetworkManager.service
sleep 5
}
do_test() {
perf stat -r 50 -B nmcli 1>/dev/null
}
do_setup
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) same setup as N4), but run `nmcli monitor` and look at `ps aux` for
the RSS size.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
2019-10-30 11:42:58 +01:00
|
|
|
|
2020-09-28 16:03:33 +02:00
|
|
|
const char *_nm_client_get_dbus_name_owner(NMClient *client);
|
2019-10-30 11:42:58 +01:00
|
|
|
|
2020-09-28 16:03:33 +02:00
|
|
|
GMainContext *_nm_client_get_context_main(NMClient *client);
|
|
|
|
|
GMainContext *_nm_client_get_context_dbus(NMClient *client);
|
2019-10-30 11:42:58 +01:00
|
|
|
|
2020-09-28 16:03:33 +02:00
|
|
|
void _nm_client_queue_notify_object(NMClient *client, gpointer nmobj, const GParamSpec *pspec);
|
2019-10-30 11:42:58 +01:00
|
|
|
|
2020-09-28 16:03:33 +02:00
|
|
|
void _nm_client_notify_object_changed(NMClient *self, NMLDBusObject *dbobj);
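/* Returns the udev context used internally by the client (for example, for
 * device property lookups). Presumably the client keeps ownership of the
 * returned struct udev. */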
struct udev *_nm_client_get_udev(NMClient *self);
/*****************************************************************************/
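/* Relative priorities of queued notify events. Presumably lower values are
 * dispatched first: BEFORE-priority events run before the GObject property
 * notifications (GPROP, 0), and AFTER-priority events run after them. */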
#define NM_CLIENT_NOTIFY_EVENT_PRIO_BEFORE (-100)
#define NM_CLIENT_NOTIFY_EVENT_PRIO_GPROP 0
#define NM_CLIENT_NOTIFY_EVENT_PRIO_AFTER 100
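/* A notification queued while D-Bus changes are being processed; queued events
 * are presumably dispatched in one batch, ordered by the priorities above. */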
typedef struct _NMClientNotifyEvent NMClientNotifyEvent;
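/* Callback invoked when a queued event is dispatched. @notify_event presumably
 * points to the NMClientNotifyEvent (or to a larger structure embedding it). */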
typedef void (*NMClientNotifyEventCb)(NMClient *self, gpointer notify_event);
|
libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectMangaerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.
- There was a lingering performance issue with large number of D-Bus
objects. The patch tries hard that the implementation scales well. Of
course, when we cache N objects that have N-to-M references to other,
we still are fundamentally O(N*M) for runtime and memory consumption (with
M being the number of references between objects). But each part should behave
efficiently and well.
- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to a NMClient is through the caller's
GMainContext. This follows glib's style and effectively allows to use NMClient
in a multi threaded scenario. This implies to stick to a main context
upon construction and ensure that callbacks are only invoked when
iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means, libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context.
- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong, signals must be emitted in the right order
and when they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that for example during the initial
GetManagedObjects() reply, NMClient receive a large amount of state at once.
But we first apply all the changes to our GObject instances before
emitting any signals. The result is that signals are always emitted in a moment
when the cache is consistent. The unavoidable downside is that when you receive
a property changed signal, possibly many other properties changed
already and more signals are about to be emitted.
- NMDeviceWifi no longer modifies the content of the cache from client side
during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what NetworkManager
service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should be all for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Also before we constructed NMPropertiesInfo and
a had large amount of code to propagate properties from NMDBus* to NMObject.
That got completely reworked, but did not fundamentally change. You still need
about the same effort to create the NMLDBusMetaIface. Not using
generated bindings did not make anything worse (which tells about the
usefulness of generated code, at least in the way it was used).
- NMLDBusMetaIface and other meta data is static and immutable. This
avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile time checks to ensure the property types matches. It's pretty hard
to misuse them because it won't compile.
- The meta data now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type, it didn't know the expected D-Bus type.
- Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Thus these are not common assertions that indicate a bug
in libnm and are thus not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings and to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. That should be fixed first.
- Note that this changes the order in which we emit "notify:devices" and
"device-added" signals. I think it makes the most sense to emit first
"device-removed", then "notify:devices", and finally "device-added"
signals.
This changes behavior for commit 52ae28f6e5bf ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
- In NetworkManager, profiles can be invisible to the user by setting
"connection.permissions". Such profiles would be hidden by NMClient's
nm_client_get_connections() and their "connection-added"/"connection-removed"
signals.
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still exposes such
hidden NMRemoteConnection instances. This behavior was preserved.
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were build with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be a issue
on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller but
in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so it doesn't matter).
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With checkout of [2], run
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c642e && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return -1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and setup a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager currently can
handle. Of course, that is a different issue. Here we just check how
long plain `nmcli` takes on the system.
```
do_cleanup() {
for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
nmcli connection delete uuid "$UUID"
done
for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
nmcli device delete "$DEVICE"
done
}
do_setup() {
do_cleanup
for i in {1..30}; do
nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
for j in $(seq $i 30); do
nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
done
done
systemctl restart NetworkManager.service
sleep 5
}
do_test() {
perf stat -r 50 -B nmcli 1>/dev/null
}
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) Same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) Same setup as N4), but run `nmcli monitor` and check the RSS size in
    the `ps aux` output.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
struct _NMClientNotifyEvent {
    CList                 lst;
    NMClientNotifyEventCb callback;
    int                   priority;
};

gpointer _nm_client_notify_event_queue(NMClient *            self,
                                       int                   priority,
                                       NMClientNotifyEventCb callback,
                                       gsize                 event_size);
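As a reading aid, here is a minimal sketch of the allocation pattern this declaration implies: the caller passes the size of a struct that embeds NMClientNotifyEvent as its first member and receives the newly allocated event back to fill in any extra fields. This is an illustration only, not the actual implementation; the queue container, the ordering by priority at dispatch time, and the c-list include path are assumptions.
```c
#include <glib.h>
#include "c-list/src/c-list.h" /* CList; include path as vendored in the NetworkManager tree (assumption) */

/* Hypothetical sketch, not the real _nm_client_notify_event_queue(). */
static gpointer
notify_event_queue_sketch(CList *               queue,
                          int                   priority,
                          NMClientNotifyEventCb callback,
                          gsize                 event_size)
{
    NMClientNotifyEvent *event;

    /* event_size covers the embedding struct, e.g.
     * sizeof(NMClientNotifyEventWithPtr); it must at least hold the base. */
    g_assert(event_size >= sizeof(NMClientNotifyEvent));

    event           = g_malloc0(event_size);
    event->priority = priority;
    event->callback = callback;

    /* Here we simply append; presumably the real queue also honors
     * "priority" when the collected events are dispatched (assumption). */
    c_list_link_tail(queue, &event->lst);
    return event;
}
```
With this shape, NMClientNotifyEventWithPtr below is just a derived event: the base struct goes first, so the same queue code handles both.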
typedef struct _NMClientNotifyEventWithPtr NMClientNotifyEventWithPtr;

typedef void (*NMClientNotifyEventWithPtrCb)(NMClient *                  self,
                                             NMClientNotifyEventWithPtr *notify_event);
struct _NMClientNotifyEventWithPtr {
    NMClientNotifyEvent parent;
    gpointer            user_data;
};
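For illustration, a hedged usage sketch of the "derived event" pattern these declarations suggest: queue an event sized for NMClientNotifyEventWithPtr, then fill in user_data on the returned pointer. Only the declarations above are from the header; the callback body, the priority value, and the function-pointer cast (the base NMClientNotifyEventCb typedef is declared earlier in the header and not shown here) are assumptions for the example.
```c
/* Hypothetical example, not part of nm-libnm-utils.h. */
static void
my_notify_with_ptr_cb(NMClient *self, NMClientNotifyEventWithPtr *notify_event)
{
    /* Runs later, when NMClient commits its collected changes and the
     * cache is in a consistent state. */
    g_print("notified with user_data %p\n", notify_event->user_data);
}

static void
my_queue_event(NMClient *self, gpointer user_data)
{
    NMClientNotifyEventWithPtr *event;

    event = _nm_client_notify_event_queue(self,
                                          10 /* priority (assumed ordering key) */,
                                          (NMClientNotifyEventCb) my_notify_with_ptr_cb,
                                          sizeof(NMClientNotifyEventWithPtr));
    event->user_data = user_data;
}
```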
libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectMangaerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.
- There was a lingering performance issue with large number of D-Bus
objects. The patch tries hard that the implementation scales well. Of
course, when we cache N objects that have N-to-M references to other,
we still are fundamentally O(N*M) for runtime and memory consumption (with
M being the number of references between objects). But each part should behave
efficiently and well.
- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to a NMClient is through the caller's
GMainContext. This follows glib's style and effectively allows to use NMClient
in a multi threaded scenario. This implies to stick to a main context
upon construction and ensure that callbacks are only invoked when
iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means, libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context.
- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong, signals must be emitted in the right order
and when they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that for example during the initial
GetManagedObjects() reply, NMClient receive a large amount of state at once.
But we first apply all the changes to our GObject instances before
emitting any signals. The result is that signals are always emitted in a moment
when the cache is consistent. The unavoidable downside is that when you receive
a property changed signal, possibly many other properties changed
already and more signals are about to be emitted.
- NMDeviceWifi no longer modifies the content of the cache from client side
during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what NetworkManager
service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should be all for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Also before we constructed NMPropertiesInfo and
a had large amount of code to propagate properties from NMDBus* to NMObject.
That got completely reworked, but did not fundamentally change. You still need
about the same effort to create the NMLDBusMetaIface. Not using
generated bindings did not make anything worse (which tells about the
usefulness of generated code, at least in the way it was used).
- NMLDBusMetaIface and other meta data is static and immutable. This
avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile time checks to ensure the property types matches. It's pretty hard
to misuse them because it won't compile.
- The meta data now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type, it didn't know the expected D-Bus type.
- Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Thus these are not common assertions that indicate a bug
in libnm and are thus not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings and to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. That should be fixed first.
- Note that this changes the order in which we emit "notify:devices" and
"device-added" signals. I think it makes the most sense to emit first
"device-removed", then "notify:devices", and finally "device-added"
signals.
This changes behavior for commit 52ae28f6e5bf ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
- In NetworkManager, profiles can be invisible to the user by setting
"connection.permissions". Such profiles would be hidden by NMClient's
nm_client_get_connections() and their "connection-added"/"connection-removed"
signals.
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still exposes such
hidden NMRemoteConnection instances. This behavior was preserved.
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were build with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be a issue
on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller but
in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so it doesn't matter).
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With a checkout of [2], run
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c642e && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return -1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and set up a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager can currently
handle. Of course, that is a different issue. Here we just check how
long a plain `nmcli` invocation takes on the system.
```
do_cleanup() {
for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
nmcli connection delete uuid "$UUID"
done
for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
nmcli device delete "$DEVICE"
done
}
do_setup() {
do_cleanup
for i in {1..30}; do
nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
for j in $(seq $i 30); do
nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
done
done
systemctl restart NetworkManager.service
sleep 5
}
do_test() {
perf stat -r 50 -B nmcli 1>/dev/null
}
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) Same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) Same setup as N4), but run `nmcli monitor` and look at the RSS size
in `ps aux`.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
};

NMClientNotifyEventWithPtr *
_nm_client_notify_event_queue_with_ptr(NMClient *                   self,
                                       int                          priority,
                                       NMClientNotifyEventWithPtrCb callback,
                                       gpointer                     user_data);
void _nm_client_notify_event_queue_emit_obj_signal(NMClient *self,
                                                   GObject * source,
                                                   NMObject *nmobj,
                                                   gboolean  is_added /* or else removed */,
                                                   int       prio_offset,
                                                   guint     signal_id);
/*****************************************************************************/

GError *_nm_client_new_error_nm_not_running(void);
GError *_nm_client_new_error_nm_not_cached(void);

void _nm_client_dbus_call_simple(NMClient *          self,
                                 GCancellable *      cancellable,
                                 const char *        object_path,
                                 const char *        interface_name,
                                 const char *        method_name,
                                 GVariant *          parameters,
                                 const GVariantType *reply_type,
                                 GDBusCallFlags      flags,
                                 int                 timeout_msec,
                                 GAsyncReadyCallback callback,
                                 gpointer            user_data);

void _nm_client_dbus_call(NMClient *          self,
                          gpointer            source_obj,
                          gpointer            source_tag,
                          GCancellable *      cancellable,
                          GAsyncReadyCallback user_callback,
                          gpointer            user_callback_data,
                          const char *        object_path,
                          const char *        interface_name,
                          const char *        method_name,
                          GVariant *          parameters,
                          const GVariantType *reply_type,
                          GDBusCallFlags      flags,
                          int                 timeout_msec,
                          GAsyncReadyCallback internal_callback);

GVariant *_nm_client_dbus_call_sync(NMClient *          self,
                                    GCancellable *      cancellable,
                                    const char *        object_path,
                                    const char *        interface_name,
                                    const char *        method_name,
                                    GVariant *          parameters,
                                    const GVariantType *reply_type,
                                    GDBusCallFlags      flags,
                                    int                 timeout_msec,
                                    gboolean            strip_dbus_error,
                                    GError **           error);

gboolean _nm_client_dbus_call_sync_void(NMClient *     self,
                                        GCancellable * cancellable,
                                        const char *   object_path,
                                        const char *   interface_name,
                                        const char *   method_name,
                                        GVariant *     parameters,
                                        GDBusCallFlags flags,
                                        int            timeout_msec,
                                        gboolean       strip_dbus_error,
                                        GError **      error);

void _nm_client_set_property_sync_legacy(NMClient *  self,
                                         const char *object_path,
                                         const char *interface,
                                         const char *prop_name,
                                         const char *format_string,
                                         ...);
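/* Hypothetical usage sketch based only on the declarations above. The
 * "GetPermissions" method and its "(a{ss})" reply type are NetworkManager's
 * public D-Bus API; the callback and call site are invented for illustration
 * and are not real libnm code. */
static void
_permissions_reply_cb(GObject *source, GAsyncResult *result, gpointer user_data)
{
    /* complete the underlying GDBusConnection call here. */
}

static void
_fetch_permissions(NMClient *self)
{
    _nm_client_dbus_call_simple(self,
                                NULL /* cancellable */,
                                "/org/freedesktop/NetworkManager",
                                "org.freedesktop.NetworkManager",
                                "GetPermissions",
                                g_variant_new("()"),
                                G_VARIANT_TYPE("(a{ss})"),
                                G_DBUS_CALL_FLAGS_NONE,
                                -1 /* timeout_msec: default */,
                                _permissions_reply_cb,
                                NULL /* user_data */);
}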
libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectMangaerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.
- There was a lingering performance issue with large number of D-Bus
objects. The patch tries hard that the implementation scales well. Of
course, when we cache N objects that have N-to-M references to other,
we still are fundamentally O(N*M) for runtime and memory consumption (with
M being the number of references between objects). But each part should behave
efficiently and well.
- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to a NMClient is through the caller's
GMainContext. This follows glib's style and effectively allows to use NMClient
in a multi threaded scenario. This implies to stick to a main context
upon construction and ensure that callbacks are only invoked when
iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means, libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context.
- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong, signals must be emitted in the right order
and when they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()); the second sketch after this list illustrates
the idea. Note that, for example, during the initial GetManagedObjects() reply,
NMClient receives a large amount of state at once. But we first apply all the
changes to our GObject instances before emitting any signals. The result is
that signals are always emitted at a moment when the cache is consistent. The
unavoidable downside is that when you receive a property-changed signal,
possibly many other properties have changed already and more signals are
about to be emitted.
- NMDeviceWifi no longer modifies the content of the cache from the client
side during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what the NetworkManager
service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should all be for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping the NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Also, before, we constructed NMPropertiesInfo
and had a large amount of code to propagate properties from NMDBus* to NMObject.
That got completely reworked, but did not fundamentally change. You still need
about the same effort to create the NMLDBusMetaIface. Not using
generated bindings did not make anything worse (which says something about the
usefulness of generated code, at least in the way it was used).
- NMLDBusMetaIface and other metadata are static and immutable. This
avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile-time checks to ensure the property types match. It's pretty hard
to misuse them, because misuse won't compile (see the third sketch after this
list).
- The metadata now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type; it didn't know the expected D-Bus type.
- Previously, GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Thus these are not common assertions that indicate a bug
in libnm, and they are not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. That should be fixed first.
- Note that this changes the order in which we emit "notify::devices" and
"device-added" signals. I think it makes the most sense to emit
"device-removed" first, then "notify::devices", and finally "device-added".
This changes behavior compared to commit 52ae28f6e5bf ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
- In NetworkManager, profiles can be made invisible to the user by setting
"connection.permissions". Such profiles are hidden by NMClient's
nm_client_get_connections() and from the "connection-added"/"connection-removed"
signals.
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still expose such
hidden NMRemoteConnection instances. This behavior was preserved.
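The GMainContext rules above translate into a usage pattern like the
following sketch. nm_client_new() and nm_client_get_version() are real libnm
API; the rest is plain GLib, and the worker-thread setup is an assumption
about how a caller might apply the rules, not code from this patch:
```
#include <NetworkManager.h>

static gpointer
worker_thread(gpointer user_data)
{
    GMainContext *context = g_main_context_new();
    GError       *error   = NULL;
    NMClient     *client;

    /* Make this context the thread-default one, so that the NMClient
     * created below sticks to it for all callbacks and internal sources. */
    g_main_context_push_thread_default(context);

    client = nm_client_new(NULL, &error);
    if (!client) {
        g_printerr("failed to create NMClient: %s\n", error->message);
        g_clear_error(&error);
    } else {
        g_print("NetworkManager version: %s\n", nm_client_get_version(client));

        /* Only this thread may touch the client, and only while iterating
         * this context; a real consumer would run a GMainLoop on it instead
         * of just draining pending events as done here. */
        while (g_main_context_iteration(context, FALSE))
            ;
        g_object_unref(client);
    }

    g_main_context_pop_thread_default(context);
    g_main_context_unref(context);
    return NULL;
}
```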
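The unpack/apply/commit split can be illustrated with plain GObject
facilities. Everything below (QueuedChange, commit_changes()) is hypothetical;
libnm's real code tracks collected changes in its own structures rather than
via freeze/thaw, so this only shows the principle that notifications fire
after all state is applied:
```
#include <glib-object.h>

typedef struct {
    GObject    *object;    /* instance to update               */
    const char *property;  /* GObject property name            */
    GVariant   *value;     /* already unpacked during step 1   */
} QueuedChange;

static void
commit_changes(GArray *queue /* of QueuedChange */)
{
    guint i;

    /* Step 2: apply every change with notifications frozen, so nobody can
     * observe a half-updated cache through "notify" handlers. */
    for (i = 0; i < queue->len; i++) {
        QueuedChange *c = &g_array_index(queue, QueuedChange, i);

        g_object_freeze_notify(c->object);
        if (g_variant_is_of_type(c->value, G_VARIANT_TYPE_UINT32))
            g_object_set(c->object, c->property,
                         g_variant_get_uint32(c->value), NULL);
    }

    /* Step 3: thaw; all queued "notify::<property>" emissions happen only
     * now, when every object already carries its new state. */
    for (i = 0; i < queue->len; i++)
        g_object_thaw_notify(g_array_index(queue, QueuedChange, i).object);
}
```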
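Finally, the compile-time checks mentioned for NML_DBUS_META_PROPERTY_INIT_U()
can be approximated with a classic negative-array-size trick. This macro and
the types around it are invented for illustration, and checking only member
sizes is weaker than the real macros; it merely demonstrates how an
initializer can refuse to compile for a mismatched member:
```
#include <glib.h>

typedef struct {
    const char *dbus_property;
    char        dbus_type;
    gsize       offset;
} MetaProperty;

typedef struct {
    guint32  state;
    char    *name;
} ExampleObject;

/* The sizeof(char[...]) forces a negative array size, i.e. a compile error,
 * when the member's size does not match guint32 (D-Bus type "u"). */
#define META_PROPERTY_INIT_U(prop, structure, member)                        \
    {                                                                        \
        .dbus_property = (prop),                                             \
        .dbus_type     = 'u',                                                \
        .offset        = G_STRUCT_OFFSET(structure, member)                  \
                         + 0 * sizeof(char[sizeof(((structure *) 0)->member) \
                                           == sizeof(guint32) ? 1 : -1]),    \
    }

static const MetaProperty example_properties[] = {
    META_PROPERTY_INIT_U("State", ExampleObject, state),
    /* META_PROPERTY_INIT_U("Name", ExampleObject, name)  <-- won't compile */
};
```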
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were built with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that the RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be an issue
on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller, but
in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so it doesn't matter.
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With a checkout of [2], run
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c642e && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return 1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and set up a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager currently can
handle. Of course, that is a different issue. Here we just check how
long plain `nmcli` takes on the system.
```
do_cleanup() {
for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
nmcli connection delete uuid "$UUID"
done
for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
nmcli device delete "$DEVICE"
done
}
do_setup() {
do_cleanup
for i in {1..30}; do
nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
for j in $(seq $i 30); do
nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
done
done
systemctl restart NetworkManager.service
sleep 5
}
do_test() {
perf stat -r 50 -B nmcli 1>/dev/null
}
do_setup
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) Same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) Same setup as N4), but run `nmcli monitor` and look at `ps aux` for
the RSS size.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
/*****************************************************************************/
void _nm_client_get_settings_call(NMClient *self, NMLDBusObject *dbobj);
GCancellable *_nm_remote_settings_get_settings_prepare(NMRemoteConnection *self);
void _nm_remote_settings_get_settings_commit(NMRemoteConnection *self, GVariant *settings);
/*****************************************************************************/
void _nm_active_connection_state_changed_commit(NMActiveConnection *self, guint32 state, guint32 reason);
|
libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectMangaerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.
- There was a lingering performance issue with a large number of D-Bus
objects. The patch tries hard to make the implementation scale well. Of
course, when we cache N objects that hold references to each other, we
are still fundamentally O(N*M) in runtime and memory consumption (with
M being the number of references between objects). But each part should
behave efficiently and well.
- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to an NMClient happens through the caller's
GMainContext. This follows glib's style and effectively allows using NMClient
in a multi-threaded scenario. This implies sticking to a main context
upon construction and ensuring that callbacks are only invoked when
iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means that libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context (see the GMainContext sketch after this list).
- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong: signals must be emitted in the right order,
at the moment they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()); see the three-step sketch after this list.
Note that, for example, during the initial GetManagedObjects() reply,
NMClient receives a large amount of state at once. But we first apply all
the changes to our GObject instances before emitting any signals. The result
is that signals are always emitted at a moment when the cache is consistent.
The unavoidable downside is that when you receive a property changed signal,
possibly many other properties changed already and more signals are about
to be emitted.
- NMDeviceWifi no longer modifies the content of the cache from the client side
during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what the NetworkManager
service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should all be for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Also, before, we constructed NMPropertiesInfo and
had a large amount of code to propagate properties from NMDBus* to NMObject.
That got completely reworked, but did not fundamentally change. You still need
about the same effort to create the NMLDBusMetaIface. Not using
generated bindings did not make anything worse (which says something about the
usefulness of generated code, at least in the way it was used).
- NMLDBusMetaIface and other metadata is static and immutable. This
avoids copying it around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile-time checks to ensure the property types match. It's pretty hard
to misuse them because it won't compile (see the compile-time check sketch
after this list).
- The metadata now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That only matters when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type; it didn't know the expected D-Bus type (see the demarshal sketch
after this list).
- Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Thus these are not common assertions that indicate a bug
in libnm and are thus not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings and to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. That should be fixed first.
- Note that this changes the order in which we emit "notify:devices" and
"device-added" signals. I think it makes the most sense to emit first
"device-removed", then "notify:devices", and finally "device-added"
signals.
This changes behavior for commit 52ae28f6e5bf ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
- In NetworkManager, profiles can be made invisible to the user by setting
"connection.permissions". Such profiles are hidden from NMClient's
nm_client_get_connections() and from its "connection-added"/"connection-removed"
signals.
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still expose such
hidden NMRemoteConnection instances. This behavior was preserved.
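The GMainContext discipline above can be illustrated with a minimal sketch.
This is not libnm API; the Client type and function names are hypothetical,
only the glib calls are real:
```
#include <glib.h>

/* Hypothetical client object, pinned to the GMainContext that was the
 * thread-default when it was constructed. */
typedef struct {
    GMainContext *context;
} Client;

static Client *
client_new (void)
{
    Client *c = g_new0 (Client, 1);

    /* Capture the caller's thread-default context. */
    c->context = g_main_context_ref_thread_default ();
    return c;
}

static void
client_schedule_idle (Client *c, GSourceFunc func, gpointer user_data)
{
    GSource *source;

    /* Attach the source to the captured context. Plain g_idle_add()
     * would enqueue it in g_main_context_default() instead. */
    source = g_idle_source_new ();
    g_source_set_callback (source, func, user_data, NULL);
    g_source_attach (source, c->context);
    g_source_unref (source);
}
```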
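The three-step processing can be sketched as follows. Only the three step
names come from this commit; the CachedObject type and its fields are
hypothetical stand-ins:
```
#include <glib-object.h>

/* Hypothetical cached object: just enough state for the sketch. */
typedef struct {
    GObject   *gobj;     /* the public GObject instance */
    GPtrArray *pending;  /* names of changed properties; array created
                          * with g_ptr_array_new_with_free_func (g_free) */
} CachedObject;

/* Step 1: unpack the D-Bus message and only record what changed. */
static void
_dbus_handle_properties_changed (CachedObject *c, const char *prop_name)
{
    g_ptr_array_add (c->pending, g_strdup (prop_name));
}

/* Step 2: apply the recorded values to the GObject instance. Notifications
 * are frozen, so observers never see a half-updated cache. */
static void
_dbus_handle_obj_changed_dbus (CachedObject *c)
{
    g_object_freeze_notify (c->gobj);
    /* ... set the recorded values on c->gobj here ... */
}

/* Step 3: only now emit what was collected; the cache is consistent. */
static void
_dbus_handle_changes_commit (CachedObject *c)
{
    guint i;

    for (i = 0; i < c->pending->len; i++)
        g_object_notify (c->gobj, c->pending->pdata[i]);
    g_ptr_array_set_size (c->pending, 0);
    g_object_thaw_notify (c->gobj);
}
```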
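The compile-time checks can work roughly like this; the macro and struct here
are made up, only the idea of failing the build on a mismatched property type
is taken from NML_DBUS_META_PROPERTY_INIT_U():
```
#include <glib.h>

/* Hypothetical: fail the build unless the C field backing a D-Bus "u"
 * (guint32) property really is 32 bits wide. */
#define ASSERT_FIELD_IS_U32(c_type, field) \
    G_STATIC_ASSERT (sizeof (((c_type *) NULL)->field) == sizeof (guint32))

typedef struct {
    guint32 state;
    char    name[16];
} ExamplePriv;

ASSERT_FIELD_IS_U32 (ExamplePriv, state);      /* compiles */
/* ASSERT_FIELD_IS_U32 (ExamplePriv, name); */ /* would break the build */
```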
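The strict type checking can be sketched with a hypothetical helper (the
GVariant calls are real API): verify the variant against the expected D-Bus
type before demarshaling, instead of trusting the GObject property type alone:
```
#include <glib.h>

/* Reject values whose D-Bus type does not match the expected "u",
 * instead of coercing whatever the server happened to send. */
static gboolean
demarshal_u32 (GVariant *value, guint32 *out_val)
{
    if (!g_variant_is_of_type (value, G_VARIANT_TYPE_UINT32))
        return FALSE;  /* unexpected type exposed on D-Bus */
    *out_val = g_variant_get_uint32 (value);
    return TRUE;
}
```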
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f9085d, current tip of nm-1-20 branch
[2] 4fad8c7c642e, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were built with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that the RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be an issue
on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller, but
in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so it doesn't matter.
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With checkout of [2], run
```
prepare_checkout() {
rm -rf /tmp/nm-test && \
git checkout -B test 4fad8c7c642e && \
git clean -fdx && \
./autogen.sh --prefix=/tmp/nm-test && \
make -j 5 install && \
make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
for i in {1..10}; do
NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return 1
done
echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and set up a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager can currently
handle. Of course, that is a different issue. Here we just check how
long a plain `nmcli` invocation takes on the system.
```
do_cleanup() {
for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
nmcli connection delete uuid "$UUID"
done
for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
nmcli device delete "$DEVICE"
done
}
do_setup() {
do_cleanup
for i in {1..30}; do
nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
for j in $(seq $i 30); do
nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
done
done
systemctl restart NetworkManager.service
sleep 5
}
do_test() {
perf stat -r 50 -B nmcli 1>/dev/null
}
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) Same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) Same setup as N4), but run `nmcli monitor` and look at `ps aux` for
the RSS size.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
void _nm_vpn_connection_state_changed_commit(NMVpnConnection *self, guint32 state, guint32 reason);
/*****************************************************************************/

NMLDBusNotifyUpdatePropFlags
_nm_device_notify_update_prop_hw_address(NMClient *               client,
                                         NMLDBusObject *          dbobj,
                                         const NMLDBusMetaIface * meta_iface,
                                         guint                    dbus_property_idx,
                                         GVariant *               value);

/*****************************************************************************/

#endif /* __NM_LIBNM_UTILS_H__ */