We have "shared/nm-libnm-core-aux", which is shared code that can be used
by anybody (including libnm-core, src, libnm and clients).
We have "clients/common", which are helper function for clients. But
that implies that the code is inside "clients". I think it would be
useful to have auxiliary code that extends libnm, but is not only
usable by code in "clients". In other words, "shared/nm-libnm-aux"
is a better place than "clients/common", and I think most of the
functionality form "clients/common" should move there.
Currently, NMClient by default always fetches the permissions
("GetPermissions()") and refreshes them on the "CheckPermissions" signal.
Fetching permissions is relatively expensive, while they are not used
most of the time. Allow the user to opt out of this.
For that, add an NMClientInstanceFlags flag to enable/disable automatic
fetching. Also add a "permissions-state" property that allows the user
to understand whether the cached permissions are up to date or not.
This is a bit of an awkward API for handling this. E.g. you cannot
explicitly request permissions, you can only enable/disable fetching
permissions. And then you can watch the "permissions-state" property to
know whether you are ready. It's done this way because it fits the
previous model and extends the API with a (relatively) small number of
new functions and properties.
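A minimal sketch of the intended usage, assuming the flag and property
names introduced here (NM_CLIENT_INSTANCE_FLAGS_NO_AUTO_FETCH_PERMISSIONS,
"permissions-state"); the g_initable_init() step and error handling are
omitted for brevity:
```
#include <NetworkManager.h>

static void
permissions_state_changed (GObject *obj, GParamSpec *pspec, gpointer user_data)
{
    NMClient *client = NM_CLIENT (obj);

    if (nm_client_get_permissions_state (client) == NM_TERNARY_TRUE) {
        /* the cached permissions are now up to date and can be queried */
        NMClientPermissionResult res;

        res = nm_client_get_permission_result (client,
                                               NM_CLIENT_PERMISSION_WIFI_SCAN);
        g_print ("wifi.scan: %d\n", (int) res);
    }
}

static NMClient *
create_client_no_auto_permissions (void)
{
    NMClient *client;

    /* opt out of the automatic GetPermissions() fetching; the client
     * still needs to be initialized via g_initable_init() afterwards */
    client = g_object_new (NM_TYPE_CLIENT,
                           NM_CLIENT_INSTANCE_FLAGS,
                           (guint) NM_CLIENT_INSTANCE_FLAGS_NO_AUTO_FETCH_PERMISSIONS,
                           NULL);
    g_signal_connect (client,
                      "notify::" NM_CLIENT_PERMISSIONS_STATE,
                      G_CALLBACK (permissions_state_changed),
                      NULL);
    return client;
}
```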
Add a flags property to control the behavior of NMClient.
Possible future use cases:
- currently it always automatically fetches permissions. Often that
is not needed, and the user could opt out of it.
- currently, using sync init creates an internal GMainContext. This
has an overhead and may be undesirable. We could implement another
"sync" initialization that would merely iterate the caller's mainloop
until the initialization completes. A flag would allow the user to opt in.
- currently, NMClient always fetches all connection settings
automatically. Via a flag the user could opt out of that.
Instead NMClient could provide an API so the user can request
settings as they are needed.
On D-Bus, the permission names are just the PolicyKit action names, like
"org.freedesktop.NetworkManager.wifi.scan". But NMClient already
ignores all strings that it doesn't know at compile time and only
keeps track of well-known permissions.
And neither does the API (nm_client_get_permission_result()) allow
exposing permissions unknown to libnm.
Maybe the API of NMClient should be more generic and allow exposing
any permissions announced by NetworkManager. As it is however, it's
not necessary to track the permissions in a hash table. An array with
fixed indices is sufficient.
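Schematically, the idea is the following (an illustrative sketch, not
the literal libnm code):
```
#include <string.h>

typedef enum {
    PERM_NONE = 0,
    PERM_WIFI_SCAN,
    PERM_NETWORK_CONTROL,
    PERM_LAST = PERM_NETWORK_CONTROL,
} Permission;

typedef enum {
    PERM_RESULT_UNKNOWN = 0,
    PERM_RESULT_YES,
    PERM_RESULT_AUTH,
    PERM_RESULT_NO,
} PermissionResult;

/* one slot per well-known permission; no hash table needed */
static unsigned char perm_results[PERM_LAST + 1];

static Permission
permission_from_name (const char *name)
{
    if (strcmp (name, "org.freedesktop.NetworkManager.wifi.scan") == 0)
        return PERM_WIFI_SCAN;
    if (strcmp (name, "org.freedesktop.NetworkManager.network-control") == 0)
        return PERM_NETWORK_CONTROL;
    return PERM_NONE; /* not known at compile time: ignored */
}

static void
permission_update (const char *name, PermissionResult result)
{
    Permission p = permission_from_name (name);

    if (p != PERM_NONE)
        perm_results[p] = (unsigned char) result;
}
```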
Not being able to compare two NMIPAddress instances is a major
limitation. Add nm_ip_address_cmp_full(). The reason for adding
a "cmp()" function instead of an "equals()" function is that cmp is
more useful. We only want to add one of the two, so choose the
more powerful one. Yes, usually it's also not the variant we want
or the variant that is convenient to use; such is life.
Compare this to:
- nm_ip_route_equal_full(), which is an equal() method and not
a cmp().
- nm_ip_route_equal_full(), which has a guint flags argument
instead of a typedef for an enum with a properly generated
GType.
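Example usage (addr_equal() is an illustrative helper; equality falls
out of the cmp() for free):
```
#include <sys/socket.h>
#include <NetworkManager.h>

static gboolean
addr_equal (NMIPAddress *a, NMIPAddress *b)
{
    /* compare including the attribute names/values */
    return nm_ip_address_cmp_full (a, b, NM_IP_ADDRESS_CMP_FLAGS_WITH_ATTRS) == 0;
}

int
main (void)
{
    NMIPAddress *a = nm_ip_address_new (AF_INET, "192.168.1.5", 24, NULL);
    NMIPAddress *b = nm_ip_address_new (AF_INET, "192.168.1.5", 24, NULL);

    g_assert (addr_equal (a, b));
    nm_ip_address_unref (a);
    nm_ip_address_unref (b);
    return 0;
}
```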
I have a coredump that seems to indicate that nm_device_get_active_connection()
did not return a valid object. Let's add an assertion to try to catch the
issue earlier. Aside from that, this change isn't useful, but an nm_assert()
shouldn't hurt anyway.
The NMClient is associated with a certain GMainContext. Add a getter
function to expose that context.
The context is really not internal API of NMClient: the user must
iterate this context and be aware of it.
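For example (a sketch using the new getter):
```
#include <NetworkManager.h>

/* drain whatever is currently pending on the context that the
 * NMClient is bound to */
static void
drain_client_context (NMClient *client)
{
    GMainContext *context = nm_client_get_main_context (client);

    while (g_main_context_pending (context))
        g_main_context_iteration (context, FALSE);
}
```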
Usually, the nmobj never gets reused for one dbobj. That means,
we really don't expect a nml_dbus_property_o_notify() for a property
that was already cleared.
However, that is for example not the case with NMClient itself. As NetworkManager
gets restarted, the name owner gets lost, the property cleared but afterwards
it might get notified again.
That means, nml_dbus_property_o_notify() and nml_dbus_property_o_clear() must
work well together; otherwise, the sequence
  nml_dbus_property_o_notify()
  nml_dbus_property_o_clear()
  nml_dbus_property_o_notify()
leads to an assertion failure "nm_assert (!pr_o->is_ready)".
Fixes: ce0e898fb4 ('libnm: refactor caching of D-Bus objects in NMClient')
NMClient makes asynchronous D-Bus calls via g_dbus_connection_call().
This references the current GMainContext to later invoke the
asynchronous callback. Even when we cancel the asynchronous call,
the callback will still be invoked (later) to complete the request.
In particular this means when we destroy (unref) an NMClient, there
are quite possibly pending requests in the GMainContext. Although they
are cancelled, they keep the GMainContext alive.
With synchronous initialization, we have an internal GMainContext.
When we destroy the NMClient, we cannot just unhook the integrated
source; instead, we need to keep it integrated in the caller's main
context as long as there are pending requests.
Add a mechanism to track those pending requests and fix the leak for the
internal GMainContext. Also expose the same mechanism to the user via a new
API called nm_client_get_context_busy_watcher(). This allows the user
to know when it can stop iterating the main context and when all
resources are reclaimed.
For example, the following will lead to a crash:
```
for i in range(1,2000):
    nmc = NM.Client.new(None)
```
This creates a number of NMClient instances and destroys them again.
Note that here the GMainContext is never iterated, because
synchronous initialization does not iterate the caller's context. So,
while we correctly unref and dispose the created NMClient instances,
there are pending requests left in the inner GMainContext. These pile
up and soon the program will crash because it runs out of file descriptors.
We can have a similar problem with asynchronous initialization, when
we create a new GMainContext per client, and don't iterate it after
we are done with the client.
Note that this patch does not avoid the problem in general. The problem
cannot be avoided: the user must iterate the main context at some point,
otherwise resources (memory and file descriptors) are leaked.
Fixes: ce0e898fb4 ('libnm: refactor caching of D-Bus objects in NMClient')
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/merge_requests/347
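A sketch of the intended usage of the new API: after dropping the
NMClient, keep iterating the context until the busy watcher is gone, at
which point all pending requests have completed:
```
#include <NetworkManager.h>

static void
destroy_client_and_drain (NMClient *client)
{
    GMainContext *context = g_main_context_ref (nm_client_get_main_context (client));
    GObject *watcher = nm_client_get_context_busy_watcher (client);
    gpointer watcher_alive = watcher;

    g_object_add_weak_pointer (watcher, &watcher_alive);
    g_object_unref (client);

    /* cancelled-but-pending requests still reference the context;
     * drain them until the watcher gets destroyed */
    while (watcher_alive)
        g_main_context_iteration (context, TRUE);

    g_main_context_unref (context);
}
```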
No longer use GDBusObjectManagerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.
CHANGES
-------
- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for themselves. Note
that it is still fairly complex.
- There was a lingering performance issue with large numbers of D-Bus
objects. This patch tries hard to make the implementation scale well. Of
course, when we cache N objects that have references to each other,
we are still fundamentally O(N*M) in runtime and memory consumption (with
M being the number of references between objects). But each part should
behave efficiently.
- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to an NMClient happens through the
caller's GMainContext. This follows glib's style and effectively allows
using NMClient in a multi-threaded scenario (see the sketch after this
list). This implies sticking to one main context upon construction and
ensuring that callbacks are only invoked when iterating that context.
Also, NMClient itself shall never iterate the caller's context. This
also means libnm must never use g_idle_add() or g_timeout_add(), as
those enqueue sources in the g_main_context_default() context.
- Get the ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong: signals must be emitted in the right order
and at the time they happen. Note that when using GInitable's synchronous
initialization to initialize the NMClient instance, NMClient internally
still operates fully asynchronously. In that case NMClient has an internal
main context.
- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.
- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that, for example, during the initial
GetManagedObjects() reply, NMClient receives a large amount of state at once.
But we first apply all the changes to our GObject instances before
emitting any signals. The result is that signals are always emitted at a
moment when the cache is consistent. The unavoidable downside is that when
you receive a property-changed signal, possibly many other properties have
already changed and more signals are about to be emitted.
- NMDeviceWifi no longer modifies the content of the cache from client side
during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what NetworkManager
service exposes. Local modifications should be avoided.
- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should be all for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.
- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.
- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Also, before, we constructed
NMPropertiesInfo and had a large amount of code to propagate properties
from NMDBus* to NMObject. That got completely reworked, but the effort did
not fundamentally change: you still need about the same amount of code to
create the NMLDBusMetaIface. Not using generated bindings did not make
anything worse (which says something about the usefulness of the generated
code, at least in the way it was used).
- NMLDBusMetaIface and other meta data is static and immutable. This
avoids copying it around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile-time checks to ensure the property types match. It's pretty
hard to misuse them, because mistakes won't compile.
- The meta data now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type, it didn't know the expected D-Bus type.
- Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Thus these are not common assertions that indicate a bug
in libnm and are thus not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings and to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. That should be fixed first.
- Note that this changes the order in which we emit "notify:devices" and
"device-added" signals. I think it makes the most sense to emit first
"device-removed", then "notify:devices", and finally "device-added"
signals.
This changes behavior for commit 52ae28f6e5 ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.
- In NetworkManager, profiles can be invisible to the user by setting
"connection.permissions". Such profiles are hidden by NMClient's
nm_client_get_connections() and omitted from the
"connection-added"/"connection-removed" signals.
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still expose such
hidden NMRemoteConnection instances. This behavior was preserved.
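The per-context usage pattern mentioned in the GMainContext point above,
as a sketch (the usual GLib thread-default pattern; error handling
simplified):
```
#include <NetworkManager.h>

static gpointer
worker_thread (gpointer user_data)
{
    GMainContext *context = g_main_context_new ();
    NMClient *client;
    GError *error = NULL;

    g_main_context_push_thread_default (context);

    /* the client binds to the thread-default context at construction */
    client = nm_client_new (NULL, &error);
    if (client) {
        /* ... use the client; all callbacks are dispatched only when
         * this thread iterates @context ... */
        g_object_unref (client);
    } else
        g_clear_error (&error);

    g_main_context_pop_thread_default (context);
    g_main_context_unref (context);
    return NULL;
}
```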
NUMBERS
-------
I compared 3 versions of libnm.
[1] 962297f908, current tip of nm-1-20 branch
[2] 4fad8c7c64, current master, immediate parent of this patch
[3] this patch
All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were built with
$ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug
Note that RPM build already stripped the library.
---
N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be an
issue on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller,
but in these tests it had large (and bogus) ELF notes. Anyway, the point
is to show the relative sizes, so it doesn't matter.
[1] 4075552 (102.7%)
[2] 3969624 (100.0%)
[3] 3705208 ( 93.3%)
---
N2) `size /usr/lib64/libnm.so.0.1.0`:
text data bss dec hex filename
[1] 1314569 (102.0%) 69980 ( 94.8%) 10632 ( 80.4%) 1395181 (101.4%) 1549ed /usr/lib64/libnm.so.0.1.0
[2] 1288410 (100.0%) 73796 (100.0%) 13224 (100.0%) 1375430 (100.0%) 14fcc6 /usr/lib64/libnm.so.0.1.0
[3] 1229066 ( 95.4%) 65248 ( 88.4%) 13400 (101.3%) 1307714 ( 95.1%) 13f442 /usr/lib64/libnm.so.0.1.0
---
N3) Performance test with test-client.py. With checkout of [2], run
```
prepare_checkout() {
    rm -rf /tmp/nm-test && \
    git checkout -B test 4fad8c7c64 && \
    git clean -fdx && \
    ./autogen.sh --prefix=/tmp/nm-test && \
    make -j 5 install && \
    make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
    NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
    for i in {1..10}; do
        NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return -1
    done
    echo "done!"
}
prepare_checkout
prepare_test
time do_test
```
[1] real 2m14.497s (101.3%) user 5m26.651s (100.3%) sys 1m40.453s (101.4%)
[2] real 2m12.800s (100.0%) user 5m25.619s (100.0%) sys 1m39.065s (100.0%)
[3] real 1m54.915s ( 86.5%) user 4m18.585s ( 79.4%) sys 1m32.066s ( 92.9%)
---
N4) Performance. Run NetworkManager from build [2] and set up a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager currently can
handle. Of course, that is a different issue. Here we just check how
long a plain `nmcli` invocation takes on that system.
```
do_cleanup() {
    for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
        nmcli connection delete uuid "$UUID"
    done
    for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
        nmcli device delete "$DEVICE"
    done
}
do_setup() {
    do_cleanup
    for i in {1..30}; do
        nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
        for j in $(seq $i 30); do
            nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
        done
    done
    systemctl restart NetworkManager.service
    sleep 5
}
do_test() {
    perf stat -r 50 -B nmcli 1>/dev/null
}
do_test
```
[1]
Performance counter stats for 'nmcli' (50 runs):
456.33 msec task-clock:u # 1.093 CPUs utilized ( +- 0.44% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,900 page-faults:u # 0.013 M/sec ( +- 0.02% )
1,408,675,453 cycles:u # 3.087 GHz ( +- 0.48% )
1,594,741,060 instructions:u # 1.13 insn per cycle ( +- 0.02% )
368,744,018 branches:u # 808.061 M/sec ( +- 0.02% )
4,566,058 branch-misses:u # 1.24% of all branches ( +- 0.76% )
0.41761 +- 0.00282 seconds time elapsed ( +- 0.68% )
[2]
Performance counter stats for 'nmcli' (50 runs):
477.99 msec task-clock:u # 1.088 CPUs utilized ( +- 0.36% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
5,948 page-faults:u # 0.012 M/sec ( +- 0.03% )
1,471,133,482 cycles:u # 3.078 GHz ( +- 0.36% )
1,655,275,369 instructions:u # 1.13 insn per cycle ( +- 0.02% )
382,595,152 branches:u # 800.433 M/sec ( +- 0.02% )
4,746,070 branch-misses:u # 1.24% of all branches ( +- 0.49% )
0.43923 +- 0.00242 seconds time elapsed ( +- 0.55% )
[3]
Performance counter stats for 'nmcli' (50 runs):
352.36 msec task-clock:u # 1.027 CPUs utilized ( +- 0.32% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
4,790 page-faults:u # 0.014 M/sec ( +- 0.26% )
1,092,341,186 cycles:u # 3.100 GHz ( +- 0.26% )
1,209,045,283 instructions:u # 1.11 insn per cycle ( +- 0.02% )
281,708,462 branches:u # 799.499 M/sec ( +- 0.01% )
3,101,031 branch-misses:u # 1.10% of all branches ( +- 0.61% )
0.34296 +- 0.00120 seconds time elapsed ( +- 0.35% )
---
N5) same setup as N4), but run `PAGER= /bin/time -v nmcli`:
[1]
Command being timed: "nmcli"
User time (seconds): 0.42
System time (seconds): 0.04
Percent of CPU this job got: 107%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34456
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6128
Voluntary context switches: 1298
Involuntary context switches: 1106
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[2]
Command being timed: "nmcli"
User time (seconds): 0.44
System time (seconds): 0.04
Percent of CPU this job got: 108%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 34452
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6169
Voluntary context switches: 1849
Involuntary context switches: 142
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
[3]
Command being timed: "nmcli"
User time (seconds): 0.32
System time (seconds): 0.02
Percent of CPU this job got: 102%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 29196
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 5059
Voluntary context switches: 919
Involuntary context switches: 685
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
---
N6) same setup as N4), but run `nmcli monitor` and look at `ps aux` for
the RSS size.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
[1] me 1492900 21.0 0.2 461348 33248 pts/10 Sl+ 15:02 0:00 nmcli monitor
[2] me 1490721 5.0 0.2 461496 33548 pts/10 Sl+ 15:00 0:00 nmcli monitor
[3] me 1495801 16.5 0.1 459476 28692 pts/10 Sl+ 15:04 0:00 nmcli monitor
Add a new read-only "InterfaceFlags" property to the Device interface
to export via D-Bus kernel flags and possibly other NM specific
flags. At the moment IFF_UP and IFF_LOWERUP are implemented.
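From the client side, the property would be consumed roughly like this
(a sketch, assuming the libnm getter that accompanies this D-Bus
property):
```
#include <NetworkManager.h>

static void
print_link_state (NMDevice *device)
{
    NMDeviceInterfaceFlags flags = nm_device_get_interface_flags (device);

    g_print ("%s: up=%d lower-up=%d\n",
             nm_device_get_iface (device),
             !!(flags & NM_DEVICE_INTERFACE_FLAG_UP),
             !!(flags & NM_DEVICE_INTERFACE_FLAG_LOWER_UP));
}
```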
With this test the stub service simulates a failure to add-and-activate
the connection.
However, the implementation of the stub service was not simulating the
real behavior of the NetworkManager service. libnm will add the possibility
to assert against invalid server behavior by setting LIBNM_CLIENT_DEBUG=error.
With that change, libnm will complain that the stub service behaves
invalidly, and rightly so.
Instead of fixing the test, just drop it.
libnm is going to change so that it still emits signals while the
instance gets destructed. In particular, when the device gets removed
from the NMClient cache, the references to other objects are
cleared (and consequently property-changed signals emitted).
This will cause a test failure, because the signal was not unsubscribed:
test:ERROR:libnm/tests/test-nm-client.c:694:device_ac_changed_cb: assertion failed: (nm_device_get_active_connection (NM_DEVICE (device)) != NULL)
The advantage of nmtstc_client_new() is that it randomly either uses the
synchronous or asynchronous constructor. Of course, both should behave
pretty much the same. Hence, this increases our test coverage.
The server doesn't support WiMAX anymore, hence there is no point in keeping
this functionality. While we cannot drop the functions, make them do nothing.
The code in NMManager is still there. But since we will soon drop
NMManager entirely, it doesn't matter.
It's nicely trivial and independent. Move it to a separate place,
to avoid cluttering the more complicated code and to make it testable.
Also, use binary search to find the value by string.
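Schematically, the lookup works like this (an illustrative sketch, not
the literal code): the table is sorted by name, so bsearch() replaces a
linear scan:
```
#include <stdlib.h>
#include <string.h>

typedef struct {
    const char *name;
    int         value;
} NameValue;

/* the table must stay sorted by strcmp() order of .name */
static const NameValue table[] = {
    { "default", -1 },
    { "disabled", 0 },
    { "enabled",  1 },
};

static int
cmp_entry (const void *key, const void *element)
{
    return strcmp (key, ((const NameValue *) element)->name);
}

static const NameValue *
lookup_by_name (const char *name)
{
    return bsearch (name, table,
                    sizeof (table) / sizeof (table[0]),
                    sizeof (table[0]),
                    cmp_entry);
}
```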
These setters not only invoke a synchronous D-Bus call (ignoring the
return value). They also modify the content of the cache client-side,
bypassing the information that we receive via notifications from the
server.
Also, they don't emit property changed signals, but in any case they
are broken beyond repair.
Fully mark them as deprecated. Note that they were already marked as
_NM_DEPRECATED_SYNC_METHOD. However, that does not actually mark
the API as deprecated, because fully deprecating all synchronous
methods is premature at this point.
It's useful because it's easy to get overwhelmed by the logging output.
The timestamp makes it easier to keep track. Also, it allows seeing how long
things take.
NMDeviceVxlan has some "q" type properties. They were not handled:
$ G_MESSAGES_DEBUG=all PAGER= LIBNM_GLIB_DEBUG=properties-changed nmcli 2>&1 | grep "couldn't be set from D-Bus type"
libnm-Message: 10:44:04.538: demarshal_generic: NMDeviceVxlan:dst-port (type guint) couldn't be set from D-Bus type q.
libnm-Message: 10:44:04.538: demarshal_generic: NMDeviceVxlan:src-port-max (type guint) couldn't be set from D-Bus type q.
libnm-Message: 10:44:04.538: demarshal_generic: NMDeviceVxlan:src-port-min (type guint) couldn't be set from D-Bus type q.
libnm-Message: 10:44:04.539: demarshal_generic: NMDeviceWireGuard:listen-port (type guint) couldn't be set from D-Bus type q.
NM 1.22 is not released yet and 1.20.6 will happen before 1.22.0, so
we can introduce the new API with version libnm_1_20_6 in both
releases without having duplicate symbols on 1.22.
Our NMObject implementations should behave in a similar manner.
For example, string properties should be coerced in a consistent
manner.
Add functions _nml_coerce_property_*() for that. Of course, they
are trivial. Their value is not that they encapsulate some complex
implementation, but that they convey a general concept of how we
want to handle certain properties in NMClient's object cache.
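A sketch of one such helper (the real helpers follow the same shape):
the policy "empty strings from D-Bus are exposed as NULL" lives in
exactly one place:
```
static inline const char *
_nml_coerce_property_str_not_empty (const char *str)
{
    return str && str[0] ? str : NULL;
}
```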
Default values should preferably be zero and/or a value that indicates
that the property is unknown/unset.
In practice, this property is not unset because it's present
on the D-Bus API.
Yes, by default (server side) devices do autoconnect.
But that does not mean that an NMObject whose GObject property was
not set via D-Bus shall default to TRUE.
Default values should preferably be FALSE, because that is what we get
by default (memset(0)).
NMAccessPoint is an NMObject, and exclusively created and initialized by
NMClient. In practice, the D-Bus property is always present on D-Bus, so
the default value is not used. However, a better default is anyway
"unknown", also because that has zero numeric value.
WiMAX is deprecated since NetworkManager 1.2.0. Note also that
NetworkManager on the server side no longer supports this type, hence
the server's D-Bus API will never expose devices of this type.
Note that NMDeviceWimax and NMWimaxNsp are NMObject types. That means,
they are instantiated by NMClient to represent information on the D-Bus
interface. As NetworkManager no longer exposes WiMAX devices, such
devices are never created. Note that it makes no sense that a user would
directly instantiate NMObject types, because they only work together with
NMClient.
Don't drop the related symbols and definitions from libnm, so that there
is no API/ABI change (as far as building and linking is concerned). But
make the types defunctional (which of course is a behavioral API change).
Calling the API now triggers a g_return_*() warning.
Also belatedly mark the WimaxNsp API as deprecated. It should have been
done in 1.2. Note that here we deprecate the API and retire it at the
same time. Optimally, we would have deprecated it a few releases ago,
before retiring it. However, marking something for deprecation is anyway
no excuse for anything. I mean, removing or retiring API is usually
painful, regardless of whether it was marked for deprecation or not. In this
case, there is no possibility that a libnm user gets hold of an NMDeviceWimax
or NMWimaxNsp instance, because NMClient simply no longer instantiates
them. Hence, this change should not affect any user in practice.
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/merge_requests/316
These types are all subclasses of NMObject. These instances are commonly
created by NMClient itself. It makes no sense that a user would
instantiate the type. Much less does it make sense to subclass them.
Hide the object and class structures from public API.
This is an API and ABI break, but of something that is very likely
unused.
This is mainly done to embed the private structure in the object itself.
This has benefits for performance and debuggability. But most
importantly, we can obtain a static offset at which to access the private
data. That means we can use this information to access the data pointer
generically, as we will need later.
This is not done for the internal types NMManager, NMRemoteSettings,
and NMDnsManager. These types will be dropped later.
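The pattern looks roughly like this ("NMFoo" is a hypothetical example
type; the real types follow the same shape):
```
#include <NetworkManager.h>

typedef struct {
    char *some_string;
    guint some_number;
} NMFooPrivate;

typedef struct _NMFoo {
    NMObject     parent;
    NMFooPrivate _priv;   /* embedded, not allocated via G_ADD_PRIVATE */
} NMFoo;

/* the offset of the private data is now a compile-time constant,
 * offsetof (NMFoo, _priv), usable for generic access */
#define NM_FOO_GET_PRIVATE(self) (&((NMFoo *) (self))->_priv)
```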
Having the NMClient/NMClientClass structs in the public header allows
the user to subclass these types. Subclassing this type was never
intended, nor is it supported, nor does it seem useful. Subclassing only
makes sense if the type has suitable hooks to extend it in a
meaningful way. NMClient has none, and anybody trying to derive from
this class would be better off delegating to it instead.
Also, having these structs in the public header prevents us from
embedding the private data in the object structure itself.
It thus has a runtime overhead and is less convenient for debugging (it's
hard to find the private data pointer in gdb).
Most importantly, there is no easy way to find the offset of the private
data fields, short of calling NM_CLIENT_GET_PRIVATE() -- which currently
is a macro. As we later want to look up the offset of the private data
generically, we would need NM_CLIENT_GET_PRIVATE() as a function.
By having an internal, statically known offset, we can avoid that and
use the offset directly.
Also drop all signal hooks. They are also not useful.
This is an ABI and API change, but of something that we never wanted to
be part of the ABI/API, and which hopefully nobody is using.
We now include "libnm/nm-libnm-utils.h" for all compilation of libnm sources.
Let that one also include "nm-types.h". In the end, it's anyway needed
almost everywhere.
The majority of sources in "libnm/" are implementations of NMObject.
"nm-libnm-utils.h" will contain common definitions for handling such
objects. This means, most of the source files under libnm will require
this include. Include it by default.
Commonly, a library (like libnm) is not supposed to log anything.
Logging is not a suitable way to notify the calling application
about anything. When something of importance happens, then the
application must be notified via the library's API.
However, logging can be very useful for debugging to see what is going
on. Add a logging macro that by default does nothing, but can be turned
on via an environment variable "LIBNM_CLIENT_DEBUG=debug".
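A sketch of such a macro (the real implementation parses more levels
and is more careful about thread safety):
```
#include <glib.h>
#include <stdlib.h>
#include <string.h>

static gboolean
_nml_debug_enabled (void)
{
    static int enabled = -1;

    if (G_UNLIKELY (enabled == -1)) {
        const char *s = getenv ("LIBNM_CLIENT_DEBUG");

        enabled = (s && strstr (s, "debug")) ? 1 : 0;
    }
    return enabled == 1;
}

/* does nothing unless the user opted in via the environment */
#define NML_DEBUG(fmt, ...) \
    G_STMT_START { \
        if (_nml_debug_enabled ()) \
            g_printerr ("libnm-dbg: " fmt "\n", ##__VA_ARGS__); \
    } G_STMT_END
```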
Another point is that libnm relies on the server-side NetworkManager
D-Bus interface to behave in an expected manner. For example, we require a
D-Bus object "org.freedesktop.NetworkManager" to be present and certain
D-Bus interfaces to be implemented.
However, libnm should treat NetworkManager as an external and untrusted
component. That means we cannot assert against the expectations we have.
There are two reasons for this:
- a bug in NetworkManager, dbus-daemon or elsewhere may cause such errors.
This must not trigger an assertion failure in the client
application, at least not unless requested.
- libnm must be forward and backward compatible with different
NetworkManager server versions. That is only possible by ignoring
anything that is unexpected. Asserting by default might prevent
implementing API changes, both on the libnm and server side.
Note that we also don't notify the calling application via dedicated
API. On the one hand, these things *can* happen. On the other hand, what
would the calling application do about it anyway? libnm by default must
just behave gracefully and pretend all is good.
For testing, development and debugging, that is however not useful. We
want the user to be able to opt in to strict API validation. The user will
be able to do that by setting "LIBNM_CLIENT_DEBUG=warning", which causes
API violations to be logged with g_warning(). These become assertions when
running with G_DEBUG=fatal-warnings.
This is inspired by GDBus' G_DBUS_DEBUG variable.
Note that the LIBNM_CLIENT_DEBUG environment variable is undocumented,
unstable API. It's used for debugging and testing of the libnm version at
hand. There is no guarantee that a different libnm version behaves the
same way.
It's not yet implemented, but obviously it's interesting to
get the name owner to which the NMClient is currently connected.
Not only that: the name-owner property really says whether
NM is currently running or not.
The used GDBusConnection should be configurable when creating the
NMClient instance. Automatically choosing one of the g_bus_get()
singletons is fine by default, but it's an unnecessary limitation.
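A sketch, assuming a construct-only "dbus-connection" property as
described:
```
#include <NetworkManager.h>

static NMClient *
client_for_connection (GDBusConnection *dbus_connection,
                       GCancellable    *cancellable,
                       GError         **error)
{
    return g_initable_new (NM_TYPE_CLIENT, cancellable, error,
                           NM_CLIENT_DBUS_CONNECTION, dbus_connection,
                           NULL);
}
```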
We will require this later. The NMClient shall be tied to the GMainContext
at the moment when the instance gets created. This allows the user to have
multiple, independent NMClient instances (on different threads, with
different GMainContexts).
Currently this is still unused; it will be used later.
This looks up the GParamSpec from the obj_properties and is
thus more efficient. Also, the generated _notify() function
has the proper argument type and is thus generally preferable.
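The underlying pattern is standard GObject (names here are
illustrative):
```
#include <glib-object.h>

typedef enum {
    PROP_0,
    PROP_NAME,
    _PROP_N,
} Prop;

static GParamSpec *obj_properties[_PROP_N];

/* instead of g_object_notify (self, "name"), which looks up the
 * GParamSpec by string at runtime, emit via the cached pspec */
static void
_notify (GObject *self, Prop prop)
{
    g_object_notify_by_pspec (self, obj_properties[prop]);
}
```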