The test tries to do IPv4 DAD. That necessarily involves waiting
for a timeout. Since the NMArpingManager spawns arping processes,
the precise timings depend on the load of the machine and may be
large in some cases.
Usually, our test runs fast to successful completion. However,
sometimes it can take several hundred milliseconds.
Instead of increasing the timeout to a large value (which would
needlessly extend the run time of our tests in the common case),
first try with a reasonably short timeout that commonly results in
success. If the test with the short timeout fails, just try again
with an excessively large timeout.
This saves about 400 msec in the common case, while still covering
the races that we saw, where not even 250 msec of wait time were
sufficient.
Previously, we would kill the client when the lease expired and
restart it 3 times at 2-minute intervals before failing the
connection. If the client is killed after it received a NACK from the
server, it doesn't have the chance to delete the lease file, and the
next time it is started it will request the same lease again.
Also, the previous restart logic was a bit convoluted.
Since clients already know how to deal with NACKs, let them continue
for a grace period after the expiry. When the grace period ends, we
fail the method and this can either fail the whole connection or keep
it active depending on the may-fail configuration.
https://bugzilla.gnome.org/show_bug.cgi?id=783391
I dislike the static hash table that caches the integer counter for
numbered paths. Let's instead cache the counter in the class instance
itself -- since the class contains the information how the export
path should be exported.
However, we cannot use a plain integer field inside the class structure,
because the class is copied between derived classes. For example,
NMDeviceEthernet and NMDeviceBridge both get a copy of the NMDeviceClass
instance. Hence, the class doesn't contain the counter directly, but
a pointer to one counter that can be shared between sibling classes.
Previously, we used the generated GDBusInterfaceSkeleton types and glued
them via the NMExportedObject base class to our NM types. We also used
GDBusObjectManagerServer.
Don't do that anymore. The resulting code was more complicated despite (or
because of?) using generated classes. It was hard to understand and complex,
had ordering issues, and had a runtime and memory overhead.
This patch refactors this entirely and uses the lower layer API GDBusConnection
directly. It replaces the generated code, GDBusInterfaceSkeleton, and
GDBusObjectManagerServer. All this is now done by NMDbusObject and NMDBusManager
and static descriptor instances of type GDBusInterfaceInfo.
This adds a net plus of more than 1300 lines of hand-written code. I claim
that this implementation is easier to understand. Note that previously we
also required extensive and complex glue code to bind our objects to the
generated skeleton objects. Instead, we now glue our objects directly to
GDBusConnection. The result is more immediate and gets rid of layers of
code in between.
Now that the D-Bus glue is more under our control, we can address issues and
bottlenecks better, instead of adding code to bend the generated skeletons
to our needs.
Note that the current implementation now only supports one D-Bus connection.
That was effectively the case already, although there were places (and still are)
where the code pretends it could also support connections from a private socket.
We dropped private socket support mainly because it was unused, untested and
buggy, but also because GDBusObjectManagerServer could not export the same
objects on multiple connections. Now it would be rather straightforward to
fix that and re-introduce ObjectManager on each private connection. But this
commit doesn't do that yet, and the new code intentionally supports only one
D-Bus connection.
Also, the D-Bus startup was simplified. There is no retry: either nm_dbus_manager_start()
succeeds, or it detects the initrd case. In the initrd case, the bus manager never tries to
connect to D-Bus. Since the initrd scenario is not yet used/tested, this is good enough
for the moment. It could be easily extended later, for example with polling whether the
system bus appears (like was done previously). Also, restart of D-Bus daemon isn't
supported either -- just like before.
Note how NMDBusManager now implements the ObjectManager D-Bus interface
directly.
Also, this fixes race issues in the server, by no longer delaying
PropertiesChanged signals. NMExportedObject would collect changed
properties and send the signal out in idle_emit_properties_changed()
on idle. This messes up the ordering of change events w.r.t. other
signals and events on the bus. Note that not only NMExportedObject
messed up the ordering. Also the generated code would hook into
notify() and process change events in an idle handler, exhibiting the
same ordering issue too.
No longer do that. PropertiesChanged signals will be sent right away
by hooking into dispatch_properties_changed(). This means, changing
a property in quick succession will no longer be combined and is
guaranteed to emit signals for each individual state. Quite possibly
we now emit more PropertiesChanged signals than before.
However, we are now able to group a set of changes by using standard
g_object_freeze_notify()/g_object_thaw_notify(). We probably should
make more use of that.
Also, now that our signals are all handled in the right order, we
might find places where we still emit them in the wrong order. But that
is then due to the order in which our GObjects emit signals, not due
to an ill behavior of the D-Bus glue. Possibly we need to identify
such ordering issues and fix them.
Numbers (for contrib/rpm --without debug on x86_64):
- the patch changes the code size of NetworkManager by
- 2809360 bytes
+ 2537528 bytes (-9.7%)
- Runtime measurements are harder because there is a large variance
during testing. In other words, the numbers are not reproducible.
Currently, the implementation performs no caching of GVariants at all,
but it would be rather simple to add it, if that turns out to be
useful.
Anyway, without making a strong claim, it seems that the new form tends to
perform slightly better. That would be no surprise.
$ time (for i in {1..1000}; do nmcli >/dev/null || break; echo -n .; done)
- real 1m39.355s
+ real 1m37.432s
$ time (for i in {1..2000}; do busctl call org.freedesktop.NetworkManager /org/freedesktop org.freedesktop.DBus.ObjectManager GetManagedObjects > /dev/null || break; echo -n .; done)
- real 0m26.843s
+ real 0m25.281s
- Regarding RSS size, just looking at the processes under similar
  conditions doesn't show a large difference. On my system they
  consume about 19MB RSS. It seems that the new version has a
  slightly smaller RSS size.
- 19356 RSS
+ 18660 RSS
The change doesn't really make a difference. I thought it would, so I
made it. But it turns out (as the code correctly assumes) that, while
the notifications are frozen, it's OK to leave the property in an
inconsistent state while emitting the notify signal.
Still, it feels slightly more correct this way, so keep the change.
The notify() signal is not emitted while the object properties are
blocked via g_object_freeze_notify(). That makes it unsuitable to
emit a notification for the "peer" property whenever the device's "parent"
property changes. Because especially with freeze/thaw, we want to emit
both signals in the same batch, not first emit change signals for "parent",
and then in a second run the signals for "peer".
Use the existing parent_changed_notify() virtual function instead.
NAP connections are a bit special, in that they also have a [bridge]
setting, but their connection.type is "bluetooth".
The canonical way to check whether a bluetooth connection is of NAP type
is by calling _nm_connection_get_setting_bluetooth_for_nap().
So, instead of checking for bluetooth.type "pan" or "dun", check the
opposite: whether the connection is of NAP type. In practice it's the
same, but let's check for NAP consistently via get_setting_bluetooth_for_nap().
Bluetooth tethering using DUN or PANU is a common way to expose a
metered 3G or 4G connection from a phone to a laptop. We deliberately
ignore NAP connections, which is where we’re sharing internet from the
laptop to another device.
We could also set GUESS_YES for WiMAX connections, but NetworkManager
doesn’t support them any more. Add a comment about that.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
https://bugzilla.gnome.org/show_bug.cgi?id=794120
If SSID is an empty string there's no need to call nm_wifi_ap_set_ssid
as it won't do anything. It also has an assert checking that NULL is
passed for an empty SSID and we were passing a non-NULL pointer.
This bug was not causing a crash for me because of the !IS_NM_DEVICE_IWD
check and because my glib version probably had the assertion within
NM_IWD_MANAGER_GET_PRIVATE disabled.
While there, change the g_signal_connect line to use the macro for the
signal name.
Make sure .set_enabled uses the Device.Powered property to basically
bring the netdev UP and DOWN as I understand is expected by the
nm_device logic.
Device.Powered should generally reflect the UP state immediately but
just to avoid possible race conditions .is_available() will now return
a value that is an AND of the local "enabled" state and IWD's Powered
property.
Change from the default D-Bus call timeout (-1) to infinite (G_MAXINT)
because the call may now include the secret requests which have their
own timeout policies.
Remove the code (mostly copied from nm-device-wifi.c) that handles
checking if the secrets were provided and requesting missing secrets
before starting a connection attempt. Instead, request secrets when
we're asked for them by IWD through its agent interface. This happens
while the D-Bus Connect call is running. We change the NMDevice from
the CONFIG state to NEED_AUTH and back to CONFIG once we have sent
the secrets back to IWD.
The current code would require the secrets only based on whether a
network is a KnownNetwork, but IWD may need a new password even for
KnownNetworks if the last connection attempt has failed.
Now that every call to nm_device_set_ip_iface() and nm_device_set_ip_ifindex()
is checked, and setting an interface that does not exist causes the device
state to fail, we no longer need to allow setting an ip-iface if we are
unable to retrieve the ip-ifindex.
Depending on the bearer's configuration method, the data-port is
either a networking interface or a tty for PPP.
Let's treat them strictly separately.
Also, rework how NM_MODEM_DATA_PORT was used in both contexts.
Instead, use the one that we actually care about.
Also, when nm_device_set_ip_ifindex() fails, fail activation
right away.
Also, we try early to resolve the network interface's name to
an ifindex. If that fails, the device is already gone and we
fail early.
nm_device_modem_new() is only called with a newly created
NMModemBroadband or NMModemOfono instance.
See the callers
- NMModemManager:handle_new_modem()
- NMWwanFactory:modem_added_cb()
- NMDeviceModem:nm_device_modem_new()
Hence, at that point, the modem cannot yet have a data-port
or ip-iface set, because that is only obtained later.
- don't even bother to look into the platform cache, but use
if_indextoname() / if_nametoindex(). In most cases, we obtained
the ifindex/ifname not from the platform cache in the first
place. Hence, there is a race, where the interface might not
exist.
However, we first try to process events of the platform cache, hoping
that the cache contains an interface for the given ifindex/ifname.
- let set_ip_ifindex() and set_ip_iface() both return a boolean
  value to indicate whether an ip-interface is set, that is,
  whether we have a positive ip_ifindex. That seems more interesting
  information than returning whether anything changed.
- as before, set_ip_ifindex() can only clear an ifindex/ifname,
or error out without doing anything. That is different from
set_ip_iface(), which will also set an ifname if no ifindex
can be resolved. That is currently ugly, because then ip-ifindex
and ip-iface don't agree. That shall be improved in the future
by:
- trying to set an interface that cannot be resolved shall
lead to a disconnect in any case.
- we shall make less use of the ip-iface and rely more on the
ifindex.
The error should be freed by callback functions, but only
_monitor_bridges_cb() actually does it. Simplify this by letting the
caller own the error.
Fixes: 830a5a14cb
Platform invokes change events while reading netlink events. However,
platform code is not re-entrant and calling into platform again is not
allowed (aside operations that do not process the netlink socket, like
lookup of the platform cache).
That basically means we must always process events in an idle
handler. That is not too strong a limitation, because we anyway don't
know the call context in which the platform event is emitted, and we
should avoid unguarded recursive calls into platform.
Otherwise, we hit an assertion/crash in nm-iface-helper:
1 raise()
2 abort()
3 g_assertion_message()
4 g_assertion_message_expr()
5 do_delete_object()
6 ip6_address_delete()
>>> 7 nm_platform_ip6_address_delete()
8 nm_platform_ip6_address_sync()
9 nm_ip6_config_commit()
10 ndisc_config_changed()
11 ffi_call_unix64()
12 ffi_call()
13 g_cclosure_marshal_generic_va()
14 _g_closure_invoke_va()
15 g_signal_emit_valist()
16 g_signal_emit()
>>> 17 nm_ndisc_dad_failed()
18 ffi_call_unix64()
19 ffi_call()
20 g_cclosure_marshal_generic()
21 g_closure_invoke()
22 signal_emit_unlocked_R()
23 g_signal_emit_valist()
24 g_signal_emit()
>>> 25 nm_platform_cache_update_emit_signal()
26 event_handler_recvmsgs()
27 event_handler_read_netlink()
28 delayed_action_handle_one()
29 delayed_action_handle_all()
30 do_delete_object()
31 ip6_address_delete()
32 nm_platform_ip6_address_delete()
33 nm_platform_ip6_address_sync()
>>> 34 nm_ip6_config_commit()
35 ndisc_config_changed()
36 ffi_call_unix64()
37 ffi_call()
38 g_cclosure_marshal_generic_va()
39 _g_closure_invoke_va()
40 g_signal_emit_valist()
41 g_signal_emit()
42 check_timestamps()
43 receive_ra()
44 ndp_call_eventfd_handler()
45 ndp_callall_eventfd_handler()
46 event_ready()
47 g_main_context_dispatch()
48 g_main_context_iterate.isra.22()
49 g_main_loop_run()
>>> 50 main()
NMPlatform already has a check to assert against recursive calls
in delayed_action_handle_all():
g_return_val_if_fail (priv->delayed_action.is_handling == 0, FALSE);
priv->delayed_action.is_handling++;
...
priv->delayed_action.is_handling--;
Fixes: f85728ecff
https://bugzilla.redhat.com/show_bug.cgi?id=1546656
When NM quits it destroys all singletons including NMOvsdb, which
invokes callbacks for every pending method call. During shutdown,
extra care must be taken not to access objects that are already in an
inconsistent state; for example here, the callback changes the device
state, and this causes an access to data that has already been
cleared:
#0 _g_log_abort (breakpoint=breakpoint@entry=1) at gmessages.c:554
#1 g_logv (log_domain=0x5635653b6817 "NetworkManager", log_level=G_LOG_LEVEL_CRITICAL, format=<optimized out>, args=args@entry=0x7fffb4b2c1e0) at gmessages.c:1362
#2 g_log (log_domain=log_domain@entry=0x5635653b6817 "NetworkManager", log_level=log_level@entry=G_LOG_LEVEL_CRITICAL, format=format@entry=0x7fbb3f58fa4a "%s: assertion '%s' failed") at gmessages.c:1403
#3 g_return_if_fail_warning (log_domain=log_domain@entry=0x5635653b6817 "NetworkManager", pretty_function=pretty_function@entry=0x5635653b6b00 <__func__.34463> "nm_device_factory_manager_find_factory_for_connection", expression=expression@entry=0x5635653b6719 "factories_by_setting") at gmessages.c:2702
#4 nm_device_factory_manager_find_factory_for_connection (connection=connection@entry=0x56356627e0e0) at src/devices/nm-device-factory.c:243
#5 nm_manager_get_connection_iface (self=0x563566241080 [NMManager], connection=connection@entry=0x56356627e0e0, out_parent=out_parent@entry=0x0, error=error@entry=0x0) at src/nm-manager.c:1458
#6 check_connection_compatible (self=<optimized out>, connection=0x56356627e0e0) at src/devices/nm-device.c:4679
#7 check_connection_compatible (device=0x56356647b1b0 [NMDeviceOvsInterface], connection=0x56356627e0e0) at src/devices/ovs/nm-device-ovs-interface.c:95
#8 _nm_device_check_connection_available (self=0x56356647b1b0 [NMDeviceOvsInterface], connection=0x56356627e0e0, flags=NM_DEVICE_CHECK_CON_AVAILABLE_NONE, specific_object=0x0) at src/devices/nm-device.c:12102
#9 nm_device_check_connection_available (self=self@entry=0x56356647b1b0 [NMDeviceOvsInterface], connection=0x56356627e0e0, flags=flags@entry=NM_DEVICE_CHECK_CON_AVAILABLE_NONE, specific_object=specific_object@entry=0x0) at src/devices/nm-device.c:12131
#10 nm_device_recheck_available_connections (self=self@entry=0x56356647b1b0 [NMDeviceOvsInterface]) at src/devices/nm-device.c:12238
#11 _set_state_full (self=self@entry=0x56356647b1b0 [NMDeviceOvsInterface], state=state@entry=NM_DEVICE_STATE_FAILED, reason=reason@entry=NM_DEVICE_STATE_REASON_OVSDB_FAILED, quitting=quitting@entry=0) at src/devices/nm-device.c:13065
#12 nm_device_state_changed (self=self@entry=0x56356647b1b0 [NMDeviceOvsInterface], state=state@entry=NM_DEVICE_STATE_FAILED, reason=reason@entry=NM_DEVICE_STATE_REASON_OVSDB_FAILED) at src/devices/nm-device.c:13328
#13 del_iface_cb (error=<optimized out>, user_data=0x56356647b1b0) at src/devices/ovs/nm-device-ovs-port.c:160
#14 _transact_cb (self=self@entry=0x5635662b9ba0 [NMOvsdb], result=result@entry=0x0, error=0x563566259a10, user_data=user_data@entry=0x5635662ff320) at src/devices/ovs/nm-ovsdb.c:1449
#15 ovsdb_disconnect (self=self@entry=0x5635662b9ba0 [NMOvsdb]) at src/devices/ovs/nm-ovsdb.c:1331
#16 dispose (object=0x5635662b9ba0 [NMOvsdb]) at src/devices/ovs/nm-ovsdb.c:1558
#17 g_object_unref (_object=0x5635662b9ba0) at gobject.c:3293
#18 _nm_singleton_instance_destroy () at src/nm-core-utils.c:138
#19 _dl_fini () at dl-fini.c:253
#20 __run_exit_handlers (status=status@entry=0, listp=0x7fbb3e1ad6c8 <__exit_funcs>, run_list_atexit=run_list_atexit@entry=true) at exit.c:77
#21 __GI_exit (status=status@entry=0) at exit.c:99
#22 main (argc=1, argv=0x7fffb4b2cc38) at src/main.c:468
Add a new error code to indicate to callbacks that we are quitting and
no further action must be taken. This is preferable to having
additional references because it allows us to free the resources owned
by callbacks immediately, while references can easily create loops.
https://bugzilla.redhat.com/show_bug.cgi?id=1543871
Example: when DHCPv4 lease renewal fails and ipv4.may-fail is "yes",
also check whether we have a successful IPv6 configuration: if not, fail.
Previously we just ignored the status of the other IP family.
Convert the string representation of the ipv4.dhcp-client-id property to
GBytes already in NMDevice. Next, we will support more client-ID modes,
and we will need the NMDevice context to generate the client ID.
GByteArray is a mutable array of bytes. For every practical purpose, the hwaddr
property of NMDhcpClient is an immutable sequence of bytes. Thus, make it a
GBytes.
Avoid calling nm_device_iwd_set_dbus_object (device, NULL) if the
dbus_object was NULL already. Apparently GDBus guarantees that a
name-owner notification has a NULL old owner or a NULL new owner,
but it can also have both old and new owner NULL.
Reuse the apparent workaround from libnm/nm-client.c in which the
GDBusObjectManagerClient is recreated every time the name owner
pops up, instead of creating it once and using that object forever.
Resubscribe to all the signals on the new object. The initial
GDBusObjectManager we create is only used to listen for name-owner
changes.
There's nothing in the GDBus docs that justifies doing that, but there
doesn't seem to be any way to reliably receive all the signals from
the D-Bus service the normal way. The signals do appear in dbus-monitor,
and GDBus apparently subscribes to them with AddMatch()
correctly, but they sometimes won't be received by the client code
unless this workaround is applied.
While making changes to got_object_manager, don't destroy the
cancellable there, as it is supposed to be used throughout the
NMIwdManager's life.
Disconnect from NMManager signals in our cleanup; make sure the
NMManager singleton is not destroyed before we are by keeping a
reference until we've disconnected from its signals.
Add very simple periodic scanning because IWD itself only does periodic
scanning when it is in charge of autoconnecting (by policy). Since we
keep IWD out of the autoconnect state in order to use NM's autoconnect
logic, we need to request the scanning. The policy in this patch is to
use a simple 10 s period between the end of one scan and the request of
the next while not connected, and 20 s when connected. This is so that
users can expect similar results from both wifi backends but without
duplicating the more elaborate code in the wpa_supplicant backend which
can potentially be moved to a common superclass.
While the numerical values of IFA_F_SECONDARY and IFA_F_TEMPORARY
are identical, their meaning is not.
IFA_F_SECONDARY is only relevant for IPv4 addresses, while
IFA_F_TEMPORARY is only relevant for IPv6 addresses.
IFA_F_TEMPORARY is automatically set by kernel for the addresses
that it generates as part of IFA_F_MANAGETEMPADDR. It cannot be
actively set by user-space.
IFA_F_SECONDARY is automatically set by kernel depending on the order
in which the addresses for the same subnet are added.
This essentially reverts 8b4f11927 (core: avoid IFA_F_TEMPORARY alias for
IFA_F_SECONDARY).
In device_ipx_changed() we remember the addresses for which it appears
that DAD failed. Later, on an idle handler, we process them during
queued_ip6_config_change().
Note that nm_platform_ip6_address_sync() might very well decide to remove
some or all addresses and re-add them immediately afterwards. It might do so
to get the address priority/ordering right. At that point, we already
emit platform signals that the addresses disappeared, and track them in
dad6_failed_addrs.
Hence, later during queued_ip6_config_change() we must check again
whether the address is really not there and not still doing DAD.
Otherwise, we wrongly claim that DAD failed and remove the address,
generate a new one, and the same issue might happen again.
dad6_failed_addrs is populated with addresses from the platform cache.
Inside the cache, all addresses have addr_source NM_IP_CONFIG_SOURCE_KERNEL,
because the addr_source property of addresses is only used internally
by NetworkManager.