Note that we should always set the source-tag of our GTask.
This allows us to assert that the user calls the matching
_finish() method for the task.
The plain g_task_new() does not have a source-tag argument. Hence, we would
always need to explicitly call g_task_set_source_tag().
Likewise, to check the source tag, we would always need to write
g_return_val_if_fail (g_task_is_valid (result, self), FALSE);
g_return_val_if_fail (g_async_result_is_tagged (result, tag), FALSE);
Actually, g_async_result_is_tagged() uses the GAsyncResultIface to
call iface->is_tagged(). This has unnecessary overhead, so we should
just call g_task_get_source_tag() directly.
Add helper functions for that.
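A minimal sketch of what such a helper could look like (the actual name and
signature in libnm's shared code may differ); it combines the validity check
with a direct source-tag comparison, avoiding the GAsyncResultIface
indirection of g_async_result_is_tagged():

  #include <gio/gio.h>

  /* Hypothetical helper; the real helper may be named differently. */
  static inline gboolean
  _task_is_valid (gpointer task, gpointer source_object, gpointer source_tag)
  {
      return g_task_is_valid (task, source_object)
             && g_task_get_source_tag (G_TASK (task)) == source_tag;
  }

  /* Typical use in a _finish() implementation (illustrative function name).
   * The same tag pointer must have been set with g_task_set_source_tag()
   * in the corresponding _async() function. */
  gboolean
  example_client_do_finish (GObject *self, GAsyncResult *result, GError **error)
  {
      g_return_val_if_fail (_task_is_valid (result, self, example_client_do_finish), FALSE);

      return g_task_propagate_boolean (G_TASK (result), error);
  }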
nm_client_add_and_activate_connection_async() must be completed by
nm_client_add_and_activate_connection_finish().
Fixes: be8060f42f ('libnm: add an object-creation-failed test')
Note that the server always returns TRUE for the boolean return value
of ReloadConnections. Hence, this should not change behavior in practice,
because the server would never have returned FALSE.
However, it does change the behavior of the API. It's odd that the function
might return %FALSE without setting the error output. It's also not clear
what the boolean value of the "ReloadConnections" D-Bus method would mean
anyway.
nm_remote_settings_load_connections() and nm_remote_settings_load_connections_async()
behave inconsistently.
It's unexpected that a FALSE return value may leave @error unset.
Note that before commit 22e830f046 ('settings/d-bus: fix boolean
return value of "LoadConnections"'), the server boolean response
would have been bogus anyway (at least for some versions).
Unify the behavior, and ignore the boolean return value.
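A minimal sketch of the unified pattern (proxy handling and the function
name are illustrative, not the actual libnm code): success is determined
solely by whether the call set @error, and the boolean payload of the reply
is discarded:

  static gboolean
  reload_connections_sync (GDBusProxy *proxy, GCancellable *cancellable, GError **error)
  {
      GVariant *ret;

      ret = g_dbus_proxy_call_sync (proxy, "ReloadConnections", NULL,
                                    G_DBUS_CALL_FLAGS_NONE, -1, cancellable, error);
      if (!ret)
          return FALSE;

      /* the reply carries a boolean, but its meaning is unclear; ignore it. */
      g_variant_unref (ret);
      return TRUE;
  }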
A function that accepts a floating variant must consume it.
Fixes: 7691fe5753 ('libnm: add new functions allowing passing options to RequestScan() D-Bus call')
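A minimal sketch of the rule (the function is illustrative): an API
documented to accept a possibly floating GVariant must sink it, so that
callers can pass the result of g_variant_new() directly without leaking
or double-freeing:

  #include <glib.h>

  static void
  take_scan_options (GVariant *options)   /* illustrative name */
  {
      if (!options)
          return;

      /* sink a floating reference (or take a normal reference) ... */
      g_variant_ref_sink (options);

      /* ... use @options, e.g. attach it to the D-Bus call ... */

      g_variant_unref (options);
  }

  /* The caller can now safely do:
   *   take_scan_options (g_variant_new ("(s)", "hello"));
   */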
If the 802.1X authentication fails and 802-1x.optional is set,
continue with activation. In this case, subscribe to the auth-state
supplicant property so that any dynamic IP method can be restarted
when the authentication succeeds. This is needed because, upon successful
authentication, the switch could have changed the VLAN we are connected to.
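A hedged sketch of the subscription (the property name and callback are
assumptions, not the actual device code):

  /* Watch the supplicant interface's auth-state so that dynamic IP
   * methods can be restarted once authentication later succeeds. */
  static void
  auth_state_changed (GObject *sup_iface, GParamSpec *pspec, gpointer user_data)
  {
      /* if authentication just succeeded, restart DHCP/autoconf here,
       * since the switch may have moved us to a different VLAN. */
  }

  static void
  watch_optional_auth (GObject *sup_iface, gpointer device)
  {
      g_signal_connect (sup_iface, "notify::auth-state",
                        G_CALLBACK (auth_state_changed), device);
  }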
Refactor reading the phase2 authentication method for 802.1X.
Previously the reader only considered the first item of the
space-separated list; but since the 802.1x setting can hold distinct
phase2-auth and phase2-autheap properties - both mapped to the same
ifcfg-rh variable - we should parse the whole list. We only emit a
warning when multiple methods of the same type are found to avoid
breaking existing manually written ifcfg files.
Moreover, the reader implemented different checks for each of the
outer tunneled methods (PEAP, TTLS and FAST); drop those checks and
accept whatever the 802.1X setting also considers valid. Note that
some combinations that are valid in principle, like PEAP + EAP-MD5,
were rejected by the reader before.
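A hedged sketch of the parsing approach (identifiers are illustrative, not
the actual ifcfg-rh reader code): the whole space-separated value is split,
and each token is sorted into either the phase2-auth bucket (non-EAP
methods) or the phase2-autheap bucket ("EAP-" prefixed methods); the caller
warns if a bucket ends up with more than one entry.

  #include <glib.h>

  static void
  parse_phase2_methods (const char *value, GPtrArray *auth, GPtrArray *autheap)
  {
      char **tokens = g_strsplit_set (value, " ", -1);
      guint i;

      for (i = 0; tokens && tokens[i]; i++) {
          const char *tok = tokens[i];

          if (!tok[0])
              continue;
          if (g_str_has_prefix (tok, "EAP-"))
              g_ptr_array_add (autheap, g_ascii_strdown (tok + 4, -1));
          else
              g_ptr_array_add (auth, g_ascii_strdown (tok, -1));
      }
      g_strfreev (tokens);
  }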
The pending action gets logged. We should not log plain pointer
values because they may be used to defeat ASLR.
Instead, construct the pending action using the "version_id". This
number is also unique and serves the purpose well enough. With debug logging
you can still grep the log for the corresponding active connection (and
anyway it's obvious from the context).
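A hedged sketch (the helper names approximate the internal API): the
pending-action name is derived from the activation's version_id rather
than the NMActiveConnection pointer, so the logged name stays unique
without exposing addresses.

  char name[100];

  g_snprintf (name, sizeof (name),
              "activation-%" G_GUINT64_FORMAT,
              nm_active_connection_version_id_get (active));
  nm_device_add_pending_action (device, name, TRUE);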
These "pending-actions" only have one purpose: to mark the device
as busy and thereby delay "startup complete" to be reached. That
in turn delays "NetworkManager-wait-online" service.
Of course, "NetworkManager-wait-online" waits for some form of readiness
and is not extensively configurable (e.g. you cannot exclude devices from
being waited). However, the intent is to wait that all devices are "settled".
That means among others, that the timeouts waiting for carrier and Wi-Fi scan
results passed, and devices either don't have a connection profile to autoactivate,
or they autoactivated profiles and are in state "connected".
A major point here is that the device is considered ready once it
reaches the state "connected". Note that if you configure both IPv4 and
IPv6 addressing modes, then "ipv4.may-fail=yes" and "ipv6.may-fail=yes"
mean that the device is considered fully activated once one address
family completes. Again, this is not very configurable, but by setting
"ipv6.may-fail=no" you can require that the device has indeed completed
IPv6 addressing.
Now, the determining factor for declaring "startup complete" is whether the
device is in state "connected". That may or may not mean that DHCPv4,
autoconf or DHCPv6 completed, as it depends on the overall state of the
device. So, it is wrong to have distinct pending actions for these operations.
Remove them.
This fixes cases where we would wrongly wait too long before declaring startup
complete. But it is also a change in behavior.
The default timeout really only exists to abort the test when it has
definitely failed, to avoid keeping it hanging forever. It's not part
of any strict assertion and should be large.
Fixes: bb4b749595 ('clients/tests: don't wait for first job before scheduling parallel jobs')
Previously, the test would kick off 15 processes in parallel, but
the first job in the queue would block more processes from being
started.
That is, async_start() would only start 15 processes, but since none of
them were reaped before async_wait() was called, no more than 15 jobs
were running during the start phase. That is not a real issue, because
the start phase is non-blocking and queues all the jobs quickly; it's
not really expected that many processes already complete during that time.
Anyway, this was a bit ugly.
The bigger problem is that async_wait() would always block for the
first job to complete, before starting more processes. That means,
if the first job in the queue takes unusually long, then this blocks
other processes from getting reaped and new processes from being
started.
Instead, don't block on only one job, but poll each of them in turn for a
short amount of time. Whichever process exits first will be completed
and more jobs will be started.
In fact, in the current setup it's hard to notice any difference,
because all nmcli invocations take about the same time and are
relatively fast. That this approach parallelizes better can be seen
when the runtime of the jobs varies more strongly (and some invocations take
a notably longer time). As we later want to run nmcli under valgrind,
this probably will make a difference.
An alternative would be not to poll()/wait() for child processes,
but somehow get notified. For example, we could use a GMainContext
and watch child processes. But that's probably more complicated
to do, so let's keep the naive approach with polling.
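For illustration only (the test itself is Python), a C sketch of the polling
idea: briefly poll every running child in turn and reap whichever exits
first, instead of blocking on the job at the head of the queue.

  #include <stddef.h>
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <time.h>

  static pid_t
  reap_any (const pid_t *pids, size_t n)
  {
      struct timespec wait_a_bit = { .tv_nsec = 50 * 1000 * 1000 };
      size_t i;
      int status;

      for (;;) {
          for (i = 0; i < n; i++) {
              if (pids[i] > 0 && waitpid (pids[i], &status, WNOHANG) == pids[i])
                  return pids[i];   /* this job finished first */
          }
          /* nothing exited yet; sleep briefly and poll again */
          nanosleep (&wait_a_bit, NULL);
      }
  }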
We try to set the MTU from the connection only once, so as not to
interfere with manual changes done by the user.
If at some point the parent interface temporarily changes its MTU to a
lower value (for example, because the connection was reactivated), the
kernel will also lower the MTU on the child interface and we will never
update it again.
Add a workaround for this: if we detect that the MTU we want to set
from the connection is higher than the allowed one, go into a state where
we follow the parent MTU until it is possible to set the desired MTU
again. This is a bit ugly, but I can't think of any nicer way to do it.
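A hedged sketch of the workaround logic (names illustrative): if the
desired MTU from the connection exceeds what the parent currently allows,
fall back to following the parent MTU, and retry the desired value
whenever the parent MTU changes.

  static guint32
  mtu_to_set (guint32 desired_mtu, guint32 parent_mtu, gboolean *out_follow_parent)
  {
      if (parent_mtu && desired_mtu > parent_mtu) {
          *out_follow_parent = TRUE;
          return parent_mtu;
      }
      *out_follow_parent = FALSE;
      return desired_mtu;
  }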
https://bugzilla.redhat.com/show_bug.cgi?id=1751079
A MACsec connection doesn't have an ordering dependency with its
parent connection and so it's possible that the parent gets activated
later and sets a greater MTU than the original one.
It is reasonable and useful to keep the MACsec MTU configured by
default as the maximum allowed by the parent interface, that is the
parent MTU minus the encapsulation overhead (32). The user can of
course override this by setting an explicit value in the
connection. We already do something similar for VLANs.
https://bugzilla.redhat.com/show_bug.cgi?id=1723690
Introduce a generic function to set the MTU based on the parent's one. Also
define a device-specific @mtu_parent_delta value that specifies the
difference from the parent MTU that should be set by default. For VLAN it
is zero but other interface types (for example MACsec) require a
positive value due to encapsulation overhead.
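A hedged sketch of the generic helper (names illustrative): the
per-device-type delta is subtracted from the parent's MTU, 0 for VLAN and
32 for MACsec to account for the encapsulation overhead.

  static guint32
  default_mtu_from_parent (guint32 parent_mtu, guint32 mtu_parent_delta)
  {
      if (parent_mtu <= mtu_parent_delta)
          return 0;   /* leave the MTU unconfigured */
      return parent_mtu - mtu_parent_delta;
  }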
Since commit 159ff23268 ('dhcp/dhclient-utils: skip over
dhclient.conf blocks') we skip blocks enclosed in lines containing '{'
and '}' because NM should ignore 'lease', 'alias' and other
declarations. However, conditional statements seem useful and should
not be skipped.
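A hedged sketch of the relaxed rule (not the actual
nm-dhcp-dhclient-utils.c code): keep lines that open a conditional, per
dhcp-eval(5), while still skipping other brace-enclosed declarations such
as 'lease' or 'alias'.

  #include <glib.h>

  static gboolean
  line_opens_conditional (const char *line)
  {
      return g_str_has_prefix (line, "if ")
             || g_str_has_prefix (line, "else")
             || g_str_has_prefix (line, "elsif");
  }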
https://bugzilla.redhat.com/show_bug.cgi?id=1758550
PMF can be used with SAE, so allow it. Actually, it is required according
to the WPA3 specification, but there are implementations that don't
require it (hostapd can be configured in such a way); so let's not
make it mandatory for WPA3.
Fixes: 6640fb4b36 ('supplicant: add support for SAE key management')
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/issues/257
The script is run for every dispatcher event. Most of the events are not
actually relevant, and we just need to determine that there is nothing
to do and quit.
Avoid calling "dirname" and "basename".
The supported ifcfg file has a very specific form. We can just check
(and parse) it with one regular expression in bash.
A shell script is executed line-by-line. Note that for most dispatcher
events, "10-ifcfg-rh-routes.sh" has nothing to do and will just quit.
Move those checks earlier, to avoid bash executing code that is not
needed most of the time.
found by cppcheck
[src/devices/nm-device.c:3032] -> [src/devices/nm-device.c:3025]: (warning) Either the condition '!handle' is redundant or there is possible null pointer dereference: handle.
https://github.com/NetworkManager/NetworkManager/pull/352
The previous commit marks all synchronous libnm API as deprecated.
In practice, the macro _NM_DEPRECATED_SYNC_METHOD expands to
nothing, because there is no immediate urgency to force users
to migrate.
However, nm_client_check_connectivity() is especially bad: it
makes a synchronous call and then updates the content of the
cache artificially. Usually, NMClient's cache of D-Bus objects
is only updated by "PropertiesChanged" D-Bus signals.
nm_client_check_connectivity() instead will act on the response to
the "CheckConnectivity" D-Bus call -- a response that is picked
out of order from the ordered sequence of messages -- and will
update the cache instead of honoring the usual "PropertiesChanged"
signal.
I think such behavior is fundamentally broken. For a trivial property like
NM_CLIENT_CONNECTIVITY such behavior is odd at best. Note how applying
this approach to other functions (like nm_client_deactivate_connection(),
which would affect a much larger state) would not be feasible.
I also imagine it would be complicated to preserve this behavior when
reworking libnm, as I plan to do.
See also commit b799de281b ('libnm: update property in the manager
after connectivity check'), which introduced this behavior to "fix"
bgo#784629.