I hit an assertion failure running with valgrind on a busy machine.
Maybe the timeout is just not long enough for every case.
Increase it.
(cherry picked from commit 88c24ffc6a)
# random seed: R02S4ca8cfc3dace399c0f15b42411e45d2e
1..48
# Start of link tests
ok 1 /link/bogus
PASS: src/platform/tests/test-link-linux 1 /link/bogus
ok 2 /link/loopback
PASS: src/platform/tests/test-link-linux 2 /link/loopback
nmtst: initialize nmtst_get_rand() with NMTST_SEED_RAND=2697682474
ok 3 /link/internal
PASS: src/platform/tests/test-link-linux 3 /link/internal
ok 4 /link/external
PASS: src/platform/tests/test-link-linux 4 /link/external
# Start of software tests
./tools/run-nm-test.sh: line 193: 7589 Trace/breakpoint trap (core dumped) "${NMTST_DBUS_RUN_SESSION[@]}" "$TEST" "$@"
NMPlatformSignalAssert: src/platform/tests/test-link.c:298, test_slave(): failure to accept signal 0 times: 'link-changed-changed' ifindex 9 (1 times received)
ERROR: src/platform/tests/test-link-linux - too few tests run (expected 48, got 4)
ERROR: src/platform/tests/test-link-linux - exited with status 133 (terminated by signal 5?)
(cherry picked from commit 1ee6dea02f)
When building with assertions, these casts nm_assert() the expected
type. Otherwise, they are identical to a C cast.
Also, where possible, don't cast at all, but adjust
the type instead.
Also, there were a few missing casts.
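For illustration, a sketch of such a checked cast (the macro and the GObject
check are stand-ins; the real code uses NetworkManager's internal nm_assert()
and its own types):

    #include <glib-object.h>

    /* With assertions enabled this verifies the pointer's type;
     * with assertions disabled it is just a C cast. */
    #define EXAMPLE_OBJECT_CAST(ptr)                  \
        ({                                            \
            gpointer _ptr = (ptr);                    \
                                                      \
            g_assert (!_ptr || G_IS_OBJECT (_ptr));   \
            (GObject *) _ptr;                         \
        })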
(cherry picked from commit 7661ad64ba)
(cherry picked from commit ceeeb51e1d)
Settings plugins now return the connection that was reread from file
when adding a connection, which means that any agent-owned secret is
lost. Ensure that we don't forget agent-owned secrets by caching them
and re-adding them to the new connection returned by the plugins.
Fixes: 8a1d483ca8
Fixes: b4594af55e
https://bugzilla.gnome.org/show_bug.cgi?id=789383
(cherry picked from commit 62141d59cb)
(cherry picked from commit 0bd8b34725)
The function should not close the input file descriptor; however,
fdopen() associates the fd with the new stream, so that when the stream
is closed, the fd is too. The result is a double close(), and the
second call can in certain cases affect an unrelated fd.
Use a duplicate fd for the stream.
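A minimal sketch of the approach with plain POSIX calls (the helper name is
illustrative, not the function being fixed):

    #include <stdio.h>
    #include <unistd.h>

    /* Open a stdio stream on a duplicate of @fd: fclose() on the returned
     * stream closes only the duplicate, the caller's fd stays valid. */
    static FILE *
    open_stream_keep_fd (int fd)
    {
        int nfd = dup (fd);
        FILE *f;

        if (nfd < 0)
            return NULL;
        f = fdopen (nfd, "r");
        if (!f)
            close (nfd);
        return f;
    }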
Fixes: 1d9bdad1df
https://bugzilla.redhat.com/show_bug.cgi?id=1451236
(cherry picked from commit 597072296a)
The bus manager takes extra references to the GDBusConnection every
time g_dbus_object_manager_server_get_connection() is called,
preventing its disposal once the connection is closed. This causes a
leak for each DHCP event.
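A sketch of the leak-free pattern (the surrounding function is made up): the
getter returns a new reference, which must be dropped after use:

    #include <gio/gio.h>

    static gboolean
    emit_on_connection (GDBusObjectManagerServer *manager)
    {
        /* g_dbus_object_manager_server_get_connection() returns a new
         * reference; g_autoptr releases it when the variable goes out of
         * scope, so repeated calls no longer pin the connection. */
        g_autoptr(GDBusConnection) connection =
            g_dbus_object_manager_server_get_connection (manager);

        if (!connection)
            return FALSE;
        /* ... emit the D-Bus signal on @connection ... */
        return TRUE;
    }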
https://bugzilla.redhat.com/show_bug.cgi?id=1461643
(cherry picked from commit 5b81d40338)
If unrealize() failed we returned without thawing notify signals. Fix
this by moving g_object_freeze_notify() after the
unrealization/deletion but before the properties are reset in
unrealize_notify().
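A sketch of the resulting ordering (names are illustrative): freeze only
after the step that may fail, so that every g_object_freeze_notify() is
guaranteed a matching g_object_thaw_notify():

    #include <glib-object.h>

    static gboolean
    unrealize_and_reset (GObject *self,
                         gboolean (*do_unrealize) (GObject *, GError **),
                         GError **error)
    {
        if (!do_unrealize (self, error))
            return FALSE;     /* nothing frozen yet, nothing to thaw */

        g_object_freeze_notify (self);
        /* ... reset properties; each reset may emit "notify" ... */
        g_object_thaw_notify (self);
        return TRUE;
    }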
Fixes: a93807c288
(cherry picked from commit 24a7f88bc5)
Zero is a valid route metric and distinct from -1, which means unspecified.
Fix reader and writer.
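A hedged sketch of the reader side (the helper is hypothetical, modeled on an
ifcfg-style METRIC value): an absent value maps to -1 ("unspecified") while an
explicit 0 is kept as a valid metric:

    #include <glib.h>

    static gint64
    parse_route_metric (const char *value)
    {
        if (!value || !value[0])
            return -1;                              /* unspecified */
        return g_ascii_strtoll (value, NULL, 10);   /* 0 is a valid metric */
    }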
Fixes: e374923bbe
(cherry picked from commit 099be8e4db)
Since commit 6845b9b80a ("device: delay startup complete
until device is initialized in platform"), we also wait
for devices that are still initializing platform/UDEV.
Obviously, that only applies to realized devices.
Otherwise, an unrealized device is going to block startup complete.
Fixes: 6845b9b80a
(cherry picked from commit 9ad8010fe0)
The --timeout command line option is a custom feature added by some
Linux distributions (Fedora). Passing that command line argument will
make dhclient fail if the binary does not support it, preventing
activation of DHCP-based connections.
Worse, the option was only recently renamed from "-timeout", so
we are currently incompatible with CentOS, Red Hat and older
versions of Fedora too.
Leverage the "timeout" option in the dhclient config file: it produces
the expected behavior and is universally supported.
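For illustration, the config file counterpart could look like this (the
concrete value is only an example):

    # dhclient.conf: give up after 45 seconds instead of passing --timeout
    timeout 45;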
Fixes test: dhcp-timeout
Fixes: fa46736013
https://bugzilla.redhat.com/show_bug.cgi?id=1491243
(cherry picked from commit 1cb4832f09)
A typo in the new dhcp-timeout option caused the dhclient daemon to exit
with an error when the dhcp-timeout option was specified.
This prevented DHCP connections from being brought up.
Fixes: 82ef497cc9
(cherry picked from commit fa46736013)
When comparing a platform route with a route from configuration, we
must translate the value of rt_source.
This fixes the CI test @ipv6_preserve_cached_routes.
If the slave is 'external' we should never touch it, in particular we
should not release the link from its master; we only have to remove it
from the master's list.
https://bugzilla.redhat.com/show_bug.cgi?id=1442361
(cherry picked from commit 981f90e324)
Previously, if a master device had internal state 'managed', we would
promote the slave to 'managed' as well. However,
- if the slave is 'external', it should stay as is because we don't
want to start managing it
- if the slave is 'assumed', it will become managed when the
activation succeeds, so it's not necessary to do it here
Fixes: 850c977953
(cherry picked from commit 9e99590508)
Software devices don't have a permanent hardware address and thus it
doesn't make sense to enforce the 'fake' (generated) permanent one
when cloned-mac-address=permanent. Also, setting the fake permanent
address on bond devices prevents them from inheriting the first slave's
hardware address, so let's just skip setting the MAC when
cloned-mac-address=permanent and there is no real permanent address.
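A rough sketch of the check (all names are hypothetical, not NetworkManager's
actual internals):

    #include <glib.h>

    /* Whether a MAC address should be programmed for
     * cloned-mac-address=permanent. */
    static gboolean
    should_set_permanent_mac (const char *cloned_mac_setting,
                              gboolean    has_real_permanent_address)
    {
        if (g_strcmp0 (cloned_mac_setting, "permanent") != 0)
            return TRUE;    /* other modes are handled elsewhere */

        /* Software devices (e.g. bonds) only have a fake, generated
         * "permanent" address; programming it would keep a bond from
         * inheriting the MAC of its first slave, so skip it. */
        return has_real_permanent_address;
    }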
https://bugzilla.redhat.com/show_bug.cgi?id=1472965
(cherry picked from commit 2f4dfd0f2e)
The return value of the delete methods indicates whether the object
was actually deleted. That is questionable behavior, because if the netlink
request succeeds, there is little point in checking against the platform cache.
As it is, it is racy.
Anyway, the previously returned value was plainly wrong.
The fix also uncovers another platform bug, which currently breaks the
route tests. That will be fixed next.
(cherry picked from commit 5b09f7151b)
The DNS manager drops from the search list domains that are public
suffixes to prevent a possible domain hijack when using two-label
hostnames [1].
This is a problem now that every single-label domain can be a TLD,
since it means that such domains can't be used in the search list.
While it's useful to apply such a restriction to the domain
automatically derived from the system hostname, it seems wrong to drop
domains specified by users in the configuration or provided by DHCP.
This commit keeps the public-suffix check only for the
hostname-derived domain.
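A small sketch of the resulting rule (names are hypothetical):

    #include <glib.h>

    /* Drop a search domain for being a public suffix only when it was
     * derived from the system hostname; user-configured or DHCP-provided
     * domains are always kept. */
    static gboolean
    keep_search_domain (const char *domain,
                        gboolean    derived_from_hostname,
                        gboolean  (*is_public_suffix) (const char *domain))
    {
        return !(derived_from_hostname && is_public_suffix (domain));
    }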
[1] https://bugzilla.redhat.com/show_bug.cgi?id=812394
https://bugzilla.redhat.com/show_bug.cgi?id=1404350
(cherry picked from commit 5aa22ed8c9)
In commit d405cfd908, parsing of the "interface"
statement was introduced. However, this leads to incomplete parsing of the
"request" entry if one of the lines in the "request" entry starts with the
word "interface". For example, the default configuration of the openSUSE
distribution:
request subnet-mask, broadcast-address, routers,
rfc3442-classless-static-routes,
interface-mtu, host-name, domain-name, domain-search,
domain-name-servers, nis-domain, nis-servers,
nds-context, nds-servers, nds-tree-name,
netbios-name-servers, netbios-dd-server,
netbios-node-type, netbios-scope, ntp-servers;
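A sketch of a stricter match (the helper name is made up): treat a line as an
"interface" statement only when the keyword is followed by a separator, so
option names such as "interface-mtu" inside a "request" block no longer match:

    #include <glib.h>
    #include <string.h>

    static gboolean
    line_is_interface_statement (const char *line)
    {
        const char *rest;

        if (!g_str_has_prefix (line, "interface"))
            return FALSE;
        rest = line + strlen ("interface");
        /* require a word boundary, e.g.: interface "eth0" { ... } */
        return rest[0] == ' ' || rest[0] == '\t' || rest[0] == '"';
    }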
Fixes: d405cfd908
https://bugzilla.opensuse.org/show_bug.cgi?id=1047004
https://mail.gnome.org/archives/networkmanager-list/2017-July/msg00015.html
(cherry picked from commit 3646ed083d)
Since commit 0922a17738 ("manager: avoid that auto-activations
preempt user activations") the manager doesn't allow an internal
activation to disconnect the same connection already active on the
device. Thus, during a rollback we must ensure that the device is
deactivated first.
Fixes: 0922a17738
(cherry picked from commit 348959cfa2)
The default value for miimon, when missing in the setting, is 0 if
arp_interval is != 0, and 100 otherwise. So, when generating a
connection, let's ignore miimon=0 (which means that miimon is
disabled) and accept any other value. Adding miimon=100 does not cause
any harm to the connection assumption.
While at it, slightly improve the code: ignore_if_zero() is not useful
for 'updelay', 'downdelay', 'arp_interval' because zero is their default
value, so introduce a new function that checks if the value is the
default (and specially handles 'miimon').
Reported-by: Taketo Kabe <rkabe@vega.pgw.jp>
https://bugzilla.redhat.com/show_bug.cgi?id=1463077
(cherry picked from commit 92fc109183)
When a user forces up a connection on a device, mark the
device as managed earlier: this allows proper clean-up on the device also
when it was previously unmanaged or assumed.
This avoids skipping IPv6LL address generation when it was actually
needed.
Fixes: adbf383628
https://bugzilla.redhat.com/show_bug.cgi?id=1452046
(cherry picked from commit d4a033c4ad)
In _internal_activate_device(), we try to find an existing master AC
for the slave AC, and we create a new one in case of failure. The
master AC may already exist, but it may not be detected by
find_master() because it is undergoing authorization.
The result is that we auto-activate the master when there is already a
user activation in place, and the auto-activation will cancel the user
one. This is bad, as user-activation should always have precedence.
To fix this, introduce a last-minute check before activating internal
connections.
https://bugzilla.redhat.com/show_bug.cgi?id=1450219
(cherry picked from commit 0922a17738)
No need to clone the list anymore. Unfortunately, GPtrArray is not NULL
terminated (without extra effort), so we have to pass along the GPtrArray
instance to convey the length.
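A minimal sketch of the calling convention (the callee is made up): hand over
the element array together with its length instead of cloning into a
NULL-terminated copy:

    #include <glib.h>

    static void
    consume_items (gpointer *items, guint len)
    {
        guint i;

        for (i = 0; i < len; i++) {
            /* ... use items[i] ... */
        }
    }

    /* caller: consume_items (ptr_array->pdata, ptr_array->len); */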
(cherry picked from commit 19a98c6f61)
A negative ipv4.dns-priority or ipv6.dns-priority means that the DNS information
of the connection is configured exclusively. With systemd-resolved, that means
we must explicitly unset the configuration from other interfaces.
https://bugzilla.gnome.org/show_bug.cgi?id=783569
(cherry picked from commit 70792e51d9)
The code was correct previously, but it was confusing to me,
because
- once @skip gets set to TRUE, it stays TRUE for the rest
of the loop.
- in each additional skipped iteration, it would still set
plugin_confs[i] to NULL. Which is not wrong, but confusing.
- it would set "prev_prio = prio;" in each iteration.
After @skip is set to TRUE, that doesn't matter anymore,
but is confusing. Before @skip is set to TRUE it also
doesn't really matter to set it more than once, because
we only care about the very first priority.
- @skip sounded to me like only the current iteration would
be skipped. But really all remaining ones will be skipped too.
(cherry picked from commit aa347182bb)
For externally managed interfaces, we create an in-memory connection
and keep the device with sys-iface-state=external.
When the user actively modifies the connection, we persist it to
storage. But we also must take over managing the device.
One problem is that nm_device_reapply() errors out if the device
is still activating. It's unclear how to reapply the connection
while the device is in the process of activation. So, if the user
modifies the created connection very quickly, reapplying the settings
will fail.
https://bugzilla.redhat.com/show_bug.cgi?id=1462223
(cherry picked from commit 10c0632df0)
We react to changes of NMSettingsConnection.flags, so that we can update
from an external activation to a managed one.
However, previously we would only register the _settings_connection_notify_flags
callback during _set_settings_connection(). So, if via constructor properties
we first set PROP_SETTINGS_CONNECTION and later PROP_ACTIVATION_TYPE, we wouldn't
register the callback.
(cherry picked from commit b84da25713)
When IPv4 addresses are synchronized to platform, the order of IPv4
addresses matters because the first address is considered the primary
one. Thus, nm_ip4_config_capture() should put the primary address
first; otherwise, during synchronization addresses will be removed and
added back with a different primary/secondary role.
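A sketch of the reordering (how the primary entry is identified is left out
and the function name is made up): move the primary entry to the front while
keeping the relative order of the others:

    #include <glib.h>
    #include <string.h>

    static void
    move_primary_first (GPtrArray *addresses, guint primary_idx)
    {
        gpointer primary;

        if (primary_idx == 0 || primary_idx >= addresses->len)
            return;
        primary = addresses->pdata[primary_idx];
        memmove (&addresses->pdata[1], &addresses->pdata[0],
                 primary_idx * sizeof (gpointer));
        addresses->pdata[0] = primary;
    }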
https://bugzilla.redhat.com/show_bug.cgi?id=1459813
(cherry picked from commit b6fa87a4c0)
Since commit 2b51d3967 ("device: merge branch 'th/device-mtu-bgo777251'"),
we always set the MTU for certain device types during activation. Even
if the MTU is neither specified via the connection nor other means, like
DHCP.
Revert that change. On activation, if nothing explicitly configures the
MTU, leave it unchanged. This is similar to what we do with Ethernet's
cloned-mac-address, which has a default value of "preserve".
So, as a last resort, the default value for MTU is now 0 (don't change),
instead of depending on the device type.
Note that you also can override the default value in global
configuration via NetworkManager.conf.
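For illustration only (assuming ethernet.mtu is accepted as a connection
default and with an arbitrary section name), such an override could look
like:

    [connection-mtu-default]
    ethernet.mtu=9000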
This behavior makes sense, because whenever NM actively resets the MTU,
it remembers the previous value and restores it when deactivating
the connection. That wasn't implemented before 2b51d3967, and the
MTU would depend on which connection was previously active. That
is no longer an issue as the MTU gets reset when deactivating.
https://bugzilla.redhat.com/show_bug.cgi?id=1460760
(cherry picked from commit 4ca3002b86)
- don't use assert but be more graceful with g_return_if_fail().
- in case of failure, don't log a debug message after the warning.
One message is sufficient, drop "pppd pid %d cleaned up".
- print GPid type as long long.
- increase log level to warning. pppd dying unexpectedly warrants a
warning.
(cherry picked from commit 250e723951)
ppp_exit_code() does too much or too little. Either it should log
about all reasons why pppd exited, including signals, or it should
just do the status to string conversion. Split it.
(cherry picked from commit 3f64910b52)