CC shared/nm-utils/libnm_core_libnm_core_la-nm-udev-utils.lo
In file included from ./shared/nm-utils/nm-glib.h:27:0,
from ./shared/nm-utils/nm-macros-internal.h:29,
from ./shared/nm-default.h:178,
from shared/nm-utils/nm-udev-utils.c:21:
shared/nm-utils/nm-udev-utils.c: In function ‘nm_udev_client_enumerate_new’:
./shared/nm-utils/gsystem-local-alloc.h:53:50: error: ‘to_free’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
GS_DEFINE_CLEANUP_FUNCTION(void*, gs_local_free, g_free)
^~~~~~
shared/nm-utils/nm-udev-utils.c:147:18: note: ‘to_free’ was declared here
gs_free char *to_free;
^~~~~~~
In file included from ./shared/nm-utils/nm-glib.h:27:0,
from ./shared/nm-utils/nm-macros-internal.h:29,
from ./shared/nm-default.h:178,
from shared/nm-utils/nm-udev-utils.c:21:
shared/nm-utils/nm-udev-utils.c: In function ‘nm_udev_client_new’:
./shared/nm-utils/gsystem-local-alloc.h:53:50: error: ‘to_free’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
GS_DEFINE_CLEANUP_FUNCTION(void*, gs_local_free, g_free)
^~~~~~
shared/nm-utils/nm-udev-utils.c:243:20: note: ‘to_free’ was declared here
gs_free char *to_free;
^~~~~~~
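The usual fix for this warning pattern is to initialize auto-cleanup variables; a minimal sketch of that idea (function and variable names are illustrative, not the actual code):

static char *
example_dup (const char *value, gboolean copy)
{
    /* gs_free g_free()s the variable when it goes out of scope;
     * initializing it to NULL keeps that cleanup valid on every
     * code path, which also silences -Wmaybe-uninitialized. */
    gs_free char *to_free = NULL;

    if (copy)
        to_free = g_strdup (value);

    return g_strdup (to_free ?: value);
}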
Fixes: e32839838e
(cherry picked from commit 0893c3756e)
../../src/nm-firewall-manager.c: In function ‘_handle_dbus_start’:
../../src/nm-firewall-manager.c:318:2: error: ‘dbus_method’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
g_dbus_proxy_call (priv->proxy,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
dbus_method,
~~~~~~~~~~~~
arg,
~~~~
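The fix boils down to making sure dbus_method is assigned on every path before the call, for instance by asserting on unexpected branches. A sketch of that shape (the branch names and D-Bus method strings below are made up, not the real ones):

const char *dbus_method;

switch (op) {                       /* illustrative dispatch */
case OP_ADD:
    dbus_method = "addEntry";       /* hypothetical method name */
    break;
case OP_REMOVE:
    dbus_method = "removeEntry";    /* hypothetical method name */
    break;
default:
    g_return_if_reached ();         /* every reachable path sets dbus_method */
}

g_dbus_proxy_call (priv->proxy, dbus_method, arg,
                   G_DBUS_CALL_FLAGS_NONE, -1, cancellable,
                   callback, user_data);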
Fixes: d8bf05d3e6
(cherry picked from commit 3ba614d696)
Since switching to libcurl for connectivity checks, we failed to build
with connectivity checking enabled on Travis. Fix that by installing the
required library (from trusty).
Fixes: 4e6967e33d
(cherry picked from commit 1f2b5a2032)
It's not the correct thing to do, but it is the same behavior we had
previously.
DAD is not even going to start until there's carrier, and the client would
just wait indefinitely. Ideally, the client would choose not to wait, but
currently there's no way for the client to discover what is going on.
https://bugzilla.redhat.com/show_bug.cgi?id=1446367
(cherry picked from commit bd9988f984)
Connectivity check functionality now has to be explicitly disabled if it
is not wanted: this prevents NM from silently being built without
connectivity check support on systems where libcurl is not installed.
(cherry picked from commit 4e6967e33d)
Recently we removed the libsoup dependency in favor of libcurl.
Connectivity checking was enabled by default when libsoup was
detected: do the same now when libcurl is detected.
(cherry picked from commit ad35fbf3a3)
If users wrote a FQDN in ipv4.dhcp-hostname presumably it's because
they really want to send the full value, not only the host part, so
let's send it as-is.
This obviously is a change in behavior, but only for users that have a
FQDN in ipv4.dhcp-hostname, where it's not clear if they really want the
domain to be stripped.
When the property is unset, we keep sending only the host part of the
system hostname to maintain backwards compatibility.
This commit aligns NM behavior to initscripts.
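A minimal sketch of the resulting behavior (helper name and code are illustrative, not the actual implementation):

/* Pick the hostname to send to the DHCP server (illustrative). */
static char *
hostname_to_send (const char *dhcp_hostname, const char *system_hostname)
{
    char *host, *dot;

    /* an explicitly set ipv4.dhcp-hostname (possibly a FQDN) is
     * sent exactly as the user wrote it */
    if (dhcp_hostname)
        return g_strdup (dhcp_hostname);

    if (!system_hostname)
        return NULL;

    /* otherwise keep the old behavior and send only the host part
     * of the system hostname */
    host = g_strdup (system_hostname);
    dot = strchr (host, '.');
    if (dot)
        *dot = '\0';
    return host;
}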
(cherry picked from commit cf5fab8f55)
Since the hostname and FQDN options are mutually exclusive, pass a string
and a boolean indicating whether we want to use the hostname or the FQDN
option.
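Sketched as a prototype (illustrative only; the real signature may differ):

/* hostname and FQDN are mutually exclusive: one string plus a flag
 * choosing between DHCP option 12 (host-name) and option 81 (FQDN) */
gboolean dhcp_client_set_hostname (NMDhcpClient *client,
                                   const char *hostname,
                                   gboolean use_fqdn);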
(cherry picked from commit d286aa9dfa)
Commit acf1067a allowed assuming connections on already managed
devices. However, in complex scenarios with layered connections, during
the startup of NetworkManager this could interfere with the connection
assumption based on saved state.
So, avoid re-assuming connections on already managed devices during
startup.
Fixes: acf1067a45
(cherry picked from commit b6b7d909f7)
Set the @was_active flag for external activations with DHCP, so that
DHCP is retried multiple times in case of failure, as we do for
managed connections when the lease expires and for assumed
connections.
Fixes test: renewal_gw_after_dhcp_outage_for_assumed_var1
Fixes: e3113fdc4b
(cherry picked from commit ddfeed4530)
We check the return value of _get_stable_id(); when it is NULL,
priv->ndisc would stay NULL too and we would later crash when
dereferencing @error.
Actually, _get_stable_id() can never return NULL, so replace the check
with an assertion.
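Sketched, the change has this shape (surrounding code simplified and argument list illustrative):

stable_id = _get_stable_id (self, connection, &stable_type);

/* _get_stable_id() cannot return NULL: assert that, instead of
 * branching on it and leaving priv->ndisc unset (which would later
 * lead to dereferencing an unset @error). */
g_return_val_if_fail (stable_id, FALSE);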
(cherry picked from commit 8b73812062)
src/devices/nm-device-ip-tunnel.c:257:8: warning: Branch condition evaluates to a garbage value
if (local4)
^~~~~~
src/devices/nm-device-ip-tunnel.c:264:8: warning: Branch condition evaluates to a garbage value
if (remote4)
^~~~~~~
(cherry picked from commit aaaefd827e)
Now that routes can include optional attributes, print the expected
syntax in case of parsing failure.
$ nmcli connection modify dummy ipv4.routes a
Error: failed to modify ipv4.routes: invalid route: Invalid IPv4
address 'a'. The valid syntax is: 'ip[/prefix] [next-hop] [metric]
[attribute=val]... [,ip[/prefix] ...]'.
(cherry picked from commit 00df57a066)
Most of the IPv6 methods require a non-tentative link local address
configured on the interface; we look at priv->ip6_config to determine
if such an address exists. If the configuration is out-of-sync, we may
proceed with configuration when the link-local address does not exist
or is still tentative, especially because we toggle the "disable_ipv6"
sysctl parameter just before, which clears all IPv6 addresses on the
interface.
Ensure that priv->ext_ip6_config_captured is up-to-date before
continuing with the IPv6 configuration, and use it to determine
whether suitable addresses are present.
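The kind of check intended, sketched against the kernel address flags (the real code inspects the captured NMIP6Config instead):

#include <glib.h>
#include <netinet/in.h>
#include <linux/if_addr.h>   /* IFA_F_TENTATIVE */

/* TRUE when at least one link-local address exists and has completed
 * DAD, i.e. is no longer tentative (illustrative helper). */
static gboolean
have_usable_link_local (const struct in6_addr *addrs,
                        const guint *ifa_flags,
                        guint n)
{
    guint i;

    for (i = 0; i < n; i++) {
        if (   IN6_IS_ADDR_LINKLOCAL (&addrs[i])
            && !(ifa_flags[i] & IFA_F_TENTATIVE))
            return TRUE;
    }
    return FALSE;
}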
Fixes test: @ipv6_set_ra_announced_mtu
Fixes: 8f4caab601
(cherry picked from commit 0461da2690)
update_ip6_config() also removes from the configuration the addresses
and routes that are no longer present externally, so it can't be called
before the changes are committed.
This reverts commit 8f4caab601.
(cherry picked from commit d626298b48)
Add an example Python script to show and set a setting's
user data. This is useful, as nmcli still doesn't support
user data.
(cherry picked from commit 447c766f52)
The user data values are exposed in shell variables with the
prefix "NM_USER_". The variable name is an encoded form of the
data key, consisting only of upper-case letters, digits, and underscores.
The alternative would be something like
NM_USER_1_KEY=my.keys.1
NM_USER_1_VAL='some value'
NM_USER_2_KEY=my.other.KEY.42
NM_USER_2_VAL='other value'
contrary to
NM_USER_MY__KEYS__1='some value'
NM_USER_MY__OTHER___K_E_Y__42='other value'
The advantage of the former, numbered scheme is that it may be easier to
find the key of a user-data entry. With the current implementation, the
shell script would have to decode the key, like the ifcfg-rh plugin
does.
However, user data keys are opaque identifiers for values. Usually, you
are not concerned with what the key is named; you already know it.
Hence, you don't need to write a shell script to decode the key name;
instead, you can use it directly:
if [ -n "${NM_USER_MY__OTHER___K_E_Y__42+x}" ]; then
    do_something_with_key "$NM_USER_MY__OTHER___K_E_Y__42"
fi
Otherwise, you'd first have to write a shell script to search
for the interesting key -- in this example "$NM_USER_2_KEY" -- before being
able to access the value "$NM_USER_2_VAL".
(cherry picked from commit 79be44d990)
nm_setting_user_set_data() rejects invalid keys and values, and
can fail. That is correct: like get_data(), which only returns valid
user data, this API never accepts invalid entries.
However, the g_object_set() API allows setting the hash directly but
cannot report errors for invalid values. This path is used to
initialize the value from D-Bus or keyfile, hence it is wrong
to emit g_critical() assertions for untrusted data.
It would also be wrong to silently drop all invalid data, because
then the user cannot get an error message to understand what happened.
The correct but cumbersome solution is to remember the invalid values
separately, so that verify() can report the setting as invalid.
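A fragment contrasting the two entry points as described above (hedged sketch; the property name macro is assumed to be NM_SETTING_USER_DATA):

/* the explicit setter validates and can fail with a proper error ... */
if (!nm_setting_user_set_data (s_user, key, val, &error))
    g_printerr ("rejected: %s\n", error->message);

/* ... while setting the whole hash via g_object_set() cannot return
 * an error; invalid entries must instead be kept aside so that
 * verify() later reports the setting as invalid */
g_object_set (s_user, NM_SETTING_USER_DATA, hash, NULL);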
(cherry picked from commit 1dbbf6fb03)
vpn.data, bond.options, and user.data encode their values directly as
keys in keyfile. However, keys for GKeyFile may not contain characters
like '='.
We need to escape such special characters, otherwise an assertion
is hit on the server:
$ nmcli connection modify "$VPN_NAME" +vpn.data 'aa[=value'
Another example of hitting the assertion is setting a user-data key
with an invalid character: "my.this=key=is=causes=a=crash".
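One generic way to make such keys safe for GKeyFile, shown only to illustrate the idea (this is not NM's actual escaping scheme):

#include <glib.h>

/* Percent-encode characters that GKeyFile cannot accept in a key
 * name (illustrative escaping only). */
static char *
escape_keyfile_key (const char *key)
{
    GString *out = g_string_new (NULL);
    const char *p;

    for (p = key; *p; p++) {
        if (*p == '=' || *p == '[' || *p == ']' || *p == '%')
            g_string_append_printf (out, "%%%02X", (guchar) *p);
        else
            g_string_append_c (out, *p);
    }
    return g_string_free (out, FALSE);
}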
(cherry picked from commit 8ef57d0f7e)
Most of the IPv6 methods require a non-tentative link local address
configured on the interface; we look at priv->ip6_config to determine
if such an address exists. If the configuration is out-of-sync, we may
proceed with configuration when the link-local address does not exist
or is still tentative, especially because we toggle the "disable_ipv6"
sysctl parameter just before, which clears all IPv6 addresses on the
interface.
Ensure that priv->ip6_config is up-to-date before continuing with the
IPv6 configuration.
Fixes test: @ipv6_set_ra_announced_mtu
(cherry picked from commit 78b43f7ea1)
nm_device_update_firewall_zone() would only reconfigure the firewall
zone when the device is fully activated. That means, while the device
is activating, changing the firewall zone does not work. Activation
might take a long time with DHCP, or with master devices waiting
for their slaves.
For example:
nmcli connection add type team con-name t-team ifname i-team autoconnect no
nmcli connection up t-team
Note how t-team/i-team is waiting for a slave device. During stage3,
we already set firewall.zone to default.
nmcli connection modify t-team connection.zone external
Note how changing the firewall zone does not immediately take
effect. Only later, during the IP_CHECK state, is the firewall zone
reset -- but only for devices with a differing ip_ifindex.
https://bugzilla.redhat.com/show_bug.cgi?id=1445242
(cherry picked from commit 20ccbb97d5)
For regular devices that don't have a separate ip_iface/ip_ifindex,
the ip_ifindex is left at zero. Hence, the condition is always
true and does not work as intended, resulting in setting the
firewall zone twice.
Fixes: 7cf5c326bc
(cherry picked from commit baa8b4029c)
Commit 850c97795 ("device: track system interface state in NMDevice")
introduced interface states for devices and prevented checking if a
connection should be assumed on already managed devices.
This prevented properly handling the case where an IP configuration is
added externally (outside of NM) to a managed but not (yet) activated device.
Fixes: 850c977953
(cherry picked from commit acf1067a45)
When a DHCP connection is active and the DHCP server is temporarily
unreachable, we restart DHCP several times before failing the
connection. From the user's point of view, restarting NM (and thus
assuming the existing connection) should not change this behavior.
However, if NM is restarted while the server is temporarily down, at
the moment we immediately fail because we consider the DHCP
transaction our first try. Fix this by restoring the multiple tries
when we detect that DHCP was active before because the connection is
assumed.
(cherry picked from commit e3113fdc4b)
If we don't have connectivity checking functionality, just avoid adding
a penalty to the default route of newly activated connections.
(cherry picked from commit 2524a6f852)
We call nm_device_activate_stage3_ipX_start() in various places,
e.g. after a carrier change or when a master enslaves a new device to
configure IP for the device. If the device is a slave in state
IP_CONFIG, this makes it transition to IP_CHECK, while it should stay
in IP_CONFIG until the master becomes ready. When the master is ready,
it will move slaves directly to SECONDARIES, skipping IP configuration
entirely.
(cherry picked from commit 41f6540afd)
When the operation is cancelled, we must not touch user_data. Note that
NM_POLICY_GET_PRIVATE() theoretically doesn't dereference the pointer
(does it?) but doing pointer arithmetic on a dangling pointer is a very
ugly thing to do.
And of course, the memleak.
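The usual GLib pattern for this kind of fix, sketched (illustrative callback, not the actual policy code):

static void
lookup_done (GObject *source, GAsyncResult *result, gpointer user_data)
{
    GError *error = NULL;
    GVariant *ret;

    ret = g_dbus_proxy_call_finish (G_DBUS_PROXY (source), result, &error);

    /* When cancelled, user_data may already be gone: return before
     * touching it (or anything derived from it). */
    if (g_error_matches (error, G_IO_ERROR, G_IO_ERROR_CANCELLED)) {
        g_error_free (error);
        return;
    }

    /* ... only now is it safe to use user_data ... */

    g_clear_error (&error);
    if (ret)
        g_variant_unref (ret);
}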
Fixes: 5c716c8af8
Fixes: a2cdf63204
(cherry picked from commit 3215508293)
The default timeout in dhclient is 60 seconds; if a lease can't be
obtained within that interval, dhclient sends NM a FAIL event and
then the IP method fails.
Thus, even if the user specified a greater dhcp-timeout, NM terminated
DHCP after 60 seconds. Fix this by passing an explicit timeout to
dhclient.
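A sketch of the idea: propagate the configured timeout into the generated dhclient configuration so dhclient itself waits that long (fragment; the real merge code differs):

/* dhclient.conf supports a "timeout" statement; emit the value from
 * ipv4.dhcp-timeout instead of relying on dhclient's built-in
 * 60 second default. */
if (timeout > 0)
    g_string_append_printf (new_contents, "timeout %u;\n", timeout);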
(cherry picked from commit 82ef497cc9)
py-kickstart writes this out and there apparently are users using this.
Let them have one less problem.
Co-Authored-By: Thomas Haller <thaller@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1445414
(cherry picked from commit dbe0659ba419a77ad5ff2340bfc93c71a1bec61a)
py-kickstart writes this out. Okay -- we don't care on read and it makes
sense when there actually are addresses.
https://bugzilla.redhat.com/show_bug.cgi?id=1445414
(cherry picked from commit aa50dfc236b3806c6d7161cdea450655a1268f0d)
For example, if you want to test whether a value is present and
reset it to a different value (only if it is present), it would
be reasonable to do
if (svGetValue (s, key, &tmp)) {
    svSetValue (s, key, "new-value");
    g_free (tmp);
}
Without this patch, you could not be sure that the key was not
set to some unparsable value, which svWriteFile() would then
write out as an empty string.
Have svGetValue() return invalid values as an empty string.
That is how svWriteFile() treats them.
(cherry picked from commit 6470bed5f1ad25e20df14b333f1b083c9b390ece)
The dad_counter is hashed into the resulting address. Since we
want the hashing to be independent of the architecture, we always
hash 32 bits of dad_counter. Make the dad_counter argument of
type guint32 for consistency.
In practice this has no effect because:
- for all our (current!) architectures, guint is the same as
guint32.
- all callers of nm_utils_ipv6_addr_set_stable_privacy() keep
their dad-counter argument as guint8, so they never even pass
numbers larger than 255.
- nm_utils_ipv6_addr_set_stable_privacy() limits dad_counter
further against RFC7217_IDGEN_RETRIES.
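Why the fixed-width type matters for the hashing, sketched (the byte-order handling here is an illustration, not necessarily what the real code does):

/* Always feed exactly 4 bytes, in a fixed byte order, into the
 * checksum, so the result does not depend on sizeof (guint) or on
 * host endianness. */
static void
hash_dad_counter (GChecksum *sum, guint32 dad_counter)
{
    guint32 be = GUINT32_TO_BE (dad_counter);

    g_checksum_update (sum, (const guchar *) &be, sizeof (be));
}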
(cherry picked from commit 951e5f5bf8)
https://tools.ietf.org/html/rfc7217 says:
The resulting Interface Identifier SHOULD be compared against the
reserved IPv6 Interface Identifiers [RFC5453] [IANA-RESERVED-IID]
and against those Interface Identifiers already employed in an
address of the same network interface and the same network
prefix. In the event that an unacceptable identifier has been
generated, this situation SHOULD be handled in the same way as
the case of duplicate addresses (see Section 6).
In case of conflict, this suggests creating a new address by incrementing
the DAD counter, etc. Don't do that. If we generate an address in the
reserved range, just rehash it right away. Note that the actual address
appears random anyway, so this re-hashing is just as good as incrementing
the DAD counter and going through the entire process again.
Note that we now no longer generate certain addresses that we did
previously. But realize that we merely reject (1 + 16777216 + 128)
addresses out of 2^64. So, the likelihood of a user accidentally
generating an address that is now rejected is on the order of
9e-13 (1 / 1,099,503,173,697). That is not astronomically small, but
still extremely unlikely.
Also, the whole process is built on the idea that somebody else might
generate conflicting addresses anyway (that is what DAD is for). It means
there was always an extremely tiny chance that the address you generated
last time is suddenly taken by somebody else. So, to a user this change
looks as if these reserved addresses were claimed by another (non-existing)
host and a different address gets generated -- business as usual, as
far as SLAAC is concerned.
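A hedged sketch of the approach (helper names are hypothetical; the real logic lives in nm_utils_ipv6_addr_set_stable_privacy()):

/* Generate the interface identifier as usual; if it falls into a
 * reserved IID range (RFC 5453), hash the output again instead of
 * bumping the DAD counter and redoing the whole procedure. */
generate_iid (secret_key, ifname, network_id, dad_counter, &iid);  /* hypothetical */
while (iid_is_reserved (&iid))                                     /* hypothetical */
    rehash_iid (&iid);   /* e.g. take 64 bits of SHA-256 over the previous IID */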
(cherry picked from commit f15c4961ad)