We already have Update(), UpdateUnsaved() and Save(), which serve
similar purposes. We will need a form of update with another argument.
Most notably, to block autoconnect while doing the update.
Other use cases could be to prevent reapplying connection.zone and
connection.metered, or to reapply all changes.
Instead of adding a specific update function that only serves that
new use case, add an extensible Update2() function. It can be extended
to cope with future variants of update.
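As a rough sketch (not part of this message; the argument layout is an
assumption for illustration), a caller could then invoke the method over
D-Bus, passing the flags and an extensible argument dictionary:

  #include <gio/gio.h>

  /* Hypothetical sketch: assumes Update2 takes (a{sa{sv}} settings,
   * u flags, a{sv} args) and returns an a{sv} result dictionary. */
  static void
  call_update2 (GDBusConnection *bus,
                const char      *connection_path,
                GVariant        *settings,   /* a{sa{sv}}, the new settings */
                guint32          flags)      /* e.g. a block-autoconnect flag */
  {
      g_dbus_connection_call (bus,
                              "org.freedesktop.NetworkManager",
                              connection_path,
                              "org.freedesktop.NetworkManager.Settings.Connection",
                              "Update2",
                              g_variant_new ("(@a{sa{sv}}u@a{sv})",
                                             settings,
                                             flags,
                                             g_variant_new_array (G_VARIANT_TYPE ("{sv}"),
                                                                  NULL, 0)),
                              G_VARIANT_TYPE ("(a{sv})"),
                              G_DBUS_CALL_FLAGS_NONE,
                              -1, NULL, NULL, NULL);
  }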
(cherry picked from commit 98ee18d888)
The wireless-security setting has a 'wep-key-type' property that is
used to specify the WEP key type and is needed because some keys could
be interpreted both as a passphrase or a hex/ascii key.
The ifcfg-rh plugin currently stores the key type implicitly: if
wep-key-type is 'passphrase' it uses the KEY_PASSPHRASE%d variable, if
it's 'key' the KEY%d variable, and when it's 'unknown' it uses either
variable depending on the detected type (preferring 'key' in case
both are compatible).
This means that some connections will be read differently from how
they were written, because once the KEY (or KEY_PASSPHRASE) is read
there is no way to know whether the 'wep-key-type' property was 'key'
(or 'passphrase') or 'unknown'.
Fix this by persisting the key type explicitly in the file. The new
variable is redundant in most cases because the variables used for
keys also determine the key type.
https://bugzilla.redhat.com/show_bug.cgi?id=1518177
(cherry picked from commit c6eb18ee05)
Until now the ifcfg-rh plugin merged the values of 'ipv4.dns-options'
and 'ipv6.dns-options' and wrote the result to the RES_OPTIONS
variable. This is wrong because writing a connection and reading it
back gives a different connection compared to the original.
This behavior has existed since DNS options were introduced, but it
became more evident now that we reread the connection after write,
because after doing a:
$ nmcli connection modify ethie ipv4.dns-options ndots:2
the connection has both ipv4.dns-options and ipv6.dns-options set. In
order to delete the option, a user has to delete it from both
settings:
$ nmcli connection modify ethie ipv4.dns-options "" ipv6.dns-options ""
To improve this, let's use different variables for IPv4 and IPv6. To
keep backwards compatibility, IPv4 still uses RES_OPTIONS, while IPv6
uses a new IPV6_RES_OPTIONS variable.
https://bugzilla.redhat.com/show_bug.cgi?id=1517794
(cherry picked from commit 8379785560)
We should use the same str2bool parser everywhere: _nm_utils_ascii_str_to_bool().
Incidentally, this function allows more forms of expressing a boolean
value.
$ nmcli connection modify "$CON" ipv4.routes '1.2.3.4/32 1.2.3.1 onlink=1'
Error: failed to modify ipv4.routes: invalid option 'onlink=1': invalid boolean value '1' for attribute 'onlink'.
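With the common parser, the 'onlink=1' from the example above parses
fine. A minimal sketch of the assumed usage (the helper's exact
signature and accepted spellings are assumptions here, not stated in
this message):

  /* Sketch only: assumed signature
   * int _nm_utils_ascii_str_to_bool (const char *str, int default_value),
   * returning the parsed value or default_value if unrecognized, so
   * spellings like "1"/"0" or "true"/"false" are both accepted. */
  int onlink = _nm_utils_ascii_str_to_bool (value, -1);

  if (onlink == -1)
      /* not a valid boolean */;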
(cherry picked from commit 26e7abc65e)
Currently both bridge.mac-address and ethernet.cloned-mac-address get
written to the same MACADDR ifcfg-rh variable; the ethernet property
wins if both are present.
When one property is set and the connection is saved (and thus reread),
both properties are populated with the same value. This is wrong
because, even if the properties have the same meaning, the setting
plugin should not read something different from what was written. Also
consider that after the following steps:
$ nmcli con mod c ethernet.cloned-mac-address 00:11:22:33:44:55
$ nmcli con mod c ethernet.cloned-mac-address ""
the connection will still have the new MAC address set in the
bridge.mac-address property, which is certainly unexpected.
In general, mapping multiple properties to the same variable is
harmful and must be avoided. Therefore, let's use a different variable
for bridge.mac-address. This changes behavior, but not so much:
- connections that have MACADDR set will behave as before; the only
difference will be that the MAC will be present in the wired
setting instead of the bridge one;
- initscripts compatibility is not relevant because MACADDR for
bridges was a NM extension;
- if someone creates a new connection and sets bridge.mac-address NM
will set the BRIDGE_MACADDR property instead of MACADDR. But this
shouldn't be a big concern as bridge.mac-address is documented as
deprecated and should not be used for new connections.
https://bugzilla.redhat.com/show_bug.cgi?id=1516659
(cherry picked from commit fb191fc282)
Since the order was arbitrary before, we can also sort it.
Also rework it to avoid creating a temporary GList of keys.
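A generic GLib sketch of that pattern (illustration only, not the actual
code): fetch the keys as an array and sort it in place, instead of
building a temporary GList:

  #include <glib.h>
  #include <stdlib.h>
  #include <string.h>

  static int
  cmp_keys (const void *a, const void *b)
  {
      return strcmp (*(const char *const *) a, *(const char *const *) b);
  }

  static void
  for_each_key_sorted (GHashTable *hash, void (*cb) (const char *key))
  {
      guint n, i;
      gpointer *keys = g_hash_table_get_keys_as_array (hash, &n);

      qsort (keys, n, sizeof (gpointer), cmp_keys);
      for (i = 0; i < n; i++)
          cb (keys[i]);
      g_free (keys);
  }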
(cherry picked from commit d5b3c6ee53)
This matters when backslash-escaping ASCII characters <= 0xF, to
produce "\\XX" instead of "\\ X". For example, a tabulator is "\\09".
This also can trigger an nm_assert() failure, when building with
--with-more-asserts=5 (or higher).
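A minimal illustration of the formatting difference (the escaping prefix
and variable names are just for the example):

  /* str is a GString, ch a guint8. "%2X" pads to width 2 with a space,
   * so 0x09 (tab) comes out as "\ 9"; "%02X" zero-pads and yields the
   * intended "\09". */
  g_string_append_printf (str, "\\%02X", ch);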
(cherry picked from commit 89c89143b5)
The kernel doesn't support it for IPv6.
This is especially useful if you combine static routes
with DHCP. In that case, you might want to get the device route
to the gateway automatically, but add a static route for it.
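For illustration, roughly how such a route could be constructed with
libnm (a sketch; the exact API usage is an assumption, and the attribute
is assumed to be the "onlink" route option mentioned earlier):

  #include <sys/socket.h>
  #include <NetworkManager.h>

  /* Sketch: a static route whose gateway lies outside the (DHCP-assigned)
   * subnet, marked onlink so the kernel treats the gateway as directly
   * reachable on the interface. */
  static NMIPRoute *
  make_onlink_route (GError **error)
  {
      NMIPRoute *route;

      route = nm_ip_route_new (AF_INET, "203.0.113.0", 24, "198.51.100.1", -1, error);
      if (!route)
          return NULL;
      nm_ip_route_set_attribute (route, "onlink", g_variant_new_boolean (TRUE));
      return route;
  }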
(cherry picked from commit 0ed49717ab)
The number of authentication retries is useful also for passwords aside
from 802-1x settings. For example, src/devices/wifi/nm-device-wifi.c also has
a retry counter and uses a hard-coded value of 3.
Move the property so that it can be used in general, although it is still
not implemented for other settings.
This is an API and ABI break.
When the ifcfg-rh plugin writes an 802-1x setting it currently ignores
the password-raw property and so the password disappears when the
connection is saved. Add support for the property.
Normalizing can be complicated, as settings depend on each other and possibly
conflict.
That is because verify() must anticipate exactly whether normalization will
succeed and what the result will look like: we only want to
modify the connection if we are sure that the result will verify.
Hence, verify() and normalize() are strongly related. The implementation
should not be spread out between NMSettingOvsInterface:verify(),
NMSettingOvsPatch:verify() and _normalize_ovs_interface_type().
Also, add some unit-tests.
There is no API to get all settings. You can only ask for
settings explicitly, but that requires you to probe for them
and know which ones may exist.
The alternative API might be nm_connection_for_each_setting_value(),
but that only iterates over settings' properties. If a setting has no
properties, it is ignored.
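For contrast, a sketch of what callers have to do today, probing each
known setting type by name (illustration only):

  #include <NetworkManager.h>

  static void
  print_present_settings (NMConnection *connection)
  {
      const char *names[] = {
          NM_SETTING_CONNECTION_SETTING_NAME,
          NM_SETTING_WIRED_SETTING_NAME,
          NM_SETTING_IP4_CONFIG_SETTING_NAME,
          /* ... and every other setting type the caller knows about ... */
      };
      guint i;

      for (i = 0; i < G_N_ELEMENTS (names); i++) {
          if (nm_connection_get_setting_by_name (connection, names[i]))
              g_print ("%s\n", names[i]);
      }
  }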
We want to support a large number of routes. Reduce the number
of copies by adding internal accessor functions.
Also, work around a complaint from coverity:
46. NetworkManager-1.9.2/libnm-core/nm-utils.c:1987:
dereference: Dereferencing a null pointer "names".
Previously, nm_setting_diff() would return !(*results), which means that
if the caller passed in a hash table (empty or not), the return value
would always be FALSE, indicating a difference.
That is not documented and makes no sense.
The return value should solely indicate whether some difference was
found. The only convenience is that, if nm_setting_diff() created a hash
table internally and no difference was found, it would destroy
it again without returning it to the caller.
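A sketch of the intended semantics (the signature is as assumed here):

  #include <NetworkManager.h>

  /* Assumed sketch: TRUE means "no difference found"; any differences end
   * up in the results hash, whether the caller passed one in or
   * nm_setting_diff() created it internally. */
  static gboolean
  settings_equal (NMSetting *a, NMSetting *b)
  {
      GHashTable *results = NULL;
      gboolean same;

      same = nm_setting_diff (a, b, NM_SETTING_COMPARE_FLAG_EXACT,
                              FALSE /* invert_results */, &results);
      if (results)
          g_hash_table_unref (results);
      return same;
  }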
NMSettingGeneric has no properties at all. Hence, nm_connection_diff() would report that
a connection A with a generic setting and a connection B without a generic setting are
equal.
They are not. For empty settings, let nm_setting_diff() also return an empty difference
hash.
Since kernel commit a4176a9391868bfa87705bcd2e3b49e9b9dd2996 (net:
reject creation of netdev names with colons), kernel rejects any
colons in the interface name.
Since the kernel could get away with tightening up the check, we can
too.
The user cannot choose arbitrary interface names anyway; names like
"all", "default" or "bonding_masters" are all going to fail one
way or another.
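Roughly what the kernel-side rule amounts to (an illustrative
re-implementation, not NM's actual validation code):

  #include <ctype.h>
  #include <net/if.h>
  #include <stdbool.h>
  #include <string.h>

  /* Illustration of the kernel's dev_valid_name() rules: non-empty,
   * shorter than IFNAMSIZ, not "." or "..", and no '/', no whitespace
   * and (since the quoted commit) no ':'. */
  static bool
  iface_name_valid (const char *name)
  {
      size_t i, len;

      if (!name || !name[0])
          return false;
      len = strlen (name);
      if (len >= IFNAMSIZ)
          return false;
      if (strcmp (name, ".") == 0 || strcmp (name, "..") == 0)
          return false;
      for (i = 0; i < len; i++) {
          if (name[i] == '/' || name[i] == ':' || isspace ((unsigned char) name[i]))
              return false;
      }
      return true;
  }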
teamd adds the "tx_hash" property for "lacp" and "loadbalance" runners
when not present. Do the same so that our original configuration
matches the one reported by teamd.
https://bugzilla.redhat.com/show_bug.cgi?id=1497333
We often want to cascade hashing, that is, combine the
outcomes of various hash functions into a larger hash.
Instead of having each hash function return a guint hash value,
accept a hash state argument. This saves the overhead of initializing
and completing the intermediate hash states.
It also avoids losing entropy when we reduce the larger hash state
into the intermediate guint hash value.
By using a macro, we don't cast all the types to guint. Instead,
we use their native types directly. Hence, we don't need
nm_hash_update_uint64() nor nm_hash_update_ptr().
Also, for types smaller than guint, like char, we avoid hashing
the all-zero padding bytes.
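An illustrative sketch of the pattern (simplified, not the actual
implementation):

  #include <glib.h>

  /* A hash "state" that update functions append to, instead of each
   * helper returning its own guint value. */
  typedef struct { guint64 h; } HashState;

  static inline void
  hash_state_init (HashState *state, guint seed)
  {
      state->h = 5381u ^ seed;   /* djb2-style start value, for illustration */
  }

  static inline void
  hash_state_update (HashState *state, const void *data, gsize len)
  {
      const guint8 *p = data;
      gsize i;

      for (i = 0; i < len; i++)
          state->h = state->h * 33 + p[i];
  }

  /* The macro feeds each value with its native type (GCC/Clang typeof
   * extension): a char contributes one byte instead of being widened to
   * guint, and a guint64 or a pointer needs no dedicated update function. */
  #define hash_state_update_val(state, val) \
      G_STMT_START { \
          __typeof__ (val) _val = (val); \
          hash_state_update ((state), &_val, sizeof (_val)); \
      } G_STMT_END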
siphash24() is widely used by projects nowadays.
It's certainly slower than the djb hashing that we used before.
But quite likely it's fast enough for us, given how widely it is
used. I think it would be hard to profile NetworkManager to show
that the performance of hash tables is the issue, be it with
djb or siphash24.
Certainly with siphash24() it's much harder to exploit the hashing
algorithm to cause worst-case hash operations (provided that the
seed is kept private). Does this better resistance against denial
of service matter for us? Probably not, but let's better be safe than
sorry.
Note that systemd's implementation uses a different seed for each hash
table (at least, after the hash table grows to a certain size).
We don't do that and use only one global seed.
The previous NM_HASH_* macros directly operated on a guint value
and were thus close to the actual implementation.
Replace them by adding a NMHashState struct and accessors to
update the hash state. This hides the implementation better
and would allow us to carry more state. For example, we could
switch to siphash24() transparently.
For now, we still do basically a form of djb2 hashing, albeit with
a differing start seed.
Also add nm_hash_str() and nm_str_hash():
- nm_hash_str() is our own string hashing implementation
- nm_str_hash() is our own string hashing implementation, but with a
GHashFunc signature, suitable to pass it to g_hash_table_new().
Also, it has this name in order to remind you of g_str_hash(),
which it is replacing.
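For example (usage as described above, assuming the header declaring
nm_str_hash() is included), it can directly replace g_str_hash() when
creating a hash table:

  GHashTable *table;

  /* nm_str_hash() has the GHashFunc signature, so it drops in where
   * g_str_hash() was used before. */
  table = g_hash_table_new (nm_str_hash, g_str_equal);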
"nm-utils/nm-shared-utils.h" shall contain utility function without other
dependencies. It is intended to be used by other projects as-is.
nm_utils_random_bytes() requires getrandom() and a HAVE_GETRANDOM configure
check. That makes it more cumbersome to re-use "nm-shared-utils.h", in
cases where you don't care about nm_utils_random_bytes().
Split nm_utils_random_bytes() out to a separate file.
Same for hash utils, which depend on nm_utils_random_bytes(). Also, hash
utils will eventually be extended to use siphash24.
Introduce a NM_HASH_INIT() function. It makes the places
where we initialize a hash with a certain seed visually clear.
Also, move them from "shared/nm-utils/nm-shared-utils.h" to
"shared/nm-utils/nm-macros-internal.h". We might want to
have NM_HASH_INIT() non-inline (hence, define it in the
source file).
So that the man page will display:
The permitted values are: NM_SETTING_IP6_CONFIG_ADDR_GEN_MODE_EUI64
(0) or NM_SETTING_IP6_CONFIG_ADDR_GEN_MODE_STABLE_PRIVACY (1).
instead of
The permitted values are: "eui64" or "stable-privacy".
since the latter is not useful at all for an int32 property.
Unfortunately the enum names are quite long and don't look very good
in a table, but that's another problem.