This is an API break since 1.16-rc1.
Functions like _nm_utils_wireguard_decode_key() are internal API
and not accessible to a libnm user. Maybe this should be public API,
but for now it is not.
That makes it cumbersome for a client to validate the setting. The client
could only reimplement the validation (bad) or go ahead and set an invalid
value.
When setting an invalid value, the user can afterwards detect it via
nm_wireguard_peer_is_valid(), but at that point, it's not clear which
exact property is invalid.
First I wanted to keep the API conservative and not promise too much.
For example, not promising to do any validation when setting the key.
However, libnm indeed validates the key at the time of setting it
instead of doing lazy validation later. This makes sense, so we can
keep this promise and just expose the validation result to the caller.
Another downside of this is that the API just got more complicated.
But it now provides a validation API that we previously did not have.
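As an illustration, a client could now rely on the setter's return value
directly. This is only a sketch; the exact setter name and the
accept-invalid parameter are assumptions based on the description above:

  if (!nm_wireguard_peer_set_public_key (peer, key_str, FALSE)) {
      /* the key failed validation and (with accept-invalid FALSE) was not
       * stored; no need to reimplement the base64/key-length checks. */
  }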
(cherry picked from commit d7bc1750c1)
The default test timeout in meson is 30 seconds. That is not long
enough when running
$ NMTST_USE_VALGRIND=1 ninja -C build test
Note that meson supports --timeout-multiplier, and automatically
increases the timeout when running under valgrind. However, meson
does not understand that we are running tests under valgrind via
the NMTST_USE_VALGRIND=1 environment variable.
Timeouts are really not expected to be reached and are a means of last
resort. Hence, increasing the timeout to a large value is likely to
have no effect or to fix test failures where the timeout was too rigid.
It's unlikely that a test indeed hangs and that the increased timeout
causes an unnecessary increase of the wait time before aborting.
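For example, a per-test timeout can be raised explicitly in meson.build
(a sketch; the test name, executable variable and value are arbitrary):

  test('test-core-general', test_core_general_exe,
       timeout: 180)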
NMSockAddrEndpoint is an immutable structure that contains the endpoint
string of a service. It also includes the (naive) parsing of the host and
port/service parts.
This will be used for the endpoint of WireGuard's peers. But since endpoints
are not something specific to WireGuard, give it a general name (and
purpose) independent from WireGuard.
Essentially, this structure takes an endpoint string in a format that
libnm understands and splits it into the node and service arguments
for getaddrinfo().
NMSockAddrEndpoint allows endpoints that are not parsable into
a host and port part. That is useful because our settings need to be
able to hold invalid values. That is for forward compatibility (server
sends a new endpoint format) and for better error handling (have
invalid settings that can be constructed without loss, but fail later
during the NMSetting:verify() step).
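A rough sketch of the intended usage (the accessor names are assumptions,
and connect_to() is just a placeholder):

  NMSockAddrEndpoint *endpoint;

  endpoint = nm_sock_addr_endpoint_new ("[fe80::1]:51820");
  /* host and port are only available if the string could be (naively) parsed */
  if (nm_sock_addr_endpoint_get_host (endpoint)) {
      connect_to (nm_sock_addr_endpoint_get_host (endpoint),
                  nm_sock_addr_endpoint_get_port (endpoint));
  }
  nm_sock_addr_endpoint_unref (endpoint);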
The flags NM_SETTING_COMPARE_FLAG_IGNORE_AGENT_OWNED_SECRETS and
NM_SETTING_COMPARE_FLAG_IGNORE_NOT_SAVED_SECRETS act on the secret flags
to decide whether to ignore a secret.
But there was no test for how this behaves if the two settings have
differing flags.
We should no longer use nm_connection_for_each_setting_value() and
nm_setting_for_each_value(). It's fundamentally broken as it does
not work with properties that are not backed by a GObject property
and it cannot be fixed because it is public API.
Add an internal function _nm_connection_aggregate() to replace it.
Compare the implementation of the aggregation functionality inside
libnm with the previous two checks for secret-flags that it replaces:
- the previous approach broke abstraction and required detailed knowledge
of secret flags. Meaning, the callers had to special-case NMSettingVpn
and GObject-property based secrets.
If we implement a new way of handling secrets (like we will need
for WireGuard), then the new way should only affect libnm-core and
not require changes elsewhere.
- it's very inefficient to iterate over all settings. It involves
cloning and sorting the list of settings and retrieving and cloning all
GObject properties, only to look at the secret properties alone.
_nm_connection_aggregate() is supposed to be more flexible than just
the two new aggregate types that perform a "find-any" search. The
@arg argument and boolean return value can suffice to implement
different aggregation types in the future.
This also fixes NMAgentManager's check of secret flags for VPNs
(NM_CONNECTION_AGGREGATE_ANY_SYSTEM_SECRET_FLAGS). A secret for VPNs
is a property that either has a secret or a secret-flag. The previous
implementation would only look at secrets that are present and
check their flags. It wouldn't consider secret-flags that are
NM_SETTING_SECRET_FLAG_NONE when the corresponding secret is absent.
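A minimal sketch of such a "find-any" aggregation call (how @arg is used
per aggregate type is an assumption; here it is simply unused):

  if (_nm_connection_aggregate (connection,
                                NM_CONNECTION_AGGREGATE_ANY_SYSTEM_SECRET_FLAGS,
                                NULL)) {
      /* at least one secret (or secret-flags property) is system-owned */
  }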
- previously, the writer would use nm_keyfile_plugin_kf_set_integer() for
G_TYPE_UINT types.
That means values larger than G_MAXINT would be stored as negative
values. On the other hand, the reader would always reject negative
values.
Fix that by parsing the integer ourselves.
Note that we still reject the old (negative) values and there is no
compatibility for accepting such values. They were not accepted by
the reader in the past, so they are still rejected.
This affects, for example, the ethernet.mtu setting (arguably, the MTU
is usually set to small values where the issue was not apparent).
This is also covered by a test.
- no longer use nm_keyfile_plugin_kf_get_integer().
nm_keyfile_plugin_kf_get_integer() calls g_key_file_get_integer(), which
uses g_key_file_parse_value_as_integer(). That one has the odd
behavior of accepting "<number><whitespace><bogus>" as valid. Note how that
differs from g_key_file_parse_value_as_double(), which rejects trailing data.
Implement the parsing ourselves (see the sketch after this list). There
are some changes here:
- g_key_file_parse_value_as_integer() uses strtol() with base 10.
We no longer require a particular base, so '0x' hex values are now
allowed as well.
- bogus suffixes are now rejected but were accepted by g_key_file_parse_value_as_integer().
However, we still accept leading and trailing whitespace, as before.
- use nm_g_object_set_property*(). g_object_set() asserts that the value
is in range, so we must not pass values without first checking that
they are valid.
- emit warnings when values cannot be parsed. Previously they would
have been silently ignored or would have failed an assertion during
g_object_set().
- don't use "helpers" like nm_keyfile_plugin_kf_set_uint64(). These
merely call GKeyFile's setters (taking care of aliases). The setters
of GKeyFile don't do anything miraculous, they merely call
g_key_file_set_value() with the string that one would expect.
Convert the numbers/booleans ourselves. For one, we don't require
a heap allocation to convert a number to a string. Also, there is
no point in leaving this to the GKeyFile API, because even if GKeyFile
changed one day, we would still have to support the present
format, as that is what users have on disk. So, even if GKeyFile
implemented a new way, the current way would have to be accepted
forever too. Hence, we don't need this abstraction.
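The following is a sketch of the strict parsing described above, based on
g_ascii_strtoll() with an explicit end-pointer check. The helper name and
limits are illustrative, not the actual implementation:

  #include <errno.h>
  #include <glib.h>

  /* parse a non-negative integer; surrounding ASCII whitespace is allowed,
   * any other trailing garbage is rejected; base 0 also accepts "0x" hex */
  static gboolean
  parse_keyfile_uint (const char *str, guint64 max, guint64 *out_val)
  {
      gint64 v;
      char *end = NULL;

      errno = 0;
      v = g_ascii_strtoll (str, &end, 0);
      if (errno != 0 || end == str || v < 0 || (guint64) v > max)
          return FALSE;
      while (g_ascii_isspace (end[0]))
          end++;
      if (end[0] != '\0')
          return FALSE;
      *out_val = (guint64) v;
      return TRUE;
  }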
It's not yet used, but it will be. We will need nm_sd_utils_unbase64mem()
to strictly validate WireGuard settings, which contain keys in base64 encoding.
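A sketch of the intended use, assuming a systemd-like unbase64mem()
signature (negative errno on failure) and relying on the fact that raw
WireGuard keys are exactly 32 bytes:

  guint8 *bin = NULL;
  gsize len = 0;
  gboolean valid;

  valid = (nm_sd_utils_unbase64mem (b64_key, strlen (b64_key), &bin, &len) >= 0
           && len == 32 /* raw WireGuard key size */);
  g_free (bin);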
Note that we also need a stub implementation for logging. It does
nothing for all logging from "libnm-systemd-shared.a". This makes
sense because "libnm.so" as a library should not log directly. Also,
"libnm.so" will only use a small portion of "libnm-systemd-shared.a", which
doesn't log anything. Thus the logging code is unused and gets dropped by
the linker with "--gc-sections".
While nm_utils_inet*_ntop() accepts a %NULL buffer to fall back
to a static buffer, don't do that.
I find the possibility of using a static buffer here error-prone
and something that should be avoided. There is of course the downside
that in some cases it requires an additional line of code to allocate
the buffer on the stack as an auto variable.
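That is, instead of passing %NULL, callers now do something like:

  char buf[NM_UTILS_INET_ADDRSTRLEN];

  nm_utils_inet4_ntop (addr, buf);   /* explicit stack buffer, no shared static state */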
Taken from systemd's in4_addr_netmask_to_prefixlen().
Yes, this adds the requirement that "int" is 32 bits. But systemd
already has the same requirement in u32ctz(), hence we cannot build
on other architectures anyway. If that is ever necessary, it's easy
to adjust.
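The technique boils down to counting the trailing zero bits of the
host-order netmask. A self-contained sketch (not the exact systemd code):

  #include <arpa/inet.h>
  #include <glib.h>

  /* 255.255.255.0 -> 0xffffff00 -> 8 trailing zero bits -> prefix length 24 */
  static guint8
  netmask_to_prefixlen (in_addr_t netmask /* network byte order */)
  {
      guint32 v = ntohl (netmask);

      return v == 0 ? 0 : 32 - __builtin_ctz (v);
  }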
The 'number' property in GSM settings is a legacy thing that comes
from when ModemManager used user-provided numbers, if any, to connect
3GPP modems.
Since ModemManager 1.0, this property is completely unused for 3GPP
modems, and so it doesn't make sense to use it in the NetworkManager
settings. Ofono does not use it either.
For AT+PPP-based 3GPP modems, the 'number' to call to establish the
data connection is decided by ModemManager itself, e.g. for standard
GSM/UMTS/LTE modems it will connect a given predefined PDP context,
and for other modems like Iridium it will have the number to call
hardcoded in the plugin itself.
https://github.com/NetworkManager/NetworkManager/pull/261
The entire point of using version 3/5 UUIDs is to generate
stable UUIDs based on a string. It's usually important that
we don't change the UUID generation algorithm later on.
Since we didn't have a version 5 implementation, we would always
resort to the MD5-based version 3. Version 5 is recommended by RFC 4122:
  o  Choose either MD5 [4] or SHA-1 [8] as the hash algorithm; If
     backward compatibility is not an issue, SHA-1 is preferred.
Add a version 5 implementation so we can use it in the future.
All test values are generated with python's uuid module or OSSP uuid.
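For example, reference values can be reproduced with a one-liner like
(namespace and name are arbitrary):
  $ python3 -c 'import uuid; print(uuid.uuid5(uuid.NAMESPACE_DNS, "example.com"))'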
keyfile already supports omitting "connection.id" and
"connection.uuid". In that case, the ID is taken from the
keyfile's name, and the UUID is generated by MD5-hashing the
full filename.
No longer do this during nm_keyfile_read(); instead, let all
callers call nm_keyfile_read_ensure_*() to their liking. This is done
for two reasons:
- a minor reason is that one day we want to expose the keyfile API
as public API. That means we also want to read keyfiles from
stdin, where there is no filename available. The implementation
that parses stdin needs to define its own way of auto-generating
the ID and UUID. Note how nm_keyfile_read()'s API no longer takes a
filename as argument, which would be awkward for the stdin case.
- Currently, we only support one keyfile directory, which (configurably)
is "/etc/NetworkManager/system-connections".
In the future, we want to support multiple keyfile directories, like
"/var/run/NetworkManager/profiles" or "/usr/lib/NetworkManager/profiles".
Here we want a file "foo" (which does not specify a UUID) to get the
same UUID regardless of the directory it is in. That seems better, because
then the UUID won't change as you move the file between directories.
Yes, that means the same UUID can be provided by multiple
files, but NetworkManager must already cope with that situation anyway.
Unfortunately, the UUID generation scheme hashes the full path. That
means we must hash the path name of the file "foo" inside the
original "system-connections" directory.
Refactor the code so that it accounts for the difference between the
filename of the keyfile and the profile_dir used for generating
the UUID.
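Roughly, after nm_keyfile_read() returns a connection, a caller is now
expected to apply the fallbacks itself. The argument names below are
placeholders and the signatures are assumptions:

  /* only sets an ID/UUID if the keyfile did not specify one */
  nm_keyfile_read_ensure_id (connection, fallback_id);           /* e.g. the file name */
  nm_keyfile_read_ensure_uuid (connection, fallback_uuid_seed);  /* e.g. the path inside
                                                                  * the original profile_dir */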
NMSetting8021x has various utility functions to set
the certificate:
- nm_setting_802_1x_set_ca_cert()
- nm_setting_802_1x_set_client_cert()
- nm_setting_802_1x_set_private_key()
- nm_setting_802_1x_set_phase2_ca_cert()
- nm_setting_802_1x_set_phase2_client_cert()
- nm_setting_802_1x_set_phase2_private_key()
They support:
- accepting a plain PKCS11 URI, with scheme set to
NM_SETTING_802_1X_CK_SCHEME_PKCS11.
- accepting a filename, with scheme set to
NM_SETTING_802_1X_CK_SCHEME_BLOB or
NM_SETTING_802_1X_CK_SCHEME_PATH.
In the latter case, the function tries to load the file and verify it.
In the case of the private-key setters, this also involves accepting a
password. Depending on whether the scheme is BLOB or PATH, the function
will either set the certificate to the path or to the blob that was
read from the file.
The functions seem misdesigned to me, because their behavior is
rather obscure. E.g. they behave fundamentally differently, depending
on whether the scheme is PKCS11 or BLOB/PATH.
Anyway, improve them:
- refactor the common code into a function _cert_impl_set(). Previously,
their non-trivial implementations were copy+pasted several times;
now they all use the same implementation.
- if the function is going to fail, don't touch the setting. Previously,
the functions would first clear the certificate before trying to
validate the input. It is more logical to check for failure first and
not modify the setting if the function is going to fail.
- not every blob can be represented. For example, if we have a blob
which starts with "file://", then there is no way to set it, simply
because we don't support a prefix for blobs (like "data:;base64,").
This means that if we try to set the certificate to a particular binary,
we must check that the binary will be interpreted with the expected scheme.
Add this check.
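To illustrate the resulting behavior with one of the public setters (the
file path is only a placeholder):

  GError *error = NULL;

  if (!nm_setting_802_1x_set_ca_cert (s_8021x,
                                      "/etc/pki/tls/ca.pem",
                                      NM_SETTING_802_1X_CK_SCHEME_PATH,
                                      NULL,
                                      &error)) {
      /* loading/verification failed; with this change the previously set
       * certificate value is left untouched */
      g_warning ("invalid CA certificate: %s", error->message);
      g_clear_error (&error);
  }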
It's only used for testing, so this change is not very relevant.
Anyway, I think our crypto code should succeed in not leaving
key material in memory. Refactor the code to do that, though
how the PEM file gets composed is quite a hack (good enough
for tests, though).
nm_utils_rsa_key_encrypt() is internal API which is only used for testing.
Move it to nm-crypto.h (where it fits better) and rename it to make the
testing-aspect obvious.
The GBytes gets a suitable cleanup function, which zeros the certificate
in memory.
Also, all callers that require the certificate actually converted it
into a GBytes later anyway. This way, they can reuse the same instance
(avoiding an additional copy of the data), and the memory will be
properly cleared when freed.
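A minimal sketch of the pattern (the actual helper in libnm may look
different):

  #include <string.h>
  #include <glib.h>

  typedef struct {
      gsize len;
      guint8 data[];
  } SecretBuf;

  static void
  secret_buf_free (gpointer user_data)
  {
      SecretBuf *buf = user_data;

      /* wipe the key material before freeing (ideally with a call that
       * cannot be optimized away, like explicit_bzero()) */
      memset (buf->data, 0, buf->len);
      g_free (buf);
  }

  static GBytes *
  secret_bytes_new (gconstpointer src, gsize len)
  {
      SecretBuf *buf = g_malloc (sizeof (SecretBuf) + len);

      buf->len = len;
      memcpy (buf->data, src, len);
      return g_bytes_new_with_free_func (buf->data, len, secret_buf_free, buf);
  }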
Follow our convention that items in headers are all named with
an "NM" prefix.
Also, "nm-crypto-impl.h" contains internal functions that are to be implemented
by the corresponding crypto backends. Distinguish their names as well.
There are two aspects: the public crypto API that is provided by
"nm-crypto.h" header, and the internal header which crypto backends
need to implement. Split them.
There should be a clear distinction between whether an array
is a NUL-terminated string or binary data with a length.
crypto_md5_hash() is already complicated enough. Adjust its
API to only support binary arguments, and thus have "guint8 *" type.
We already have nm_utils_str_utf8safe_escape() to convert a
NUL-terminated string to a UTF-8 string. nm_utils_str_utf8safe_escape()
operates under the assumption that the input string is already valid UTF-8
and returns the input string verbatim. That way, in the common expected
cases, the string just looks like a regular UTF-8 string.
However, in case there are invalid UTF-8 sequences (or backslash
escape characters), the function will use backslash escaping to encode
the input string as a valid UTF-8 sequence. Note that the escaped
sequence can be reverted to the original non-UTF-8 string via
unescaping.
Examples where this is useful are file names or interface names,
which have no defined encoding but are NUL terminated and commonly
ASCII or UTF-8 encoded.
Extend this to also handle buffers that are not NUL terminated. The same
applies, except that the process cannot be reverted via g_strcompress()
-- because the NUL character cannot be unescaped.
This will be useful to escape a Wi-Fi SSID. Commonly we expect the SSID
to be in UTF-8/ASCII encoding and we want to print it verbatim. Only
if that is not the case do we fall back to backslash escaping. However, the
original value can be fully recovered via unescape(). The difference
between an SSID and a filename is that the former can contain '\0'
bytes.
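A sketch of escaping an SSID with the buffer-based variant (treat the
exact function signature and flag as assumptions):

  char *to_free = NULL;
  const char *printable;

  printable = nm_utils_buf_utf8safe_escape (ssid_data, ssid_len,
                                            NM_UTILS_STR_UTF8_SAFE_FLAG_NONE,
                                            &to_free);
  /* "printable" is valid UTF-8; non-UTF-8 bytes and '\0' are backslash-escaped
   * and the original SSID can be recovered by unescaping */
  g_print ("ssid: %s\n", printable);
  g_free (to_free);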
Note that in NetworkManager API (D-Bus, libnm, and nmcli),
the features are called "feature-xyz". The "feature-" prefix
is used because NMSettingEthtool may eventually gain support
for options beyond -K|--offload|--features, for
example -C|--coalesce.
The "xyz" suffix is either the name the ethtool utility uses for the
feature ("tso", "rx") or, if the ethtool utility specifies no alias for
that feature, the name from the kernel's ETH_SS_FEATURES ("tx-tcp6-segmentation").
If possible, we prefer the ethtool utility's naming.
Also note how the features "feature-sg", "feature-tso", and
"feature-tx" actually refer to multiple underlying kernel features
at once. This too follows what the ethtool utility does.
The functionality is not yet implemented server-side.
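For example, with libnm a feature would be set as a tri-state (a sketch;
the option name string is one of the names described above):

  /* enable TCP segmentation offload; NM_TERNARY_DEFAULT would leave it untouched */
  nm_setting_ethtool_set_feature (s_ethtool, "feature-tso", NM_TERNARY_TRUE);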
Add a new option that allows activating a profile multiple times
(at the same time). Previously, all profiles were implicitly
NM_SETTING_CONNECTION_MULTI_CONNECT_SINGLE, meaning that activating
a profile that is already active will deactivate it first.
This will make more sense as we add more match options for how
profiles can be restricted to particular devices. We already have
connection.type, connection.interface-name, and (ethernet|wifi).mac-address
to restrict a profile to particular devices. However, it is for example
not possible to specify a wildcard like "eth*" to match a profile to
a set of devices by interface-name. That is another missing feature,
and once we extend the matching capabilities, it makes more sense to
activate a profile multiple times.
See also https://bugzilla.redhat.com/show_bug.cgi?id=997998, which
previously changed connections to be restricted to a single activation
at a time. This work relaxes that again.
This only adds the new property, it is not used nor implemented yet.
https://bugzilla.redhat.com/show_bug.cgi?id=1555012