We should no longer use nm_connection_for_each_setting_value() and
nm_setting_for_each_value(). They are fundamentally broken, as they do
not work with properties that are not backed by a GObject property,
and they cannot be fixed because they are public API.
Add an internal function _nm_connection_aggregate() to replace it.
Compare the implementation of the aggregation functionality inside
libnm with the previous two checks for secret-flags that it replaces:
- the previous approach broke the abstraction and required detailed knowledge
of secret flags. That is, it had to special-case NMSettingVpn and
GObject-property based secrets.
If we implement a new way of handling secrets (as we will need
for WireGuard), then the new way should only affect libnm-core and
not require changes elsewhere.
- it is very inefficient to iterate over all settings this way. It involves
cloning and sorting the list of settings and retrieving and cloning all
GObject properties, only to then look at the secret properties alone.
_nm_connection_aggregate() is supposed to be more flexible than just
the two new aggregate types that perform a "find-any" search. The
@arg argument and boolean return value should suffice to implement
different aggregation types in the future.
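As a rough sketch, the internal API described here has roughly the following
shape; the exact names and types in libnm-core may differ, and the first enum
value below is only an assumed example:

  /* Sketch only: an internal "aggregate" entry point. @arg is interpreted
   * per aggregate type, and the boolean result reports whether the
   * "find-any" search matched. */
  typedef enum {
      NM_CONNECTION_AGGREGATE_ANY_SECRETS,               /* assumed name */
      NM_CONNECTION_AGGREGATE_ANY_SYSTEM_SECRET_FLAGS,
  } NMConnectionAggregateType;

  gboolean _nm_connection_aggregate (NMConnection              *connection,
                                     NMConnectionAggregateType  type,
                                     gpointer                   arg);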
This also fixes NMAgentManager's check for secret flags for VPNs
(NM_CONNECTION_AGGREGATE_ANY_SYSTEM_SECRET_FLAGS). For a VPN, a secret
is an entry that either has a secret value or a secret-flag. The previous
implementation would only look at secrets that are present and check
their flags. It would not consider entries whose secret-flags are
NM_SETTING_SECRET_FLAG_NONE but which have no secret value.
- previously, the writer would use nm_keyfile_plugin_kf_set_integer() for
G_TYPE_UINT types.
That means values larger than G_MAXINT would be stored as negative
values, while the reader would always reject negative values.
Fix that by parsing the integer ourselves.
Note that we still reject the old (negative) values and there is no
compatibility for accepting such values. They were not accepted by
the reader in the past, so they are still rejected.
This affects for example the ethernet.mtu setting (arguably, the MTU
is usually set to small values where the issue was not apparent).
This is also covered by a test.
- no longer use nm_keyfile_plugin_kf_get_integer(). It calls
g_key_file_get_integer(), which uses g_key_file_parse_value_as_integer().
That one has the odd behavior of accepting "<number><whitespace><bogus>" as valid.
Note how that differs from g_key_file_parse_value_as_double(), which rejects
trailing data.
Implement the parsing ourselves (see the sketch after this list). There are some changes here:
- g_key_file_parse_value_as_integer() uses strtol() with base 10.
We no longer require a particular base, so '0x' hex values are now
allowed as well.
- bogus suffixes are now rejected, but were accepted by g_key_file_parse_value_as_integer().
However, we still accept leading and trailing whitespace, as before.
- use nm_g_object_set_property*(). g_object_set() asserts that the value
is in range, so we must not pass values to it without first checking
that they are valid.
- emit warnings when values cannot be parsed. Previously they would
have been silently ignored or would have failed an assertion during g_object_set().
- don't use "helpers" like nm_keyfile_plugin_kf_set_uint64(). These
merely call GKeyFile's setters (taking care of aliases). The setters
of GKeyFile don't do anything miraculously, they merely call
g_key_file_set_value() with the string that one would expect.
Convert the numbers/boolean ourselfs. For one, we don't require
a heap allocation to convert a number to string. Also, there is
no point in leaving this GKeyFile API, because even if GKeyFile
day would change, we still must continue to support the present
format, as that is what users have on disk. So, even if a new
way would be implemented by GKeyFile, the current way must forever
be accepted too. Hence, we don't need this abstraction.
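As an illustration only (not the actual keyfile code), strict parsing and
heap-free writing can be sketched roughly like this; the helper names below
are made up:

  #include <errno.h>
  #include <glib.h>

  /* Parse a keyfile value as int64: leading/trailing whitespace is allowed,
   * the base is auto-detected (so "0x..." works), bogus suffixes are rejected. */
  static gboolean
  example_parse_int64 (const char *str, gint64 min, gint64 max, gint64 *out)
  {
      char *end = NULL;
      gint64 v;

      errno = 0;
      v = g_ascii_strtoll (str, &end, 0);   /* skips leading whitespace itself */
      if (errno != 0 || end == str)
          return FALSE;
      while (g_ascii_isspace (end[0]))
          end++;
      if (end[0] != '\0')                   /* reject "<number><whitespace><bogus>" */
          return FALSE;
      if (v < min || v > max)
          return FALSE;
      *out = v;
      return TRUE;
  }

  /* Write a number without a heap allocation: format on the stack and call
   * plain g_key_file_set_value(). */
  static void
  example_write_uint64 (GKeyFile *kf, const char *group, const char *key, guint64 v)
  {
      char buf[32];

      g_snprintf (buf, sizeof (buf), "%" G_GUINT64_FORMAT, v);
      g_key_file_set_value (kf, group, key, buf);
  }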
It's not yet used, but it will be. We will need nm_sd_utils_unbase64mem()
to strictly validate WireGuard settings, which contain keys in base64 encoding.
Note that we also need a stub implementation for logging. It does
nothing for any logging that originates in "libnm-systemd-shared.a". That makes
sense, because "libnm.so" as a library should not log directly. Also,
"libnm.so" only uses a small portion of "libnm-systemd-shared.a", a portion
that doesn't log anything; the logging code is thus unused and dropped by the
linker with "--gc-sections".
While nm_utils_inet*_ntop() accepts a %NULL buffer to fall back
to a static buffer, don't do that.
I find the possibility of using a static buffer here error-prone
and something that should be avoided. There is of course the downside
that in some cases this requires an additional line of code to allocate
the buffer on the stack as an auto variable.
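The extra line of code this costs the caller is trivial; a minimal sketch,
using the libc INET6_ADDRSTRLEN buffer size here:

  /* INET6_ADDRSTRLEN (from <netinet/in.h>) is large enough for both
   * address families. */
  char buf[INET6_ADDRSTRLEN];

  nm_utils_inet4_ntop (addr, buf);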
Taken from systemd's in4_addr_netmask_to_prefixlen().
Yes, this adds the requirement that "int" is 32 bits. But systemd
already has the same requirement in u32ctz(), hence we cannot
build on other architectures anyway. If that ever becomes necessary,
it's easy to adjust.
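A minimal sketch of the idea (in the spirit of in4_addr_netmask_to_prefixlen(),
not a verbatim copy), which is where the 32-bit "int" assumption comes from:

  #include <stdint.h>
  #include <arpa/inet.h>

  /* Convert a contiguous IPv4 netmask (network byte order) to a prefix length.
   * __builtin_ctz() operates on "int", hence the requirement that int is 32 bits. */
  static unsigned
  netmask_to_prefixlen (in_addr_t netmask)
  {
      uint32_t h = ntohl (netmask);

      return h != 0 ? 32u - __builtin_ctz (h) : 0u;
  }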
The 'number' property in GSM settings is a legacy thing that comes
from when ModemManager used user-provided numbers, if any, to connect
3GPP modems.
Since ModemManager 1.0, this property is completely unused for 3GPP
modems, and so it doesn't make sense to use it in the NetworkManager
settings. oFono does not use it either.
For AT+PPP-based 3GPP modems, the 'number' to call to establish the
data connection is decided by ModemManager itself, e.g. for standard
GSM/UMTS/LTE modems it will connect a given predefined PDP context,
and for other modems like Iridium it will have the number to call
hardcoded in the plugin itself.
https://github.com/NetworkManager/NetworkManager/pull/261
The entire point of using version 3/5 UUIDs is to generate
stable UUIDs based on a string. It's usually important that
we don't change the UUID generation algorithm later on.
Since we didn't have a version 5 implementation, we would always
resort to the MD5-based version 3. Version 5 is recommended by RFC 4122:
    o  Choose either MD5 [4] or SHA-1 [8] as the hash algorithm; If
       backward compatibility is not an issue, SHA-1 is preferred.
Add a version 5 implementation so we can use it in the future.
All test values are generated with Python's uuid module or OSSP uuid.
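For reference, a minimal sketch of name-based version-5 generation per
RFC 4122, using GLib's GChecksum (an illustration, not the libnm
implementation verbatim):

  #include <string.h>
  #include <glib.h>

  static void
  uuid5_example (const guint8 ns[16], const char *name, guint8 out_uuid[16])
  {
      GChecksum *sum = g_checksum_new (G_CHECKSUM_SHA1);
      guint8 digest[20];
      gsize len = sizeof (digest);

      /* SHA-1 over the namespace UUID followed by the name... */
      g_checksum_update (sum, ns, 16);
      g_checksum_update (sum, (const guchar *) name, strlen (name));
      g_checksum_get_digest (sum, digest, &len);
      g_checksum_free (sum);

      /* ...then take the first 16 bytes and set version and variant bits. */
      memcpy (out_uuid, digest, 16);
      out_uuid[6] = (out_uuid[6] & 0x0F) | 0x50;   /* version 5 */
      out_uuid[8] = (out_uuid[8] & 0x3F) | 0x80;   /* RFC 4122 variant */
  }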
keyfile already supports omitting the "connection.id" and
"connection.uuid" properties. In that case, the ID is taken from the
keyfile's name, and the UUID is generated by MD5-hashing the
full filename.
No longer do this during nm_keyfile_read(); instead, let all
callers call nm_keyfile_read_ensure_*() as they see fit. This is done
for two reasons:
- a minor reason is that one day we want to expose the keyfile API
as public API. That means we also want to read keyfiles from
stdin, where no filename is available. The implementation
that parses stdin needs to define its own way of auto-generating
the ID and UUID. Note how nm_keyfile_read()'s API no longer takes a
filename argument, which would be awkward for the stdin case.
- Currently, we only support one keyfile directory, which (configurably)
is "/etc/NetworkManager/system-connections".
In the future, we want to support multiple keyfile directories, like
"/var/run/NetworkManager/profiles" or "/usr/lib/NetworkManager/profiles".
Here we want a file "foo" (which does not specify a UUID) to get the
same UUID regardless of which directory it is in. That seems better, because
then the UUID won't change as you move the file between directories.
Yes, that means the same UUID can be provided by multiple
files, but NetworkManager must already cope with that situation anyway.
Unfortunately, the UUID generation scheme hashes the full path. That
means we must hash the path name the file "foo" would have inside the
original "system-connections" directory.
Refactor the code so that it accounts for the difference between the
filename of the keyfile and the profile_dir used for generating
the UUID.
NMSetting8021x has various utility functions to set
the certificate:
- nm_setting_802_1x_set_ca_cert()
- nm_setting_802_1x_set_client_cert()
- nm_setting_802_1x_set_private_key()
- nm_setting_802_1x_set_phase2_ca_cert()
- nm_setting_802_1x_set_phase2_client_cert()
- nm_setting_802_1x_set_phase2_private_key()
They support:
- accepting a plain PKCS11 URI, with scheme set to
NM_SETTING_802_1X_CK_SCHEME_PKCS11.
- accepting a filename, with scheme set to
NM_SETTING_802_1X_CK_SCHEME_BLOB or
NM_SETTING_802_1X_CK_SCHEME_PATH.
In the latter case, the function tries to load the file and verify it.
For the private-key setters, this also involves accepting a
password. Depending on whether the scheme is BLOB or PATH, the function
will then either store only the path, or store the blob that was read
from the file.
The functions seem misdesigned to me, because their behavior is
rather obscure. E.g. they behave fundamentally differently, depending
on whether the scheme is PKCS11 or BLOB/PATH.
Anyway, improve them:
- refactor the common code into a function _cert_impl_set(). Previously,
their non-trivial implementations were copied and pasted several times;
now they all use the same implementation.
- if the function is going to fail, don't touch the setting. Previously,
the functions would first clear the certificate before trying to
validate the input. It's more logical to check for failure first and
not modify the setting if the function is going to fail (see the usage
sketch after this list).
- not every blob can be represented. For example, if we have a blob
that starts with "file://", then there is no way to set it, simply
because we don't support a prefix for blobs (like "data:;base64,").
This means that if we try to set the certificate to a particular binary,
we must check that the binary will be interpreted with the expected scheme.
Add this check.
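For illustration, a typical call with the PATH scheme looks roughly like the
sketch below (caller code, not part of this change); with the rework, the
setting is left untouched when the call fails:

  #include <NetworkManager.h>

  static gboolean
  set_ca_cert_from_file (NMSetting8021x *s_8021x, const char *path, GError **error)
  {
      NMSetting8021xCKFormat format = NM_SETTING_802_1X_CK_FORMAT_UNKNOWN;

      /* With NM_SETTING_802_1X_CK_SCHEME_PATH the file is loaded and verified,
       * but only the path is stored in the setting. */
      return nm_setting_802_1x_set_ca_cert (s_8021x,
                                            path,
                                            NM_SETTING_802_1X_CK_SCHEME_PATH,
                                            &format,
                                            error);
  }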
It's only used for testing, so this change is not very relevant.
Still, I think our crypto code should succeed in not leaving
key material in memory. Refactor the code to do that, although
the way the PEM file gets composed is quite a hack (good enough
for tests, though).
nm_utils_rsa_key_encrypt() is internal API which is only used for testing.
Move it to nm-crypto.h (where it fits better) and rename it to make the
testing aspect obvious.
The GBytes has a suitable cleanup function, which zeroes the certificate
data in memory.
Also, all callers that require the certificate actually converted
it into a GBytes later anyway. This way, they can reuse the same instance
(avoiding an additional copy of the data), and the memory will be properly
cleared when it is freed.
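A rough sketch of the idea, wrapping secret data in a GBytes that clears it on
destruction (helper names are made up; real code would also want a memset that
cannot be optimized away):

  #include <string.h>
  #include <glib.h>

  typedef struct {
      guint8 *data;
      gsize   len;
  } SecretBuf;

  static void
  secret_buf_destroy (gpointer user_data)
  {
      SecretBuf *buf = user_data;

      memset (buf->data, 0, buf->len);   /* clear key material before freeing */
      g_free (buf->data);
      g_free (buf);
  }

  /* Takes ownership of @data; every user of the returned GBytes shares the
   * same instance and thus the same cleanup. */
  static GBytes *
  secret_bytes_new_take (guint8 *data, gsize len)
  {
      SecretBuf *buf = g_new (SecretBuf, 1);

      buf->data = data;
      buf->len = len;
      return g_bytes_new_with_free_func (data, len, secret_buf_destroy, buf);
  }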
Follow our convention that items in headers are all named with
an "NM" prefix.
Also, "nm-crypto-impl.h" contains internal functions that are to be implemented
by the corresponding crypto backends. Distinguish their names as well.
There are two aspects: the public crypto API that is provided by the
"nm-crypto.h" header, and the internal header that crypto backends
need to implement. Split them.
There should be a clear distinction between whether an array
is a NUL-terminated string or binary data with a length.
crypto_md5_hash() is already complicated enough. Adjust its
API to only support binary arguments, and thus have "guint8 *" type.
We already have nm_utils_str_utf8safe_escape() to convert a
NUL-terminated string to a UTF-8 string. nm_utils_str_utf8safe_escape()
operates under the assumption that the input string is usually already valid
UTF-8, and in that case returns the input string verbatim. That way, in the
common, expected cases, the string just looks like a regular UTF-8 string.
However, in case there are invalid UTF-8 sequences (or backslash
escape characters), the function will use backslash escaping to encode
the input string as a valid UTF-8 sequence. Note that the escaped
sequence can be reverted to the original non-UTF-8 string via
unescape.
An example where this is useful are file names or interface names,
which are not in a defined encoding but are NUL terminated and commonly
ASCII or UTF-8 encoded.
Extend this to also handle buffers that are not NUL terminated. The same
applies, except that the process cannot be reverted via g_strcompress(),
because the NUL character cannot be unescaped.
This will be useful to escape a Wi-Fi SSID. Commonly we expect the SSID
to be in UTF-8/ASCII encoding and we want to print it verbatim. Only
if that is not the case do we fall back to backslash escaping. The
original value can still be fully recovered via unescape. The difference
between an SSID and a filename is that the former can contain '\0'
bytes.
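To illustrate the concept only (this is not the libnm implementation; the real
function keeps valid UTF-8 sequences verbatim and only escapes the rest),
escaping a length-delimited buffer such as an SSID could look like:

  #include <glib.h>

  static char *
  example_buf_escape (const guint8 *buf, gsize len)
  {
      GString *s = g_string_sized_new (len + 1);
      gsize i;

      for (i = 0; i < len; i++) {
          guint8 c = buf[i];

          if (c == '\\')
              g_string_append (s, "\\\\");
          else if (c < 0x20 || c >= 0x7F)
              g_string_append_printf (s, "\\%03o", c);   /* also handles '\0' */
          else
              g_string_append_c (s, (char) c);
      }
      /* g_strcompress() reverts these escapes, except that an escaped '\0'
       * cannot become an embedded NUL in a NUL-terminated result. */
      return g_string_free (s, FALSE);
  }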
Note that in the NetworkManager API (D-Bus, libnm, and nmcli),
the features are called "feature-xyz". The "feature-" prefix
is used because NMSettingEthtool will possibly gain support
for options that are not only -K|--offload|--features, for
example -C|--coalesce.
The "xyz" suffix is either the name the ethtool utility uses for the
feature ("tso", "rx") or, if the ethtool utility specifies no alias for
that feature, the name from the kernel's ETH_SS_FEATURES ("tx-tcp6-segmentation").
If possible, we prefer the ethtool utility's naming.
Also note how the features "feature-sg", "feature-tso", and
"feature-tx" actually refer to multiple underlying kernel features
at once. This too follows what the ethtool utility does.
The functionality is not yet implemented server-side.
Add a new option that allows activating a profile multiple times
(at the same time). Previously, all profiles were implicitly
NM_SETTING_CONNECTION_MULTI_CONNECT_SINGLE, meaning that activating
a profile that is already active will deactivate it first.
This will make more sense once we also add more match options for how
profiles can be restricted to particular devices. We already have
connection.type, connection.interface-name, and (ethernet|wifi).mac-address
to restrict a profile to particular devices. However, it is for example
not possible to specify a wildcard like "eth*" to match a profile to
a set of devices by interface name. That is another missing feature,
and once we extend the matching capabilities, it makes more sense to
activate a profile multiple times.
See also https://bugzilla.redhat.com/show_bug.cgi?id=997998, which
previously restricted a connection to a single activation
at a time. This work relaxes that again.
This only adds the new property; it is not used nor implemented yet.
https://bugzilla.redhat.com/show_bug.cgi?id=1555012
1) the command line gets shorter. I frequently run `make V=1` to see
the command line arguments for the compiler, and there is a lot
of noise.
2) define each of these variables in one place. This makes it easy
to verify that a particular define has the same value for all
compilation units. Previously that was not obvious or
even not the case (see commit e5d1a71396
and commit d63cf1ef2f).
The point is to avoid redundancy.
3) not all compilation units need all defines. In fact, most modules
would only need a few of these defines. We aimed to pass the necessary
minimum of defines to each compilation unit, but that was non-obvious
to get right, and often we set a define that wasn't used. See for example
"src_settings_plugins_ibft_cppflags", which needlessly had "-DSYSCONFDIR".
This question is now entirely avoided by just defining all variables in
a header. We don't care to find the minimum, because every component
gets all defines from the header anyway.
4) this also avoids the situation where a module that previously did
not use a particular define gets modified to require it. Previously,
that would have required identifying the missing define and adding
it to the CFLAGS of the compilation unit. Since every compilation unit
now includes "config-extra.h", all defines are available everywhere.
5) the fact that each define is now available in all compilation units
could be perceived as a downside. But it isn't, because these defines
should have a unique name and one specific value. Defining the same
name with different values, or referring to the same value by different
names, is a bug, not a desirable feature. Since these defines should
be unique across the entire tree, there is no problem in providing
them to every compilation unit.
6) the reason why we generate "config-extra.h" this way, instead of using
AC_DEFINE() in configure.ac, is autoconf's particular handling of
directory variables. See [1].
With meson, it would be trivial to put them into "config.h.meson".
While that is not easy with autoconf, the "config-extra.h" workaround
still seems preferable to me.
[1] https://www.gnu.org/software/autoconf/manual/autoconf-2.63/html_node/Installation-Directory-Variables.html
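For illustration, the generated header is just a list of string defines,
roughly of this shape (the names and values below are examples, not the actual
contents):

  /* config-extra.h (generated; illustrative excerpt) */
  #define SYSCONFDIR     "/etc"
  #define NMCONFDIR      "/etc/NetworkManager"
  #define NMRUNDIR       "/var/run/NetworkManager"
  #define NMLIBDIR       "/usr/lib/NetworkManager"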
We commonly don't use the GLib typedefs for char/short/int/long,
but the C types directly.
$ git grep '\<g\(char\|short\|int\|long\|float\|double\)\>' | wc -l
587
$ git grep '\<\(char\|short\|int\|long\|float\|double\)\>' | wc -l
21114
One could argue that using the GLib typedefs is preferable in
public API (of our GLib-based libnm library) or where it is clearly
related to GLib, as in
g_object_set (obj, PROPERTY, (gint) value, NULL);
However, that argument does not seem strong, because in practice we don't
follow it today and seldom use the GLib typedefs.
Also, the style guide for this would be hard to formalize, because
"use them where clearly related to GLib" is a very loose suggestion.
Also note that the GLib typedefs will always just be typedefs of the
underlying C types. There is no danger of GLib changing the meaning
of these typedefs (because that would be a major API break of GLib).
A simple style guide is instead: don't use these typedefs.
No manual actions, I only ran the bash script:
FILES=($(git ls-files '*.[hc]'))
sed -i \
-e 's/\<g\(char\|short\|int\|long\|float\|double\)\>\( [^ ]\)/\1\2/g' \
-e 's/\<g\(char\|short\|int\|long\|float\|double\)\> /\1 /g' \
-e 's/\<g\(char\|short\|int\|long\|float\|double\)\>/\1/g' \
"${FILES[@]}"
A configuration with duplicate tc qdiscs and tfilters is not valid;
reject it in verify(). Note that nm_setting_tc_config_add_qdisc() and
nm_setting_tc_config_add_tfilter() can't add duplicate entries and so
the only way to achieve an invalid configuration is setting the
properties directly.
https://github.com/NetworkManager/NetworkManager/pull/95
Use two common defines NM_BUILD_SRCDIR and NM_BUILD_BUILDDIR
for specifying the location of srcdir and builddir.
Note that this is only relevant for tests, as they expect
a certain directory layout to find the files that concern
them.
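For example, a test would locate a file in the source tree roughly like this
(the path below is illustrative):

  char *path = g_build_filename (NM_BUILD_SRCDIR,
                                 "libnm-core", "tests", "certs", "test-cert.pem",
                                 NULL);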