nm_auth_chain_new_subject() cannot return %NULL, so these checks are only
noise. Also, there are already calls that correctly rely on the fact that
this function cannot fail.
We use glib, where memory allocation by definition cannot fail. That means
a lot of functions in our code base simply cannot fail. This is a very nice
property (to have functions that cannot fail), so don't add error
checking that is not useful.
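For illustration, a minimal sketch of this pattern (with a hypothetical
type, not taken from the code base):

    #include <glib.h>

    /* g_new0() aborts the process on allocation failure instead of returning
     * NULL, so constructors built on it cannot fail and callers need no
     * NULL check. */
    typedef struct {
        int dummy;
    } ExampleData;

    static ExampleData *
    example_data_new (void)
    {
        return g_new0 (ExampleData, 1);  /* cannot return NULL */
    }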
For historic reasons, NMSettingBond is implemented differently from other
settings. It uses a strdict and adds some validation on top of that.
The idea was probably to be able to treat bond options more generically.
But in practice we cannot treat them as opaque values; we need to know,
validate and understand all the options. Thus, this implementation with a
strdict is not nice.
The user can set the GObject property NM_SETTING_BOND_OPTIONS to any
strdict, and the setter performs no validation or normalization. That
is probably good, because g_object_set() cannot return an error to
signal invalid settings. As is often the case, we also have corresponding
C API like nm_setting_bond_add_option() and nm_setting_bond_remove_option().
It should be possible to get the same result both with the C API and with
the GObject property setter. Since there is already a way to set
certain invalid values, it does not help if the C API tries to prevent
that. That implies that add-option also does not perform additional
validation and sets whatever the user asks.
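For example, roughly (an illustrative sketch, not code from the tree),
both of the following end up storing the same, unvalidated content:

    #include <NetworkManager.h>

    static void
    set_bond_options (NMSettingBond *s_bond)
    {
        GHashTable *options;

        /* via the C API */
        nm_setting_bond_add_option (s_bond, "mode", "active-backup");
        nm_setting_bond_add_option (s_bond, "miimon", "100");

        /* or via the GObject property, which accepts any strdict */
        options = g_hash_table_new (g_str_hash, g_str_equal);
        g_hash_table_insert (options, "mode", "active-backup");
        g_hash_table_insert (options, "miimon", "100");
        g_object_set (s_bond, NM_SETTING_BOND_OPTIONS, options, NULL);
        g_hash_table_unref (options);
    }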
Remove all validation from nm_setting_bond_add_option() and
nm_setting_bond_remove_option(). This validation was only very basic
anyway. It was calling nm_setting_bond_validate_option(), which can check
whether the string is (for example) an integer, but it cannot do
validation beyond one option. In most cases, the validation needs to
take into account the bond mode or other options, so validating one
option in isolation is not very useful.
Proper validation should instead be done via nm_connection_verify().
However, due to another historic oddity, that verification is very
forgiving too and doesn't reject many invalid settings when it should.
That is hard to fix, because making validation more strict can break
existing (and working) configurations. However, verify() already contains
basic validation via nm_setting_bond_validate_option(). So with the previous
behavior, nm_setting_bond_add_option() would silently do nothing (only
returning %FALSE) for invalid options, while now it adds the invalid
option to the dictionary -- only to have it fail validation later during
nm_connection_verify(). That is a slight change in behavior, but it seems
preferable.
It is acceptable because most users that call
nm_setting_bond_add_option() already understand the meaning and valid
values of the options they set. The keyfile and ifcfg-rh readers are the
few exceptions, which really just parse a string dictionary without
needing to understand the options. But nmtui or nmstate already know the
option they want to set. They don't expect a failure there, nor do they
need the validation.
Note that this change in behavior could be dangerous, for example for the
keyfile/ifcfg-rh readers, which silently ignored errors before. We
don't want them to start failing if they read invalid options from a
file, so instead let those callers explicitly pre-validate the value
and log a warning.
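For example, a reader could do something along these lines (a sketch; the
warning call is just a stand-in for the readers' actual logging):

    #include <NetworkManager.h>

    /* Pre-validate an option read from a file, so an invalid value is
     * skipped with a warning instead of failing nm_connection_verify()
     * later. */
    static void
    read_bond_option (NMSettingBond *s_bond, const char *key, const char *value)
    {
        if (!nm_setting_bond_validate_option (key, value)) {
            g_warning ("ignoring invalid bond option '%s' = '%s'", key, value);
            return;
        }
        nm_setting_bond_add_option (s_bond, key, value);
    }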
https://bugzilla.redhat.com/show_bug.cgi?id=1887523
Previously, both nm_setting_connection_add_permission() and the GObject
property setter would merely assert that the provided values are valid
(and otherwise do nothing). That is bad for handling errors.
For example, we use the property setter to initialize the setting from
keyfile and GVariant (D-Bus). That means, if a user provides an invalid
permissions value, we would emit a g_critical() assertion failure but
otherwise ignore the configuration. What we instead need to do is to
accept the value and afterwards fail verification. That way, a proper
error message can be generated.
$ nmcli connection add type ethernet autoconnect no ifname bogus con-name x connection.permissions 'bogus:'
(process:429514): libnm-CRITICAL **: 12:12:00.359: permission_new: assertion 'strchr (uname, ':') == NULL' failed
(process:429514): libnm-CRITICAL **: 12:12:00.359: nm_setting_connection_add_permission: assertion 'p != NULL' failed
Connection 'x' (2802d117-f84e-44d9-925b-bfe26fd85da1) successfully added.
$ nmcli -f connection.permissions connection show x
connection.permissions: --
While at it, also don't track the permissions in a GSList. Tracking one
permission in a GSList requires 3 allocations (one for the user string,
one for the Permission struct, and one for the GSList node). Instead,
use a GArray. That is still not great, because the GArray cannot be
embedded inside NMSettingConnectionPrivate, so tracking one permission
also requires 3 allocations (which is really a fault of GArray). So,
GArray is not better in the common case where there is only one
permission. But even in that worst case (a single entry), GArray is no
worse than GSList.
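Schematically, the idea is (a simplified sketch, not the actual struct
layout):

    #include <glib.h>

    /* Entries live inline in the GArray's buffer, so apart from the array
     * itself and its buffer there is no per-entry node allocation as with
     * GSList. */
    typedef struct {
        char *item;
    } Permission;

    static GArray *
    permissions_new (void)
    {
        return g_array_new (FALSE, FALSE, sizeof (Permission));
    }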
Also change the API of nm_setting_connection_add_permission().
Previously, the function would assert that the arguments are in
a certain form (strcmp (ptype, "user") == 0), but still document
such behavior as regular operation ("[returns] %FALSE if @ptype
or @pitem was invalid"). Don't assert against the function arguments.
Also, if you first set the user to "fo:o", then
nm_setting_connection_add_permission() would accept it -- only at
a later phase would the property setter assert against such values.
Also, the function would return %FALSE both if the input value was
invalid (an error) and if the value already existed. I think the
function should not treat a duplicate entry like a badly formatted
input.
Now the function does much less asserting of the arguments, but will
return %FALSE only if the values are invalid. And it will silently ignore
duplicate entries.
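A sketch of the resulting behavior, based on the described semantics (the
user names are made up):

    #include <NetworkManager.h>

    static void
    add_permissions (NMSettingConnection *s_con)
    {
        /* a valid entry is added and %TRUE is returned */
        g_assert (nm_setting_connection_add_permission (s_con, "user", "alice", NULL));

        /* adding the same entry again is silently ignored, not an error */
        g_assert (nm_setting_connection_add_permission (s_con, "user", "alice", NULL));

        /* an invalid item (here: containing ':') returns %FALSE without
         * triggering an assertion failure */
        g_assert (!nm_setting_connection_add_permission (s_con, "user", "al:ice", NULL));
    }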
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/636
Run:
./contrib/scripts/nm-code-format.sh -i
./contrib/scripts/nm-code-format.sh -i
Yes, it needs to run twice because the first run doesn't yet produce the
final result.
Signed-off-by: Antonio Cardace <acardace@redhat.com>
nm_utils_hexstr2bin_full() is our general hexstr-to-binary parsing
function. It uses (either mandatory or optional) delimiters. Before,
if delimiters are in use, it would accept individual hex digits.
E.g. "a:b" would be accepted as "0a:0b".
Add an argument that prevents accepting such single digits.
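To illustrate the difference, a toy check of the new, stricter rule (not
the real implementation):

    #include <glib.h>

    /* With the lenient behavior, "a" between delimiters is read as "0a", so
     * "a:b" parses to the bytes 0x0a 0x0b. The stricter rule requires every
     * delimiter-separated group to be exactly two hex digits, rejecting
     * "a:b" while still accepting "0a:0b". */
    static gboolean
    all_groups_are_hex_pairs (const char *str, char delimiter)
    {
        gsize n = 0;

        for (; *str; str++) {
            if (*str == delimiter) {
                if (n != 2)
                    return FALSE;
                n = 0;
            } else if (g_ascii_isxdigit (*str))
                n++;
            else
                return FALSE;
        }
        return n == 2;
    }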
"XXX" is used for tagging parts of code that still need work before
merging a patch. If you want to highlight/mark a comment that gets merged,
use either "TODO" or "FIXME".
Of course, even "TODO" and "FIXME" should be avoided in favor of just
doing/fixing it. Such things tend to never be done/fixed.
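For example (the comment texts are made up):

    /* XXX: must be resolved before this patch can be merged */

    /* TODO: extend this to also cover IPv6 */
    /* FIXME: this leaks @addr on the error path */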
It seems that with LTO the compiler can sometimes think that these variables
are uninitialized. Usually those code paths are only reached after an
assertion was hit (g_return*()), but we still need to work around the warning.
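The workaround is simply an explicit initialization, roughly like this
(illustrative sketch):

    #include <glib.h>

    static int
    name_to_code (const char *name)
    {
        int code = 0;  /* initialized only to silence a -Wmaybe-uninitialized
                        * false positive that shows up with LTO */

        g_return_val_if_fail (name, 0);

        if (g_str_equal (name, "up"))
            code = 1;
        else if (g_str_equal (name, "down"))
            code = 2;
        else
            g_return_val_if_reached (0);

        return code;
    }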
Imagine we wait for a device, the device appears and starts activating.
That might take a while (during which it has a pending action). In the
meantime, the "connection.wait-device-timeout" timeout expires.
Now we want to log a warning about profiles that don't have their
device upon timeout. However, such a profile does have its device; the
device is merely still busy at that point. Skip logging a message about
those profiles.
Fixes: 3df662f534 ('settings: rework wait-device-timeout handling and consider device compatibility')
A profile can configure "connection.wait-device-timeout" to indicate
that startup complete is blocked until a suitable device is around.
This is useful for NetworkManager-wait-online and initrd mode.
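For example, a profile can opt in with (hypothetical profile name; the
timeout value is in milliseconds):

$ nmcli connection modify eth0-profile connection.wait-device-timeout 60000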
Previously, we looked at NMPlatform to see whether a link with a matching
interface name was present. That is wrong because it cannot handle
profiles that rely on the "ethernet.mac-address" setting or other "match"
settings. Also, the mere presence of the link does not yet mean
that the NMDevice was created and is ready. In fact, there is a race here:
NMPlatform indicates that the device is ready (unblocking NMSettings),
but there is no corresponding NMDevice yet that would keep NetworkManager
busy and block startup complete.
Rework this. Now, only check whether there is a compatible device for
the profile.
Since we wait for compatible devices, this now works not only based on the
interface name. Note that we do some optimizations so that we don't have
to re-evaluate all profiles (w.r.t. all devices) whenever something on a
device changes: we only care about this when all devices finally become
ready.
Also, we no longer start the timeout for "connection.wait-device-timeout"
when the profile appears. Instead, there is one system-wide start time
(NMSettingsPrivate.startup_complete_start_timestamp_msec). That simplifies
code and makes sense: we start waiting when NetworkManager is starting, not
when the profile gets added. Also, we wait for all profiles to become
ready together.
NMSettings needs access to the list of all devices, which is tracked
by NMManager. Of course, this ties NMSettings and NMManager closer
together. Note that NMManager already owns a reference to NMSettings,
so they are in fact related.
The alternatives of just letting NMSettings reference NMManager (and
vice versa) would be more complicated, and likely not help to simplify
the code (on the contrary).
They serve a similar purpose.
Previously, nm-json-aux.h contained the virtual function table for accessing
the dynamically loaded libjansson. But there is no reason why our own
helper functions from nm-json.h cannot be there too.
We load libjansson with dlopen() anyway, and already before this change it
could happen that libjansson was not available. In that case, we would not
crash, but simply proceed without JSON validation.
Since libnm-core no longer uses libjansson directly, but only via
"nm-glib-aux/nm-json.h", we can just always compile with that, and use
it at runtime. That means, libjansson is not a build dependency for
libnm anymore, so we don't need a compile time check.
Note that if you build without libjansson, then JANSSON_SONAME is
undefined, and loading it will still fail at runtime. So, even if
we now always build with all our code enabled, it only works if you
actually build with libjansson. Still, it's simpler to drop the
conditional build, as the only benefit is a (minimally) smaller
build.
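Roughly, the loading boils down to this (a sketch based on the description
above, not verbatim code):

    #include <dlfcn.h>
    #include <stddef.h>

    /* JANSSON_SONAME is only defined when building against libjansson.
     * Without it, loading is skipped, JSON validation is unavailable, and
     * nothing crashes. */
    static void *
    load_jansson (void)
    {
    #ifdef JANSSON_SONAME
        return dlopen (JANSSON_SONAME, RTLD_NOW | RTLD_LOCAL);
    #else
        return NULL;
    #endif
    }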
Code like "get_setting_default_uint (s_bridge, NM_SETTING_BRIDGE_FORWARD_DELAY)" looks
up the default value of the GObject property. That default value is
known at build type. Looking it up is an unnecessary overhead, for
something that is already known.
Also, the code isn't generic (meaning, it doesn't iterate of a set of
properties names and treats them without explicitly naming each
property). If we already name the property for which we want the default
value, we can just as well name the default value.
Additionally, add an assertion that what we would look up matches
to what we think is the default.
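Schematically (a sketch with hypothetical names; the assumed default of 15
is only for illustration):

    #include <NetworkManager.h>

    /* Minimal stand-in for the file-local lookup helper mentioned above. */
    static guint
    get_setting_default_uint (NMSetting *setting, const char *prop)
    {
        GParamSpec *pspec;

        pspec = g_object_class_find_property (G_OBJECT_GET_CLASS (setting), prop);
        return ((GParamSpecUInt *) pspec)->default_value;
    }

    #define DEF_FORWARD_DELAY 15u  /* assumed default, for illustration only */

    static guint
    forward_delay_default (NMSettingBridge *s_bridge)
    {
        /* use the constant known at build time, and assert that it still
         * matches the GObject property default */
        g_assert (get_setting_default_uint (NM_SETTING (s_bridge),
                                            NM_SETTING_BRIDGE_FORWARD_DELAY)
                  == DEF_FORWARD_DELAY);
        return DEF_FORWARD_DELAY;
    }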
Add a new "path" property to the match setting, which can be used to
restrict a connection to devices with a given hardware path. The new
property is a list of patterns that are matched against the ID_PATH
udev property of devices.
ID_PATH represents the topological persistent path of a device and
typically contains a subsystem string (pci, usb, platform, etc.) and a
subsystem-specific identifier. Some examples of paths are:
pci-0000:00:02.0
pci-0000:00:14.0-usb-0:5:1.0
platform-1c40000.ethernet
systemd-networkd also has a "Path=" option to match a device by udev
ID_PATH.
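For example (hypothetical profile name):

$ nmcli connection modify my-profile match.path "pci-0000:00:02.0"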
When a device is not marked as unmanaged, but also not actively managed
by NetworkManager, then NetworkManager will generate an in-memory
profile to represent the active state, if the device is up and
configured (with an IP address).
Such profiles are commonly named like "eth0", and they are utterly
confusing to users, because they look as if NetworkManager actually
manages the device, when it really just shows that somebody else configures
the device.
We should express this better in the UI, hence add flags to indicate
that.
In practice, such profiles are UNSAVED, NM_GENERATED, and VOLATILE. But
add an explicit flag to represent that.
https://bugzilla.redhat.com/show_bug.cgi?id=1816202
nm_keyfile_read() and nm_keyfile_write() will be public API.
As such, they must be flexible and extensible for future needs.
There is already the handler callback that fully solves this
(e.g. a future handler event could request whether a certain
behavior is enabled or not).
As an additional possibility for future extension, add a flags
argument. Currently no flags are implemented.
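A rough sketch of a caller (the exact signature and the flags name
NM_KEYFILE_HANDLER_FLAGS_NONE here are assumptions based on the
description):

    #include <NetworkManager.h>

    /* Read a connection from a keyfile, passing no handler and no flags. */
    static NMConnection *
    read_profile (GKeyFile *keyfile, GError **error)
    {
        return nm_keyfile_read (keyfile,
                                "/etc/NetworkManager/system-connections",
                                NM_KEYFILE_HANDLER_FLAGS_NONE,
                                NULL,  /* handler */
                                NULL,  /* user_data */
                                error);
    }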
Four properties are potentially interesting to all callbacks: the
current group, key, setting and property name.
Refactor the code to track these properties in NMKeyfileHandlerData
and distinguish between the property name and the keyfile key.
Setting the error on the callback does not work well from bindings.
Instead, let bindings call a (future) nm_keyfile_handler_data_fail_with_error()
function on the handler_data to indicate failure.
As the keyfile handler callback will become public API, it needs to be
usable via bindings. A plain void pointer is not usable. Instead, add
a new type that can be used via introspection.
This will become public API. The enums for the read and write callbacks
serve very similar purposes. Merge them so that we have fewer
types in the public API.
We don't need the mask argument. If the caller wants to check only for certain
flags, she can do that right away with
NM_FLAGS_ANY (nm_settings_connection_autoconnect_blocked_reason_get (sett_con), flags)
When the secrets for a connection are updated, unblock autoconnection
in case it had been blocked previously due to bad or no
secrets. Otherwise we would need to manually activate the connection
or restart NM to get another try with the new secrets.
Updating the timestamp marks the keyfile database as dirty. Avoid
that if there is no change. Of course, nm_key_file_db_set_value()
itself already checks whether there are any changes, and does nothing
if there aren't.
Simply perform the check earlier, so that we do nothing in that case.