CSiphash is a first-class citizen; it's fine to use it everywhere we
need it.
NMHash wraps CSiphash and provides three things:
1) Convenience macros that make hashing nicer to use.
2) It uses a randomly generated, per-run hash seed that can be combined
with a guint static seed.
3) It is a general API for hashing data. It nowhere promises that it
actually uses siphash24, although currently it does everywhere.
NMHash is not (officially) siphash24.
Add API nm_hash_siphash42_init() and nm_hash_siphash42() to "nm-hash-utils.h",
which exposes (2) for use with regular CSiphash. You of course no longer
get the convenience macros (1), but you get plain siphash24 (which
NMHash does not promise (3)).
While at it, also add nm_hash_complete_u64(). Usually, for hashing we
want guint types. But we don't need to hide the fact that the
underlying value is first a guint64. Expose it.
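A rough sketch of how the two layers might be used side by side (the exact
signatures, in particular of the new siphash42 helpers, are assumptions based
on the description above):

  /* NMHash: convenience API with a per-run seed; no siphash24 promise. */
  NMHashState h;
  const guint8 data[] = { 1, 2, 3 };
  guint64 v64;

  nm_hash_init (&h, 1234 /* static seed, mixed with the per-run seed */);
  nm_hash_update (&h, data, sizeof (data));
  v64 = nm_hash_complete_u64 (&h);      /* new: the full guint64 result */

  /* plain CSiphash, seeded via the new helper (assumed signature): */
  CSipHash sh;

  nm_hash_siphash42_init (&sh, 1234);
  c_siphash_append (&sh, data, sizeof (data));
  v64 = c_siphash_finalize (&sh);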
NetworkManager loads (and generates) a secret key as
"/var/lib/NetworkManager/secret_key".
The secret key is used for seeding a per-host component when generating
hashed, stable data. For example, it contributes to "ipv4.dhcp-client-id=duid",
"ipv6.addr-gen-mode=stable-privacy", "ethernet.cloned-mac-address=stable", etc.
As such, it corresponds to the identity of the host.
Also "/etc/machine-id" is the host's identity. When cloning a virtual machine,
it may be a good idea to generate a new "/etc/machine-id", at least in those
cases where the VM's identity shall be different. Systemd provides various
mechanisms for doing that, like accepting a new machine-id via kernel command line.
For the same reason, the user should also regenerate NetworkManager's
secret key when the host's identity shall change. However, that is less obvious,
less understood and less documented.
Support and use a new variant of the secret key. This secret key is combined
with "/etc/machine-id" by hashing them together via sha256. That means that when
the user generates a new machine-id, NetworkManager's per-host key also changes.
Since we don't want to change behavior for existing installations, we
only do this when generating a new secret key file. For that, we encode
a version tag inside the "/var/lib/NetworkManager/secret_key" file.
Note that this is all abstracted by nm_utils_secret_key_get(). For
version 2 secret-keys, it internally combines the secret_key file with
machine-id (via sha256). The advantage is that callers don't need to care that
the secret-key now also contains the machine-id. Also, since we want to
stick to the previous behavior if we have an old secret-key, this is
nicely abstracted. Otherwise, the caller would not only need to handle
two per-host parts, but it would also need to check the version to
determine whether the machine-id should be explicitly included.
At this point, nm_utils_secret_key_get() should be renamed to
nm_utils_host_key_get().
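A minimal sketch of the version-2 combination (a hypothetical helper using
GChecksum; the actual code inside nm_utils_secret_key_get() may look different):

  /* assumes <glib.h> and <string.h> */
  static void
  combine_host_key_v2 (const guint8 *secret_key,
                       gsize secret_key_len,
                       const char *machine_id,
                       guint8 out_digest[32])
  {
      GChecksum *sum = g_checksum_new (G_CHECKSUM_SHA256);
      gsize digest_len = 32;

      /* sha256 (secret_key || machine-id) */
      g_checksum_update (sum, secret_key, secret_key_len);
      g_checksum_update (sum, (const guchar *) machine_id, strlen (machine_id));
      g_checksum_get_digest (sum, out_digest, &digest_len);
      g_checksum_free (sum);
  }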
g_file_get_contents() fails with domain G_FILE_ERROR and code G_FILE_ERROR_NOENT
when the file does not exist.
That wasn't obvious to me; nm_utils_error_is_notfound() to the rescue.
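For illustration, a caller could now handle the missing file like this
(a sketch; the variable names are made up):

  gs_free char *contents = NULL;
  gsize len;
  GError *error = NULL;

  if (!g_file_get_contents (filename, &contents, &len, &error)) {
      if (nm_utils_error_is_notfound (error)) {
          /* the file does not exist: domain G_FILE_ERROR,
           * code G_FILE_ERROR_NOENT (not G_IO_ERROR_NOT_FOUND). */
      }
      g_clear_error (&error);
  }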
Fixes: dbcb1d6d97
For older NetworkManager versions, a match spec that only contained except:
specifiers could never yield a positive match. That is not very useful and
got fixed by commit 242de347adbf653a709607979d36a0da1ca3ff0b (core: fix
device spec matching for a list of "except:").
Still, adjust the configuration snippet so that it also works with
NetworkManager versions that predate the fix.
If the spec specifies only negative matches (and none of them matches),
then the result shall be positive.
Meaning:
[connection*] match-device=except:dhcp-plugin:dhclient
[connection*] match-device=except:interface-name:eth0
[.config] enabled=except:nm-version:1.14
should be the same as:
[connection*] match-device=*,except:dhcp-plugin:dhclient
[connection*] match-device=*,except:interface-name:eth0
[.config] enabled=*,except:nm-version:1.14
and match by default. Previously, such specs would never yield a
positive match, which seems wrong.
Note that "except:" already has a special meaning. It is not merely
"not:". That is because we don't support "and:" nor grouping, but all
matches are combined by an implicit "or:". With such a meaning, having
a "not:" would be unclear to define. Instead it is defined that any
"except:" match always wins and makes the entire condition to explicitly
not match. As such, it makes sense to treat a match that only consists
of "except:" matches special.
This is a change in behavior, but the alternative meaning makes
little sense.
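To illustrate the "except:" semantics above, consider for example (illustrative):
[connection*] match-device=interface-name:eth0,except:dhcp-plugin:dhclient
For a device "eth0" that uses the dhclient plugin this does not match,
despite the positive interface-name match, because the "except:" part
always wins.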
When we aggregate the connectivity state, only devices that
have the best default route should be considered.
Since we do connectivity checking per-device, the per-device check
does not care whether traffic to the internet is really routed via this
device.
But when talking about the global connectivity state, we care mostly
about the (best) default route. So, we should not allow a device with a
worse or no default route to contribute its connectivity state.
Fixes: 6b7e9f9b22
If the URI does not specify a port, we always assumed port 80. That is
wrong for https. Arguably, https is discouraged for connectivity checking,
but we still shouldn't break it.
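A minimal sketch of the intended behavior (illustrative only, not the actual
parsing code):

  /* Illustrative: pick the default port based on the URI scheme
   * when no explicit port was given. */
  static guint16
  default_port_for_uri (const char *uri)
  {
      if (g_str_has_prefix (uri, "https://"))
          return 443;
      return 80;
  }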
Fixes: 9664f284a1
The settings of NMConnectivity can change at any time, by reloading the
configuration.
When reloading the configuration, we don't want to interrupt or cancel
the pending requests; they should just complete with the old settings with
which they started. Note that NMDevice is smart enough that when a
newer request completes earlier, it invalidates all older, still pending
requests.
That means we cannot rely on the configuration values to stay alive. Fix that
by adding a new ref-counted struct for these parameters.
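A sketch of what such a ref-counted parameter struct could look like (all
names are hypothetical):

  typedef struct {
      int ref_count;
      char *uri;
      char *response;
      guint interval;
  } ConConfig;

  static ConConfig *
  con_config_ref (ConConfig *config)
  {
      config->ref_count++;
      return config;
  }

  static void
  con_config_unref (ConConfig *config)
  {
      if (--config->ref_count > 0)
          return;
      g_free (config->uri);
      g_free (config->response);
      g_free (config);
  }

Each pending request keeps its own reference, so a configuration reload only
drops the manager's reference while in-flight requests keep using the values
they started with.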
Fixes: 2cec94bacc
During a config change notification, we determine a "changed" value
to know whether things significantly changed.
Also, we want to log a warning about invalid configuration
only when the config actually changed. Previously, when the URI was
invalid, we would log an error message on every reload (SIGHUP),
even if the configuration did not change. There is no need to
warn multiple times about the same thing.
Keep track of the configured URI in priv->uri. Whenever that changes,
we know the user reconfigured something. But now the URI might also
be set to an invalid value. That means we need to remember whether
the URI is valid.
Also, log a warning if we fail to parse the host and port. Already
before, such a URI was considered invalid and we would effectively
not do connectivity checking.
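A rough sketch of that bookkeeping (hypothetical names, e.g. the
parse_host_and_port() helper and the priv fields):

  if (!nm_streq0 (uri, priv->uri)) {
      changed = TRUE;
      g_free (priv->uri);
      priv->uri = g_strdup (uri);
      priv->uri_valid =    priv->uri
                        && parse_host_and_port (priv->uri, &priv->host, &priv->port);
      if (priv->uri && !priv->uri_valid)
          _LOGW ("invalid uri '%s' for connectivity check", priv->uri);
  }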
- use the cleanup attribute instead of explicit free/unref.
- check that the result has the expected address family.
Arguably, it always should, unless there is a bug in systemd-resolved.
By that reasoning, we wouldn't have to check the address length
either.
- don't use strndup() for values that are later freed by g_free().
We should be consistent about whether to use malloc()/free() or g_malloc()/g_free().
- don't use strcasecmp(). Always use the locale-independent g_ascii_strcasecmp()
instead.
- use nm_utils_inet_ntop() instead of inet_ntop(). It's our preferred
wrapper, which has stricter semantics (for example, it cannot fail
and its input arguments are more strictly defined).
- use nm_clear_g_free() instead of g_clear_pointer() (see the short fragment below).
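A small illustrative fragment of the style these points aim for (variable
names are made up):

  gs_free char *name = NULL;                  /* cleanup attribute instead of explicit g_free() */

  if (g_ascii_strcasecmp (value, "yes") == 0) /* locale-independent comparison */
      name = g_strdup (value);

  nm_clear_g_free (&priv->cached_name);       /* same as g_clear_pointer (&priv->cached_name, g_free) */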
If the user disabled systemd-resolved, two things seem apparent:
- the user does not want us to use systemd-resolved
- NetworkManager is not pushing the DNS configuration to
systemd-resolved.
It seems to me that we should not consult systemd-resolved in that case.
Each time "systemd-resolved" was enabled/disabled in combination with another
plugin (which stayed unchanged), another pair of signal handlers was
connected. That's wrong.
Fixes: d4eb4cb45f
In most cases, we don't want the translated string (only marked
for translation, and then the caller programmatically decides
whether to translate or not).
The few places that always call gettext() can do it explicitly.
Now that our functions are all "no_l10n" by default, rename them.
While at it, replace "AF_INET" with "IPv4" in the logging. The connectivity check
logging line is already much too long; save a few characters. Also,
I think the meaning of "AF_INET" is less clear (to a novice user) than
"IPv4".
The NMDBusManager owns a reference to the system bus. Expose it, so we
can use it. Note that g_bus_get() -- as an alternative to get the system-bus
singleton -- is asynchronous, and g_bus_get_sync() has an API that makes
one wonder what it does. Since we already have a reference to the connection
object we want to use, just expose it.
Since we determine the connectivity state of each device individually,
the global connectivity state is an aggregate of all these states.
I am not sure whether devices that don't have the (best) default route
for their respective address family should be considered here. But anyway.
When we aggregate the best connectivity, we chose the numerically largest
value. That is wrong, because PORTAL is numerically smaller than
LIMITED.
That means that if you have two devices, one with connectivity LIMITED and
one with connectivity PORTAL, then LIMITED wrongly wins.
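One way to express the intended ordering is a hypothetical helper that maps
the state to an aggregation priority (illustrative, not necessarily the
actual fix):

  static int
  connectivity_state_priority (NMConnectivityState state)
  {
      switch (state) {
      case NM_CONNECTIVITY_UNKNOWN: return 0;
      case NM_CONNECTIVITY_NONE:    return 1;
      case NM_CONNECTIVITY_LIMITED: return 2;
      case NM_CONNECTIVITY_PORTAL:  return 3;  /* ranks above LIMITED despite the smaller enum value */
      case NM_CONNECTIVITY_FULL:    return 4;
      }
      return 0;
  }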
Fixes: 6b7e9f9b22
https://bugzilla.redhat.com/show_bug.cgi?id=1619873
By setting "connection.permissions", a profile is restricted to a
particular user.
That means, for example, that another user cannot see, modify, delete,
activate or deactivate the profile. It also means that the profile
will only autoconnect when the user is logged in (has a session).
Note that root is always able to activate the profile. Likewise, the
user is also allowed to manually activate their own profile, even if no
session currently exists (which can easily happen with `sudo`).
When the user logs out (the session goes away), we want to disconnect
the profile. However, there are conflicting goals here:
1) if the profile was activated by the root user, then logging out the user
should not disconnect the profile. The patch fixes that by not
binding the activation to the connection if the activation is done
by the root user.
2) if the profile was activated by the owner when it had no session,
then it should stay alive until the user logs in (once) and logs
out again. This is already handled by the previous commit.
Yes, this point is odd. If you first do
$ sudo -u $OTHER_USER nmcli connection up $PROFILE
the profile activates despite not having a session. If you then
$ ssh guest@localhost nmcli device
you'll still see the profile active. However, the moment the SSH session
ends, a session closes and the profile disconnects. It's unclear how to
solve that any better. I think a user who cares about this should not
activate the profile without having a session in the first place.
There are quite a few special cases, in particular with internal
activations. In those cases we need to decide whether to bind the
activation to the profile's visibility.
Also, expose the "bind" setting in the D-Bus API. Note that in the future
this flag may become modifiable via the D-Bus API. Likewise, we may also add
related API that allows tweaking the lifetime of the activation.
Also, I think we broke handling of connection visibility with 37e8c53eee
"core: Introduce helper class to track connection keep alive". This
should be fixed now too, with improved behavior.
Fixes: 37e8c53eee
https://bugzilla.redhat.com/show_bug.cgi?id=1530977
A user can always manually activate his/her own profile, even if the
profile is currently invisible.
It could be invisible because the profile has "connection.permissions"
set but the user has no active session.
Now, if the user should be able to activate such a profile, then the
activation cannot fail simply because the profile was invisible all along.
Instead, the alive check must only fail if a connection becomes invisible
that was visible at some point since we started monitoring it.
Now that the keep-alive instance defaults to being alive, we can
always arm it when starting to activate the active-connection.
The keep-alive instance may have been armed earlier already:
for example, when binding its lifetime to a D-Bus name or
when watching the connection's visible state.
However, at the moment when we queue the active-connection for
activation, we also want to make sure that the keep-alive instance is
armed. It is nicer for consistency reasons.
Note that nm_keep_alive_arm() has no effect if nm_keep_alive_disarm()
was called earlier already. Also note that NMActiveConnection will
disarm the keep-alive instance when changing to a state greater than
ACTIVATED. So, all works together nicely.
Also, no longer arm the keep-alive instance in the constructor of
NMActiveConnection. It would essentially mean that the instance
is armed very early.
Also, as another point of interest, arm the keep-alive instance
when registering the signal handler in "nm-policy.c".
NMKeepAlive supports conditions (watches, bindings) that determine
whether the instance signals an alive state. Currently, there are two
conditions: watching a connection's VISIBLE state and watching whether a
D-Bus client is registered.
If none of these conditions are enabled, then the default should
be that the instance is alive. If one or more conditions are enabled,
then they cooperate in a particular way, depending on whether the
condition is enabled and whether it is satisfied.
Previously, the code couldn't differentiate whether a D-Bus watch was
not enabled or whether the D-Bus client already disconnected. It would
just check whether priv->dbus_client was set, but that alone was not
sufficient to differentiate. That means we couldn't determine whether
the keep-alive instance is alive (by default, as no D-Bus watch is
enabled) or dead (because the D-Bus client disconnected). Fix that by
explicitly tracking whether we watch the D-Bus client with a
priv->dbus_client_watching flag.
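A simplified sketch of the resulting alive decision (field names other than
dbus_client and dbus_client_watching are hypothetical, and the
connection-visibility part is simplified):

  static gboolean
  _is_alive (NMKeepAlivePrivate *priv)
  {
      if (priv->dbus_client_watching && !priv->dbus_client)
          return FALSE;   /* watched D-Bus client has disconnected: dead */
      if (priv->connection_watching && !priv->connection_visible)
          return FALSE;   /* watched connection became invisible: dead */
      return TRUE;        /* no condition enabled, or all conditions satisfied: alive */
  }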
Note that this problem was previously avoided by only arming the
instance when enabling a condition. But I think the concept of arming the
keep-alive instance merely means that the instance is ready to fire
events. It should not mean that the user only arms the instance after
registering a condition.
Also, without this, we couldn't unbind the NMKeepAlive instance from all
conditions without making it dead right away. Unbinding/disabling all
conditions should render the instance alive, not dead.
It's not clear whether calling nm_manager_deactivate_connection()
removes the active-connection entirely from the list. Just to be sure, use
nm_manager_for_each_active_connection_safe(), which allows deleting the
current entry while iterating (all other modifications to the list are not
allowed).
set-forced is currently unused, so drop it.
NMKeepAlive in principle determines the alive-status based on multiple
aspects that in combination render the instance alive or dead. These
aspects cooperate in a particular way.
By default, a keep-alive instance should be alive. If there are conditions
enabled that further determine the alive-state, then these conditions
cooperate in a particular way. As it was, the force-flag would just
overrule them all.
But is that useful? The nm_keep_alive_set_forced() API also means that only
one caller can have control over the flag. Independent callers cannot
cooperate on setting the flag, because there is no reference counting or
registered handles.
At least today, it's unclear whether this flag really should overrule all
other conditions and how this flag would actually be used. Drop it for
now.
NMKeepAlive is a proper GObject type, with a specific API that on the one
end allows configuring watches/bindings, and on the other end exposes
an is-alive property and the owner instance. That's great, as NMActiveConnection
is not concerned with either end (moving complexity away from
"nm-active-connection.c") and as we can later reuse NMKeepAlive with
NMSettingsConnection.
However, we don't need to wrap this API in NMActiveConnection. Doing so
means we need to expose all the watch/bind functions as part of the
NMActiveConnection API as well.
The only ugliness here is that NMPolicy subscribes to the property-changed
signal of the keep-alive instance, which would fail horribly if
NMActiveConnection ever decided to swap the keep-alive instance (in
which case NMPolicy would have to disconnect the signal, and possibly
reconnect it to another NMKeepAlive instance). We avoid that by just not
doing that and documenting it.
NMKeepAlive is a full API independent of NMActiveConnection. For good
reasons:
- it moves complexity away from nm-active-connection.c
- in the future, we can use NMKeepAlive also for NMSettingsConnection
As such, the user should also interact directly with NMKeepAlive,
instead of going through NMActiveConnection. For that to work, it
must be possible to get the owner of the NMKeepAlive instance, that is,
the object which is kept alive.
We should not blindly change the device's state. Instead, call
nm_device_disconnect_active_connection(), which will figure out whether
the device state needs to change. Note that it is very possible that
the active-connection instance is still queued, not yet queued, or
already disconnected. nm_device_disconnect_active_connection() does
the right thing in all cases.
Previously, if @active referenced a device but was neither queued
nor the current activation request, nothing was done.
Now, in such a case, still call nm_active_connection_set_state_fail().
Note that nm_active_connection_set_state_fail() has no effect on
active-connections that are already in disconnected state (which
we would expect for such an active connection). Likely there is no
visible change here, but it feels more correct to ensure the active
connection is always failed.