Supporting PolicyKit required no additional library, just extra code
to handle the D-Bus calls. For that, there was a compile-time option
to strip out even that code. Note that you could (and still can)
configure the system not to use PolicyKit at runtime. The point of the
option was to reduce the binary size in case you don't need it.
Remove this option. If we aimed for such aggressive optimization of
the binary size, we should instead make all device types disablable
at configuration time. We don't do that either, nor pursue other
low-hanging fruit, because it's better to always enable features unless
they require external dependencies.
Also, the next commit will make more use of NMAuthManager, so having
it disabled at compile time makes even less sense.
Don't use the GAsyncResult pattern for the internal API of auth-manager.
Instead, use a simpler API with stricter semantics (a rough sketch follows
the list below):
- return a call-id handle when scheduling the authorization request.
The request is always scheduled asynchronously and thus the call-id
is never %NULL.
- the call-id can be used to cancel the request. It can be used exactly
once, and only before the callback is invoked.
- the async request keeps the auth-manager alive. It needs to do so,
because when cancelling the request we might not be done yet: we
might still need to issue a CancelCheckAuthorization D-Bus call
(and handle its completion as well).
- the callback is always invoked exactly once.
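To illustrate, a rough sketch of the pattern in GLib-style C; the names
below are made up for illustration and do not claim to be the actual
NMAuthManager API:

    #include <glib.h>

    typedef struct _AuthCallId AuthCallId;

    /* Invoked exactly once per scheduled request, also when the request
     * was cancelled (then with a cancelled error). */
    typedef void (*AuthDoneFunc) (AuthCallId *call_id,
                                  gboolean    is_authorized,
                                  GError     *error,
                                  gpointer    user_data);

    /* Always dispatched asynchronously, hence the returned call-id is
     * never NULL. The manager is kept alive until the callback fired,
     * because cancelling may still require a CancelCheckAuthorization
     * D-Bus call. */
    AuthCallId *auth_check_authorization        (gpointer      manager,
                                                 const char   *permission,
                                                 AuthDoneFunc  callback,
                                                 gpointer      user_data);

    /* May be called at most once per call-id, and only before the
     * callback was invoked. */
    void        auth_check_authorization_cancel (AuthCallId   *call_id);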
Currently, NMAuthManager's API is effectively only called by NMAuthChain.
The point of this change is to make NMAuthManager's API more consumable,
and thus let users use it directly (instead of going through the
NMAuthChain layer).
As is well known, we don't do a good job during shutdown of NetworkManager
at releasing all resources and cancelling pending requests. This rework also
makes it possible to actually get this right. See the comment in
nm_auth_manager_force_shutdown(). But yes, a controlled shutdown is still a
bit complicated, because we cannot just complete synchronously: we need to
issue CancelCheckAuthorization D-Bus calls and give those requests time to
complete. The new API introduced by this patch makes that easier.
We need the setter because we want the property to be set only once,
during creation of the instance. Nobody cares about the GObject
property getter otherwise.
It's more efficient, as it saves a lookup by name. It's also more
idiomatic to do it this way: at first I didn't find where the signal
gets emitted, because usually we don't emit by name.
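For illustration, a generic GLib sketch of the two emission styles (not
the actual NetworkManager code):

    #include <glib-object.h>

    enum { SIGNAL_CHANGED, LAST_SIGNAL };
    static guint signals[LAST_SIGNAL] = { 0 };

    static void
    example_class_init (GObjectClass *klass)
    {
        /* The signal ID is cached once at class-init time... */
        signals[SIGNAL_CHANGED] =
            g_signal_new ("changed",
                          G_TYPE_FROM_CLASS (klass),
                          G_SIGNAL_RUN_FIRST,
                          0, NULL, NULL, NULL,
                          G_TYPE_NONE, 0);
    }

    static void
    example_emit (GObject *self)
    {
        /* ...so emission by ID needs no string lookup and is easy to grep: */
        g_signal_emit (self, signals[SIGNAL_CHANGED], 0);

        /* whereas emission by name looks up "changed" on every emission: */
        g_signal_emit_by_name (self, "changed");
    }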
NMAuthChain schedules (possibly) multiple authentication requests.
When they have all completed, it invokes the result-callback once.
There is no need to schedule this result-callback on another idle-handler,
because nm_auth_manager_polkit_authority_check_authorization() should
guarantee never to invoke the callback synchronously, but only on a clean
call-stack (to avoid problems with re-entrancy). At that point, NMAuthChain
does not need to delay this any further.
NMAuthChain is not really ref-counted. True, we have an internal ref-counter
to ensure that the instance stays alive while the callback is invoked. However,
the user cannot take additional references as there is no nm_auth_chain_ref().
When the user wants to get rid of the auth-chain, with the current API it
is important that the callback won't be called after that point. From the
name nm_auth_chain_unref(), it sounds as if there could be multiple references
to the auth-chain, and as if merely unreferencing the object might not
guarantee that the callback is canceled. Luckily, that is not the case,
because there is no real ref-counting involved here.
Just rename the destroy function to make this clearer.
When one D-Bus object exposes (the path of) another D-Bus object,
we want the path property to be cleared before the other object
gets unexported. That essentially requires registering for the
"exported-changed" signal.
Add a helper struct NMDBusTrackObjPath to help with this.
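Roughly, the idea looks like this (a hedged sketch; apart from the
"exported-changed" signal name mentioned above, the names are made up
and the real NMDBusTrackObjPath may differ):

    #include <glib-object.h>

    typedef struct {
        GObject    *obj;         /* the tracked D-Bus object */
        const char *path;        /* its cached object path, or NULL */
        gulong      handler_id;
    } TrackObjPath;

    static void
    exported_changed_cb (GObject *obj, gpointer user_data)
    {
        TrackObjPath *track = user_data;

        /* The tracked object changes its export state: drop the cached
         * path now, so the owning object can announce the change before
         * the path becomes stale. */
        track->path = NULL;
    }

    static void
    track_obj_path_set (TrackObjPath *track, GObject *obj)
    {
        track->obj = obj;
        track->handler_id = g_signal_connect (obj, "exported-changed",
                                              G_CALLBACK (exported_changed_cb),
                                              track);
    }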
When a settings-connection gets deleted, we need to bring down the
NMActiveConnection that contains it. However, we shouldn't just unexport
the active connection from D-Bus. Instead, clear the settings path.
We need to drop the path because the connection is going away. It's a
bit ugly that an active-connection might reference no
settings-connection; however, this only happens during shutdown.
The alternative would be to keep the settings-connection object
in a zombie state, exported on D-Bus. However, that seems even more
confusing to me.
The GObject property "path" exists for the sole reasons so that
other components can connect to the "notify::path" signal.
However, notifications are blocked by g_object_freeze_notify(),
and especially for NMDBusObject we want to make use of that to
combine multiple PropertiesChanged events into one.
This blocking of the notification is not desired when we wait
for "notify::path". Convert it to a regular signal, which
will not be blocked.
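For illustration, a small generic GLib example of the blocking behavior
(not the NMDBusObject code):

    #include <glib-object.h>

    static void
    path_notify_cb (GObject *obj, GParamSpec *pspec, gpointer user_data)
    {
        /* Runs only once notifications are thawed again. */
    }

    static void
    example (GObject *obj)
    {
        g_signal_connect (obj, "notify::path",
                          G_CALLBACK (path_notify_cb), NULL);

        g_object_freeze_notify (obj);
        g_object_notify (obj, "path");   /* queued, handler not invoked yet */
        /* ... the object might already get unexported here ... */
        g_object_thaw_notify (obj);      /* only now does path_notify_cb() run */
    }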
NMSettings already references NMSettingsConnection. Hence, NMSettingsConnection
should not at the same time hold a reference back to NMSettings. Arguably,
during shutdown we do not properly release all NMSettingsConnection instances.
For example, there is no nm_settings_stop(). But that is a bug that needs fixing.
No need to keep the NMSettings instance alive here. If that were really
necessary, it would need fixing somewhere else. Besides, we know that we leak
a lot during shutdown, so a clean shutdown needs more work anyway.
No longer rely on nm_connection_get_path() being meaningful on the server side.
It also was wrong: during update, nm_settings_connection_update()
would call
nm_utils_log_connection_diff (replace_connection, NM_CONNECTION (self), ...
where replace_connection has no path set, and nothing was logged.
Fix it by explicitly passing the D-Bus path. Another reason to do so is
that nm-core-utils.c should be independent of nm-dbus-object.h.
Essentially, nm_connection_get_path() mirrors nm_dbus_object_get_path().
However, when cloning a simple-connection, the path also gets cloned.
I think this field doesn't belong to NMConnection in the first place,
because NMConnection is not a D-Bus object; NMSettingsConnection (in
core) and NMRemoteConnection (in libnm) are.
Don't use the misleading alias, but use nm_dbus_object_get_path()
directly.
This wasn't a problem, because load_plugins() can only fail
if the settings plugins fail to load, which can only happen
if you have a broken installation.
Don't subscribe twice to the same signal. The more subscribers a
signal has, the more confusing it becomes to follow what is happening.
We can also handle the default-wired-connection in the regular
connection-removed signal.
Note that connection_removed() is registered with
g_signal_connect_after(), but that is fine. There are only a few
subscribers to this signal (and they don't do anything that interferes
here). Especially since all other subscribers subscribe with the same
priority (and hence are unordered), moving this task explicitly to run
after them does not change any ordering guarantee -- in fact, it
establishes an ordering that was previously undefined. Anyway, it
doesn't matter.
Currently we overwrite the interface rp_filter value with 2 ("loose")
only when it is 1 ("strict") because when it is 0 ("no validation") it
is already more permissive.
So, if the value for the interface is 0 and
net/ipv4/conf/all/rp_filter is 1 (as happens by default on Fedora
28), we don't overwrite it; since the kernel considers the maximum of
{all,$dev}/rp_filter, the effective value remains 'strict'.
We should instead combine the two {all,$dev}/rp_filter values, and if
the result is 1, overwrite the per-interface value with 2.
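A sketch of the intended logic; the helper names are hypothetical, not
the actual NetworkManager code:

    /* hypothetical helper that writes net/ipv4/conf/$dev/rp_filter */
    static void write_dev_rp_filter (const char *ifname, int value);

    /* The kernel uses max(all/rp_filter, $dev/rp_filter) as the effective
     * value: 0 = no validation, 1 = strict, 2 = loose. */
    static int
    effective_rp_filter (int all_value, int dev_value)
    {
        return all_value > dev_value ? all_value : dev_value;
    }

    static void
    relax_rp_filter (const char *ifname, int all_value, int dev_value)
    {
        /* Only when the combined value is 1 ("strict") do we need to
         * relax the per-interface setting to 2 ("loose"): 0 is already
         * more permissive, and 2 stays 2. */
        if (effective_rp_filter (all_value, dev_value) == 1)
            write_dev_rp_filter (ifname, 2);
    }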
https://bugzilla.redhat.com/show_bug.cgi?id=1565529
There are multiple tests with the same name in different directories; add a
unique prefix to the test names so that it is clear from the output which
one is running.
Otherwise, if connectivity checking was disabled, we would never
reset the connectivity state and leave it wrongly at UNKNOWN.
nm_device_check_connectivity_update_interval() is already called
during state-changes, so this is the right place. However,
it's still far from perfect, because we might not notice when
a default-route gets added or removed. Also, devices that are not
in ACTIVATED state are considered to have connectivity NONE, which
might not be correct.
Fixes: 0a62a0e903
The fake states still encode whether the device has a default-route.
So, they are not entirely useless. Also, don't add special handling
of "#if !WITH_CONCHECK" where we don't need it.
There is no fundamental difference between compiling without connectivity
check and disabling connectivity checks.
NMManager very much cares about changes to the connectivity state
of the device and was therefore listening to notify::connectivity
signals. However, property changed signals can be suppressed by
g_object_freeze_notify(). That is something we even encourage for
NMDBusObject instances, because the D-Bus glue makes use of the
property changed notifications and encourages combining multiple
changes by freezing notifications.
Using the property changed notifications of NMDBusObject instances for
this is ugly. Don't do that; instead, add a special signal.
It probably was no problem in practice, because very likely the
chunk of memory was aligned already.
Also, drop an unhelpful comment and fix whitespace.
It might happen that connectivity is lost only for a moment and
returns soon after. Based on that assumption, when we lose connectivity
we want a probe interval during which we check for returning
connectivity more frequently.
For that, we track the timeouts per-device.
The interval shall start at 1 second and double until the full interval
is reached (a rough sketch follows below). Actually, due to the
implementation, it's unlikely that we perform the second check already
1 second later: commonly, the first check returns before the one-second
timeout is reached and bumps the interval to 2 seconds right away.
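A minimal sketch of the doubling logic, assuming per-device bookkeeping;
the names are illustrative, not the actual NMDevice code:

    #include <glib.h>

    typedef struct {
        guint cur_interval_sec;   /* current probe interval */
        guint full_interval_sec;  /* configured connectivity-check interval */
    } ConCheckState;

    /* Called when connectivity is lost: restart probing at 1 second. */
    static void
    concheck_lost (ConCheckState *st)
    {
        st->cur_interval_sec = 1;
    }

    /* Returns the delay until the next probe and doubles the interval,
     * capped at the configured full interval. */
    static guint
    concheck_next_interval (ConCheckState *st)
    {
        guint interval = st->cur_interval_sec;

        st->cur_interval_sec = MIN (st->cur_interval_sec * 2,
                                    st->full_interval_sec);
        return interval;
    }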
Also, we go to extra lengths so that manual connectivity checks
delay the periodic checks. By being smarter about that, we can reduce
the number of connectivity checks while still keeping the promise to
check at least within the requested interval.
The complexity of bookkeeping the timeouts is remarkable. But I think
it is worth the effort and we should try hard to
- have a connectivity state that is as accurate as possible. Clearly,
connectivity checking means that we are probing, so being more intelligent
about timeout and backoff timers can result in a better connectivity
state. The connectivity state is important because we use it for
the default-route penalty and the GUI indicates bad connectivity.
- be intelligent about avoiding redundant connectivity checks. While
we want to check often to get an accurate connectivity state, we
also want to minimize the number of HTTP requests, in case
connectivity is established and supposedly stable.
Also, perform connectivity checks in every state of the device.
Even if a device is disconnected, it still might have connectivity,
for example if the user externally adds an IP address on an unmanaged
device.
https://bugzilla.gnome.org/show_bug.cgi?id=792240