It probably was no problem in practice, because very likely the
chunk of memory was aligned already.
Also, drop an unhelpful comment and fix whitespace.
It might happen that connectivity is lost only for a moment and
returns soon after. Based on that assumption, when we lose connectivity
we want to have a probe interval where we check for returning
connectivity more frequently.
For that, we handle tracking of the timeouts per-device.
The interval shall start at 1 second and double until the full interval
is reached. Actually, due to the implementation, it's unlikely that we
already perform the second check 1 second later. That is because commonly
the first check returns before the one-second timeout is reached and bumps
the interval to 2 seconds right away.
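A minimal sketch of the intended backoff, with hypothetical names (the
real code does this bookkeeping per-device and also accounts for manual
checks):

    #include <glib.h>

    #define PROBE_INTERVAL_MIN_SEC 1

    typedef struct {
        guint cur_interval_sec;   /* current probe interval */
        guint full_interval_sec;  /* configured check interval */
    } ProbeState;

    /* double the probe interval until the configured full interval
     * is reached. */
    static guint
    probe_next_interval (ProbeState *state)
    {
        guint next = state->cur_interval_sec;

        state->cur_interval_sec = MIN (state->cur_interval_sec * 2,
                                       state->full_interval_sec);
        return next;
    }

    /* on connectivity loss, restart probing at 1 second. */
    static void
    probe_on_connectivity_lost (ProbeState *state)
    {
        state->cur_interval_sec = PROBE_INTERVAL_MIN_SEC;
    }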
Also, we go to extra lengths so that manual connectivity checks
delay the periodic checks. By being smarter about that, we can reduce
the number of connectivity checks while still keeping the promise to
check at least within the requested interval.
The complexity of bookkeeping the timeouts is remarkable. But I think
it is worth the effort and we should try hard to
- have a connectivity state that is as accurate as possible. Clearly,
connectivity checking means that we are probing, so being more intelligent
about timeout and backoff timers can result in a better connectivity
state. The connectivity state is important because we use it for
the default-route penalty and the GUI indicates bad connectivity.
- be intelligent about avoiding redundant connectivity checks. While
we want to check often to get an accurate connectivity state, we
also want to minimize the number of HTTP requests, in case the
connectivity is established and supposedly stable.
Also, perform connectivity checks in every state of the device.
Even if a device is disconnected, it still might have connectivity,
for example if the user externally adds an IP address on an unmanaged
device.
https://bugzilla.gnome.org/show_bug.cgi?id=792240
The main issue is that `nmcli networking connectivity check` uses
nm_client_check_connectivity(), which has a timeout of 25 seconds.
Using a timeout of 30 seconds server-side means that if the requests
don't complete in time, the client side will time out and abort
with a failure. That is not right.
Fix that by using a shorter timeout server side. 20 seconds is still
plenty for a small HTTP request. If the network takes longer than that,
it's fair to call that LIMITED connectivity.
An asynchronous request should either be cancellable or not keep
the target object alive. Preferably both.
Otherwise, it is impossible to do a controlled shutdown when terminating
NetworkManager. Currently, when NetworkManager is about to terminate,
it just quits the mainloop and essentially leaks everything. That is a
bug. If we ever want to fix that, every asynchronous request must be
cancellable in a controlled way (or it must not prevent objects from
getting disposed, where disposing the object automatically cancels the
callback).
Rework the asynchronous request for connectivity check to
- return a handle that can be used to cancel the operation.
Cancelling is optional. The caller may choose to ignore the handle
because the asynchronous operation does not keep the target object
alive. That means it is still possible to shut down, by everybody
giving up their reference to the target object, in which case the
callback will be invoked during dispose() of the target object.
- also, the callback will always be invoked exactly once, and never
synchronously from within the asynchronous start call. But during
cancel(), the callback is invoked synchronously from within cancel().
Note that it's only allowed to cancel an action at most once, and
never after the callback is invoked (also not from within the callback
itself).
- also, NMConnectivity already supports a fake handler, in case
the connectivity check is disabled via configuration. Hence, reuse
the same code paths also when compiling without --enable-concheck.
That means, instead of having #if WITH_CONCHECK at various callers,
move them into NMConnectivity. The downside is that if you build
without concheck, there is a small overhead compared to before. The
upside is that we reuse the same code paths when compiling with or
without concheck.
- also, the patch synchronizes the connectivity states. For example,
previously `nmcli networking connectivity check` would schedule
requests in parallel, and return the accumulated result of the
individual requests.
However, the global connectivity state of the manager might not have
been the same as the answer to the explicit connectivity check,
because while the answer for the manual check is waiting for all
pending checks to complete, the global connectivity state could
already change. That is just wrong. There are not multiple global
connectivity states at the same time, there is just one. A manual
connectivity check should have the meaning of ensuring that the global
state is up to date, but it should still return the global
connectivity state -- not the answers for several connectivity checks
issued in parallel.
This is related to commit b799de281b
(libnm: update property in the manager after connectivity check),
which tries to address a similar problem client side.
Similarly, each device has a connectivity state. While there might
be several connectivity checks per device pending, whenever a check
completes, it can update the per-device state (and return that device
state as result), but the immediate answer of the individual check
might not matter. This is especially the case when a later request
returns earlier and obsoletes all earlier requests. In that case,
earlier requests return with the result of the current device's
connectivity state.
This patch cleans up the internal API and gives better-defined behavior
to the user (thus, a simpler API which simplifies implementation for the
caller). However, getting this API right and properly handling cancel
and destruction of the target object makes the implementation more
complicated and complex. But this is not just for the sake of a nicer
API. It fixes actual issues explained above.
Also, get rid of GAsyncResult to track information about the pending request.
Instead, allocate our own handle structure, which ends up being nicer
because it's strongly typed and has exactly the properties that are
useful to track the request. Also, it gets rid of the awkward
_finish() API by passing the relevant arguments to the callback
directly.
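In rough pseudo-signatures, the API shape described above looks like
this (a sketch with hypothetical names; the actual names in
NMConnectivity differ):

    #include <glib.h>

    typedef struct _ConCheckHandle ConCheckHandle;

    /* The callback receives the result directly; there is no _finish()
     * call. It is invoked exactly once: asynchronously on completion,
     * synchronously from within cancel(), or during dispose() of the
     * target object. */
    typedef void (*ConCheckCallback) (ConCheckHandle *handle,
                                      int             state,
                                      GError         *error,
                                      gpointer        user_data);

    /* Returns a handle that may be used to cancel the operation. The
     * handle does not keep the target object alive, and the callback
     * is never invoked synchronously from within this call. */
    ConCheckHandle *concheck_start  (gpointer         target,
                                     ConCheckCallback callback,
                                     gpointer         user_data);

    /* May be called at most once, and never after the callback was
     * already invoked (also not from within the callback itself).
     * Invokes the callback synchronously. */
    void            concheck_cancel (ConCheckHandle *handle);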
Add a helper function to cache the current timestamp and return
it. The caching is a performance optimization, but it serves a
much more important purpose: repeatedly getting the timestamp
will likely yield different timings. So, commonly, within a
certain context we want to get the current time once, and stick
to that as "now".
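For illustration, the pattern could look like this (a minimal sketch
with hypothetical names):

    #include <glib.h>

    /* fetch the timestamp at most once per context and reuse it */
    static gint64
    get_now_cached (gint64 *p_now)
    {
        if (*p_now == 0)
            *p_now = g_get_monotonic_time ();
        return *p_now;
    }

    static void
    example (void)
    {
        gint64 now = 0;
        gint64 t1 = get_now_cached (&now);
        gint64 t2 = get_now_cached (&now);  /* t2 == t1: same "now" */

        (void) t1; (void) t2;
    }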
Since commit 78ed0a4a23 (device: add
IPv6 link local address via merge-and-apply) we also handle IPv6 link
local addresses like regular addresses. That is, we also add them during
merge-and-apply and sync them via nm_platform_ip6_address_sync().
ip6-address-sync loops over the platform addresses, to find which
addresses shall be deleted, and which shall be deleted in order to
fix the address order/priority. At that point, we must not ignore
link-local addresses anymore, but handle them too.
Otherwise, the link-local addresses are present during each resync,
and platform-sync thinks that the address order is wrong. That wrongly
leads to removing most addresses and re-adding them.
Fixes: 78ed0a4a23
For completeness, extend the API to support non-persistent
devices. That requires that nm_platform_link_tun_add()
returns the file descriptor.
While NetworkManager doesn't create such devices itself,
it recognizes the IFLA_TUN_PERSIST / IFF_PERSIST flag.
Since ip-tuntap (obviously) cannot create such devices,
we cannot add a test for how non-persistent devices look
in the platform cache. Well, we could add them with ioctl
directly, but it is simpler to just extend the platform
API to allow for that.
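For reference, creating a tun device with ioctl would look roughly like
this (a sketch; tun_add() and its arguments are made up, but without
TUNSETPERSIST the device vanishes when the returned file descriptor is
closed, which is why nm_platform_link_tun_add() needs to return it):

    #include <glib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/if.h>
    #include <linux/if_tun.h>

    static int
    tun_add (const char *name, gboolean persist)
    {
        struct ifreq ifr = { 0 };
        int fd;

        fd = open ("/dev/net/tun", O_RDWR | O_CLOEXEC);
        if (fd < 0)
            return -1;

        g_strlcpy (ifr.ifr_name, name, sizeof (ifr.ifr_name));
        ifr.ifr_flags = IFF_TUN | IFF_NO_PI;
        if (   ioctl (fd, TUNSETIFF, &ifr) < 0
            || (persist && ioctl (fd, TUNSETPERSIST, 1) < 0)) {
            close (fd);
            return -1;
        }
        /* the caller must keep @fd open for non-persistent devices */
        return fd;
    }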
Also, let the function from test-lldp.c (optionally) use
nm_platform_link_tun_add() to create the tap device.
Previously, it was not (reliably) possible to use nmtstp_wait_for_link*() to
only look into the platform cache, without trying to poll the netlink
socket for events.
Add this option. Now, if the timeout is specified as zero, we never actually
read the netlink socket.
Currently, there are no callers who make use of this (by passing
a zero timeout). So, this is no change in existing behavior.
Implement nmtstp_assert_wait_for_link() and nmtstp_assert_wait_for_link_until()
as macros, based on nmtst_assert_nonnull().
This way, the assertion will report a more helpful file:line location,
instead of being somewhere nested inside test-common.c.
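The pattern is roughly the following (a sketch; the real macros take
more arguments):

    /* As macros, the assertion expands at the caller's location, so a
     * failure reports the test's own file:line. */
    #define nmtst_assert_nonnull(cmd) \
        ({ \
            typeof (cmd) _result = (cmd); \
            \
            g_assert (_result != NULL); \
            _result; \
        })

    #define nmtstp_assert_wait_for_link(...) \
        nmtst_assert_nonnull (nmtstp_wait_for_link (__VA_ARGS__))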
Due to a bug, the current rc-kernel emits the first netlink
notification about tun devices before the device is initialized.
Hence, the content of the message is bogus. If the message
looks like such a case, re-request it right away.
Now that kernel supports providing information about tun/tap devices
via netlink, make use of it.
Also, enable the hack that:
- when we first see a link that has no lnk data, we refetch
it on the assumption that the kernel just didn't send it
the first time.
For old kernels that do not yet support tun properties on netlink,
this means that we will always refetch the link once, the first
time we see it. I think that is acceptable, and the more correct
behavior for newer kernels that do support it.
Rework the code to if-else-if, so that we don't schedule the same
DELAYED_ACTION_TYPE_REFRESH_LINK instance multiple times.
Note that delayed_action_schedule() would already check that
no duplicates are scheduled, but this way we avoid the extra check.
It doesn't make sense for NetworkManager to add non-persistent tun
devices; likewise, only the types IFF_TUN and IFF_TAP are supported.
Assert that the values are as expected.
Switch from "pi on|off" to optinally printing "pi" to indicate
whether the flag is set. That follows ip-tuntap syntax and is
more familiar:
$ ip tuntap help
Usage: ip tuntap { add | del | show | list | lst | help } [ dev PHYS_DEV ]
[ mode { tun | tap } ] [ user USER ] [ group GROUP ]
[ one_queue ] [ pi ] [ vnet_hdr ] [ multi_queue ] [ name NAME ]
Where: USER := { STRING | NUMBER }
GROUP := { STRING | NUMBER }
Also, print the "persist" flag.
Specify a reason when creating active connections. The reason will be
used in the next commit to tell whether slaves must be reconnected or
not if a master has autoconnect-slaves=yes.
- instead of allocating memory separately for the @tag (key)
and ChainData (data), store the tag also inside ChainData.
- instead of adding two separate key and value items to GHashTable,
use g_hash_table_add(), which is optimized for this case.
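A minimal sketch of the pattern (field names are made up):

    #include <glib.h>

    typedef struct {
        gpointer tag;   /* previously a separately allocated key */
        gpointer data;
    } ChainData;

    static guint
    _chain_data_hash (gconstpointer ptr)
    {
        return g_direct_hash (((const ChainData *) ptr)->tag);
    }

    static gboolean
    _chain_data_equal (gconstpointer a, gconstpointer b)
    {
        return ((const ChainData *) a)->tag == ((const ChainData *) b)->tag;
    }

    static void
    _chain_data_track (GHashTable *table, gpointer tag, gpointer data)
    {
        ChainData *d;

        d = g_new0 (ChainData, 1);
        d->tag = tag;
        d->data = data;
        /* @d serves as both key and value: one entry, one allocation */
        g_hash_table_add (table, d);
    }

The table would be created with g_hash_table_new_full (_chain_data_hash,
_chain_data_equal, g_free, NULL), so each entry is freed exactly once
via the key-destroy function.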
auth_call_complete() had two callers: once from the idle handler, and
once from pk_call_cb(). The conditions are slightly different, so split
the function in two. For one, this allows us to unset the obsolete
call_idle_id.
NM_CONTROLLED=no has the primary use of marking devices as unmanaged.
For that to work, the ifcfg file must contain either a MAC address,
an interface-name, or s390-subchannels that match a device.
In case the profile doesn't contain such specifiers, the profile
is ignored and a warning is logged:
<warn> [1522849679.7866] ifcfg-rh: loading "/etc/sysconfig/network-scripts/ifcfg-ens99" fails: NM_CONTROLLED was false but device was not uniquely identified; device will be managed
Downgrade this warning to a debug message. It's not unreasonable
that a user marks an ifcfg file with NM_CONTROLLED=no to avoid
NetworkManager handling it. Yes, that way, the user did not explicitly
mark a device as unmanaged. But NetworkManager will ignore the profile,
as the user might reasonably desire. No need to warn about that.
Kernel does not allow configuring a route via a gateway if the
gateway is not directly reachable.
For non-manually added routes (e.g. from DHCP), we ignore them as
server configuration errors. For manually added routes, we try to work
around them.
Note that if the user adds a manual route that references a gateway,
maybe he should be required to also add a matching onlink route for
the gateway (or an address that results in a device route), otherwise
the configuration could be considered invalid. That was however not
done historically, and it also seems a rather unhelpful behavior.
NetworkManager should just make it work and not assume anything is
wrong with the configuration. Similarly, for IPv4, the user could
configure the route as onlink; however, that still requires extra
configuration of which the user might not be aware.
This would apply, for example, when a connection has method=auto,
and would obtain the routes automatically. It seems sensible to
allow the user to add a route via the gateway, if he ~knows~ that
this particular network will provide such a configuration via DHCP.
In the past, however, we tried not to automatically add a device route,
but instead to see whether we would get a suitable route via DHCP. If we
didn't get such a route, we would fail the connection.
However, this is really very hard to get right.
We call ip_config_merge_and_apply() possibly before receiving automatic
IP configuration (commit 7070d17ced, "device: reset
@con_ip6_config on failure before RA"). In this case, we could not yet
configure the route. Instead, we also cannot fail (yet), because we should
wait to see whether we will receive a route that makes this configuration
feasible.
That is hard to get right. How long should we wait? If we get a DHCP lease
and still cannot add the route, should we fail the IP configuration or wait
longer for another lease? Worse, if we decide to fail the IP configuration,
it might not fail the entire activation. Instead, we would only mark the
current address family as failed. If we later get a DHCP lease, should we
retry to add the route again? -- probably yes. If we still fail, we would
need to keep the IP configuration in failed state, regardless that DHCP
succeeded. Part of the problem is that we are bad at tracking the
failed state per IP method. So, if manual configuration fails but DHCP
succeeds, we get the state wrong. That should be fixed separately, but it
just shows how hard it is to handle a route that we currently cannot
add: waiting for something that might never come, but still having to
fail at some point.
Instead, if we cannot add a route due to a missing onlink gateway,
just retry and add the /32 or /128 direct route ourselves.
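The retry could look roughly like this (a hedged sketch; the types and
platform_route4_add() are made up):

    #include <glib.h>

    typedef struct {
        guint32 network;
        guint32 gateway;   /* 0 means a direct (device) route */
        guint8  plen;
    } Route4;

    /* hypothetical wrapper around the platform route-add operation */
    static gboolean platform_route4_add (int ifindex, const Route4 *route);

    static gboolean
    route4_add_with_workaround (int ifindex, const Route4 *route)
    {
        if (platform_route4_add (ifindex, route))
            return TRUE;

        /* kernel rejected the route because the gateway is not
         * directly reachable: add a /32 device route to the gateway
         * (for IPv6 this would be a /128)... */
        Route4 direct = {
            .network = route->gateway,
            .plen    = 32,
            .gateway = 0,
        };
        if (!platform_route4_add (ifindex, &direct))
            return FALSE;

        /* ...and retry the original route */
        return platform_route4_add (ifindex, route);
    }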
Note that for IPv6 routes that have a "src" address which is still
TENTATIVE, we also cannot currently add the route and retry later.
However, that is fundamentally different, because:
- the configuration here is correct, it's only that the address
didn't yet pass IPv6 DAD and kernel is being unhelpful (rh#1457196).
- we only have to wait a few seconds for DAD to complete or fail.
So, it's easy to implement this sensibly.
The device must not directly add addresses or routes. Instead,
it must track the addresses/routes it wants to add in the NMIP6Config.
Otherwise, during reapply, the information is lost and the next
sync will remove them.
Fixes-test: @ipv6_preserve_cached_routes
We already have IP4LL and maybe should re-use that also for IPv6.
However, when adding the prefix route for IPv6 link local addresses,
we want to add it with protocol "kernel", unlike "user" for IPv4.
There is no strong reason for this. I don't think the protocol matters
much. But up to now the kernel automatically added this prefix route, so
as we are going to change that and let NetworkManager add it, keep the
protocol at "kernel".
- indent with spaces for wrapped line
- drop g_return_val_if_fail(). It really shouldn't happen,
and if it does, we can just continue and will hit g_critical()
warnings below, when using the variables.
- get the NMActRequest instance first, and from there the applied
and settings connection. Not the other way around.
nm_device_get_settings_connection() and nm_device_get_applied_connection()
are just convenience functions to get the fields from
act_request.
This allows adjusting the timeout of an existing checkpoint.
The main use case of checkpoints is to have a fail-safe when
configuring the network remotely. By allowing the timeout to be reset,
the user can perform a series of actions and keep bumping the
timeout. That way, the entire series is still guarded by the same
checkpoint, but the user can start with a short timeout and
re-adjust the timeout as he goes along.
The libnm API only implements the async form (at least for now).
Sync methods are fundamentally wrong with D-Bus, and they are probably
not needed. Also, follow the glib convention, where the async form
doesn't have the _async name suffix. Also, accept a D-Bus path
as argument, not a NMCheckpoint instance. The libnm API should
not be more restricted than the underlying D-Bus API. It would
be cumbersome to require the user to lookup the NMCheckpoint
instance first, especially since libnm doesn't provide an efficient
or convenient lookup-by-path method. On the other hand, retrieving
the path from a NMCheckpoint instance is always possible.
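The resulting libnm declarations would look roughly like this (a
sketch following the description above; parameter names are guesses):

    #include <NetworkManager.h>

    void     nm_client_checkpoint_adjust_rollback_timeout (NMClient *client,
                                                           const char *checkpoint_path,
                                                           guint32 add_timeout,
                                                           GCancellable *cancellable,
                                                           GAsyncReadyCallback callback,
                                                           gpointer user_data);

    gboolean nm_client_checkpoint_adjust_rollback_timeout_finish (NMClient *client,
                                                                  GAsyncResult *result,
                                                                  GError **error);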
Introduce a new flag NM_CHECKPOINT_CREATE_FLAG_ALLOW_OVERLAPPING
that allows the creation of overlapping checkpoints. Before, and
by default, checkpoints that reference the same device conflict,
and creating such a checkpoint failed.
Now, allow this. But during rollback, automatically destroy all
overlapping checkpoints that were created after the checkpoint
that is about to be rolled back.
With this, you can create a series of checkpoints and roll them back
individually. With the restriction that once you rolled back to an
older checkpoint, you can no longer roll "forward" to a younger one.
What this implies and what is new here, is that the checkpoint might be
automatically destroyed by NetworkManager before the timeout expires. When
the user later would try to manually destroy/rollback such a checkpoint, it
would fail because the checkpoint no longer exists.
We already do error checking in nm_checkpoint_manager_create(). No need
to split it in two places. Let all error conditions be handled by
nm_checkpoint_manager_create() first, and then once we decide all is
good, nm_checkpoint_new() can no longer fail.
Instead of scheduling one timeout only, let each checkpoint instance
individually schedule a timeout. This has some overhead, but glib
is supposed to make scheduling many timers efficient. Otherwise,
glib should be fixed.
In my opinion, this simplifies the code, because it's up to each
checkpoint to maintain its own timeout.
Later we will also add an AdjustRollbackTimeout operation, which
allows rescheduling the timeout. It also seems slightly simpler
if scheduling of the timeout is done by the NMCheckpoint instance
itself.
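A sketch of the per-instance scheduling (hypothetical names):

    #include <glib.h>

    typedef struct {
        guint timeout_id;
    } CheckpointPrivate;

    static gboolean
    _timeout_cb (gpointer user_data)
    {
        CheckpointPrivate *priv = user_data;

        priv->timeout_id = 0;
        /* trigger rollback and destroy this checkpoint here */
        return G_SOURCE_REMOVE;
    }

    static void
    _timeout_set (CheckpointPrivate *priv, guint32 rollback_timeout_sec)
    {
        /* AdjustRollbackTimeout simply replaces the pending source */
        if (priv->timeout_id)
            g_source_remove (priv->timeout_id);
        priv->timeout_id = rollback_timeout_sec
            ? g_timeout_add_seconds (rollback_timeout_sec, _timeout_cb, priv)
            : 0;
    }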