Our NML3Cfg instance is the IP configuration manager of one ifindex.
Often users have an NML3Cfg instance at hand, but they still need to
react to platform signals. Instead of requiring those users to register
their own signal handler (which would also receive notifications about
uninteresting interfaces), re-emit the signal from NML3Cfg.
We already had NM_L3_CONFIG_NOTIFY_TYPE_PLATFORM_CHANGE_ON_IDLE which
does something similar, but collects multiple changes and emits them
on an idle handler.
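For illustration, a subscriber could then look roughly like this (the signal
name "platform-change" and the callback signature are made up for this
sketch, not the actual NML3Cfg API):

    /* connect once to the NML3Cfg of our ifindex; only changes for that
     * interface are delivered, so no filtering is needed in the callback */
    static void
    platform_change_cb (NML3Cfg *l3cfg, guint obj_type_flags, gpointer user_data)
    {
        /* react to the change on our interface */
    }

    g_signal_connect (l3cfg, "platform-change",
                      G_CALLBACK (platform_change_cb), self);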
Run:
./contrib/scripts/nm-code-format.sh -i
./contrib/scripts/nm-code-format.sh -i
Yes, it needs to run twice because the first run doesn't yet produce the
final result.
Signed-off-by: Antonio Cardace <acardace@redhat.com>
Currently all NMDevice instances register to the platform change signals;
then, if a signal for their IP ifindex appears, they schedule a task on
an idle handler. That is wasteful.
NML3Cfg already gets a notification on an idle handler and can just re-emit
it to the respective listeners.
With this, there is only one subscriber to the platform signals (NMNetns),
which then multiplexes the signals to the right NML3Cfg instances, and
onwards to their listeners.
Add a better way of tracking the shared IP addresses that are in use.
This will replace NMDevice's usage of a global hash table. For one, the
API is more formalized with reserve() and release() functions.
Also, it ties the used IP addresses to the netns, which is more
correct (relevant in the future, when we may support multiple netns).
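A rough sketch of how usage could look (the function names here are
illustrative, not necessarily the actual API):

    /* reserve a shared IP address for this device; the netns keeps track
     * of which addresses are currently handed out */
    handle = nm_netns_shared_ip_reserve (netns);

    /* ... use the address for the shared/masquerade setup ... */

    /* release it again, so the address can be reused by other devices */
    nm_netns_shared_ip_release (handle);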
We want to allow the user to externally remove IP addresses
and routes without NetworkManager re-adding them until a full reapply
happens. For that, we need to keep track of IP addresses that were
present, but no longer are.
The NML3Cfg instance tracks and prepares the IP configuration.
However, that is also partly exposed on other objects, like
NMIP4Config's "route-data" property.
Add an API, so that NMIP4Config can register itself to be notified
when something relevant changes.
This is an alternative to standard GObject properties and signals, which
often seem more effort than they are worth. That is because, in this case,
NMIP4Config.route-data has no other task than to re-emit the signal.
So, to implement that with GObject properties/signals, we would have to
add a property/signal to NML3Cfg, subscribe to it from NMIP4Config,
and re-emit the signal. An alternative is to bind properties, but that
would still be quite some extra code, and it is unclear that it would be
simpler. Not to mention the overhead, as bindings are themselves full
GObject instances that register to and emit signals by name.
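For illustration, the registration API could have roughly this shape (type
and function names are made up; the real API may differ):

    typedef void (*NML3CfgNotifyFcn) (NML3Cfg *l3cfg,
                                      guint    notify_type,
                                      gpointer user_data);

    void nm_l3cfg_register_notify   (NML3Cfg *l3cfg,
                                     NML3CfgNotifyFcn callback,
                                     gpointer user_data);
    void nm_l3cfg_unregister_notify (NML3Cfg *l3cfg,
                                     NML3CfgNotifyFcn callback,
                                     gpointer user_data);

NMIP4Config registers one callback and re-emits its own notification from
there, without any intermediate GObject property or signal.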
We need to react to platform changes. Also, we usually want to delay the
reaction to an idle handler.
Instead of subscribing each NML3Cfg instance itself to platform changes,
let only NMNetns do that. The goal is of course that each platform event
only needs to notify the affected NML3Cfg instance, which collects the
events and schedules them on the idle handler.
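Conceptually, the dispatch in NMNetns could look like this (a simplified
sketch; the hash table field and the forwarding function are assumptions):

    static void
    platform_signal_cb (NMPlatform *platform,
                        int         obj_type_i,
                        int         ifindex,
                        gpointer    platform_object,
                        int         change_type_i,
                        gpointer    user_data)
    {
        NMNetns *self = user_data;
        NML3Cfg *l3cfg;

        /* forward the event only to the NML3Cfg of the affected ifindex */
        l3cfg = g_hash_table_lookup (self->l3cfg_by_ifindex,
                                     GINT_TO_POINTER (ifindex));
        if (l3cfg)
            nm_l3cfg_platform_change (l3cfg, obj_type_i, platform_object,
                                      change_type_i);
    }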
IP addresses, routes, TC and QDiscs are all tied to a certain interface.
So when NetworkManager manages an interface, it can be confident that
all related entries should be managed, deleted and modified by NetworkManager.
Routing policy rules are global. For that we have NMPRulesManager which
keeps track of whether NetworkManager owns a rule. This allows multiple
connection profiles to specify the same rule, and NMPRulesManager can
consolidate this information to know whether to add or remove the rule.
NMPRulesManager also supports explicitly blocking a rule by
tracking it with a negative priority. However, that is still unused at
the moment. All that devices do is add rules (track with positive
priority) and remove them (untrack) once the profile gets deactivated.
As rules are not exclusively owned by NetworkManager, NetworkManager
tries not to interfere with rules that it knows nothing about. That
means in particular, when NetworkManager starts it will "weakly track"
all rules that are present. "weakly track" is mostly interesting for two
cases:
- when NMPRulesManager had the same rule explicitly tracked (added) by a
device, then deactivating the device will leave the rule in place.
- when NMPRulesManager had the same rule explicitly blocked (tracked
with negative priority), then it would restore the rule when that
block gets removed (as said, currently nobody actually does this).
Note that when restarting NetworkManager, the device may stay up and
the rules are kept. However, after restart NetworkManager no longer knows
that it previously added such a rule, so it would only weakly track it and
never remove it again.
That is a problem. Avoid it: whenever explicitly tracking a rule, also
make sure to no longer weakly track it. Most likely this rule was
indeed previously managed by NetworkManager. If it really was a rule
added externally, then the user should choose distinct
rule priorities to avoid such conflicts altogether.
We no longer add these. If you use Emacs, configure it yourself.
Also, due to our "smart-tab" usage, the editor does a subpar
job handling our tabs anyway. On the upside, every user can choose
whatever tab-width they prefer. If "smart-tabs" are used properly
(as we do), every tab-width will work.
No manual changes, just ran commands:
F=($(git grep -l -e '-\*-'))
sed '1 { /\/\* *-\*- *[mM]ode.*\*\/$/d }' -i "${F[@]}"
sed '1,4 { /^\(#\|--\|dnl\) *-\*- [mM]ode/d }' -i "${F[@]}"
Check remaining lines with:
git grep -e '-\*-'
The ultimate purpose of this is to clean up our files and eventually use
SPDX license identifiers. For that, first get rid of the boilerplate lines.
From the files under "shared/nm-utils" we build an internal library
that provides glib-based helper utilities.
Move the files of that basic library to a new subdirectory
"shared/nm-glib-aux" and rename the helper library "libnm-core-base.la"
to "libnm-glib-aux.la".
Reasons:
- the name "utils" is overused in our code-base. Everything's an
"utils". Give this thing a more distinct name.
- there were additional files under "shared/nm-utils", which are not
part of this internal library "libnm-utils-base.la". All the files
that are part of this library should be together in the same
directory, but files that are not, should not be there.
- the new name should better convey what this library is and what it isn't:
it's a set of utilities and helper functions that extend glib with
functionality that we commonly need.
There are still some files left under "shared/nm-utils". They have less
of a unifying purpose to be in their own directory, so I leave them there
for now. But at least they are separate from "shared/nm-glib-aux",
which has a very clear purpose.
(cherry picked from commit 80db06f768)
Policy routing rules are global and, unlike routes, not tied to an interface by ifindex.
That means, while we take full control over all routes of an interface during a sync,
we need to consider that multiple parties can contribute to the global set of rules.
That might be multiple connection profiles providing the same rule, or rules that are added
externally by the user. NMPRulesManager mediates between them.
This is done by NMPRulesManager "tracking" rules.
Rules that are not tracked by NMPRulesManager are completely ignored (and
considered externally added).
When tracking a rule, the caller provides a track-priority. If multiple
parties track a rule, then the highest (absolute value of the) priority
wins.
If the highest track-priority is positive, NMPRulesManager will add the rule if
it's not present.
When the highest track-priority is negative, then NMPRulesManager will remove the
rule if it's present (enforce its absence).
The complicated part is when a rule that was previously tracked is no
longer tracked. In that case, we need to restore the previous state.
If NetworkManager added the rule earlier, then when the rule is untracked
NMPRulesManager will remove it again (restore its previous absent
state).
By default, if NetworkManager had a negative tracking-priority and removed the
rule earlier (enforced it to be absent), then when the rule becomes no
longer tracked, NetworkManager will not restore the rule.
Consider: the user adds a rule externally, and then activates a profile that
enforces the absence of the rule (causing NetworkManager to remove it).
When deactivating the profile, by default NetworkManager will not
restore such a rule! It's unclear whether that is a good idea, but it's
also unclear why the rule is there and whether NetworkManager should
really restore it.
Add weakly tracked rules to account for that. A tracking-priority of
zero indicates such weakly tracked rules. The only difference between an untracked
rule and a weakly tracked rule is that when NetworkManager earlier removed the
rule (due to a negative tracking-priority), it *will* restore weakly
tracked rules when the rule is no longer (negatively) tracked.
And it attempts to do that only once.
Likewise, if the rule is weakly tracked and already exists when
NMPRulesManager starts positively tracking it, then it will not
remove it again when it is no longer positively tracked.
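The decision rule can be summarized in a few lines of self-contained
pseudo-implementation (not NetworkManager code, just the rule described
above):

    #include <stdlib.h>

    typedef enum { RULE_IGNORE, RULE_ADD, RULE_REMOVE } RuleAction;

    /* among all parties tracking the same rule, the priority with the
     * highest absolute value wins; its sign decides whether the rule is
     * enforced present or absent; zero only marks it as weakly tracked */
    static RuleAction
    rule_action (const int *track_priorities, size_t n)
    {
        int best = 0;
        size_t i;

        for (i = 0; i < n; i++) {
            if (abs (track_priorities[i]) > abs (best))
                best = track_priorities[i];
        }
        if (best > 0)
            return RULE_ADD;      /* ensure the rule is present */
        if (best < 0)
            return RULE_REMOVE;   /* enforce absence of the rule */
        return RULE_IGNORE;       /* untracked or only weakly tracked */
    }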
All that setting track-default does is call nmp_rules_manager_track_default()
when the rules are first accessed.
That is not the right API. Since nmp_rules_manager_track_default() is already public
API (good), every caller that wants this behavior should track these rules explicitly.
Routing rules, unlike addresses or routes, are not tied to an interface.
NetworkManager thinks in terms of connection profiles. That works well
for addresses and routes, as one profile configures addresses and routes
for one device. For example, when activating a profile on a device, the
configuration does not interfere with the addresses/routes of other
devices. That is not the case for routing rules, which are global, netns-wide
entities.
When one connection profile specifies rules, then this per-device configuration
must be merged with the global configuration. And when a device disconnects later,
the rules must be removed.
Add a new NMPRulesManager API to track/untrack routing rules. Devices can
register/add there the routing rules they require, and the sync method will
apply the configuration. This is implemented on top of NMPlatform's
caching API.
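A hypothetical usage sketch from a device's point of view (the exact
function names and signatures here are assumptions):

    /* on activation: register the rules the profile requests and sync */
    for (i = 0; i < n_rules; i++)
        nmp_rules_manager_track (rules_manager, &rules[i],
                                 10 /* track-priority */, self);
    nmp_rules_manager_sync (rules_manager);

    /* on deactivation: untrack them again and sync, which removes rules
     * that are no longer wanted by anybody */
    for (i = 0; i < n_rules; i++)
        nmp_rules_manager_untrack (rules_manager, &rules[i], self);
    nmp_rules_manager_sync (rules_manager);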
Remove NMDefaultRouteManager. Instead, add the default-route to the
NMIP4Config/NMIP6Config instance.
This basically reverts commit e8824f6a52.
We added NMDefaultRouteManager because we used the equivalent of `ip
route replace` when configuring routes. That would replace default-routes
on other interfaces, so we needed a central manager to coordinate routes.
Now, we use the equivalent of `ip route append` to configure routes,
and each interface can configure routes independently.
In NMDevice, when creating the default-route, ignore @auto_method for
external devices. We shall not touch these devices.
Especially the code in NMPolicy regarding selection of the best-device
seems wrong. It probably needs further adjustments in the future.
Especially get_best_ip_config() should be replaced, because the
distinction between VPN and devices seems wrong to me.
Thereby, remove the @ignore_never_default argument. It was added by
commit bb75026004, but I don't think it's
needed anymore.
This brings another change. Now that we track default-routes in
NMIP4Config/NMIP6Config, they are also exposed on D-Bus like regular
routes. I think that makes sense, but it is a change in behavior, as
previously such routes were not exposed there.
Previously, we would add exclusive routes via netlink message flags
NLM_F_CREATE | NLM_F_REPLACE for RTM_NEWROUTE. Similar to `ip route replace`.
Using that form of RTM_NEWROUTE message, we could only add a certain
route with a certain network/plen,metric triple once. That was already
hugely inconvenient, because
- when configuring routes, multiple (managed) interfaces may get
conflicting routes (multihoming). Only one of the routes can be actually
configured using `ip route replace`, so we need to track routes that are
currently shadowed.
- when configuring routes, we might replace externally configured
routes on unmanaged interfaces. We should not interfere with such
routes.
That was worked around by having NMRouteManager (and NMDefaultRouteManager).
NMRouteManager would keep a list of the routes which NetworkManager would like
to configure, even if momentarily being unable to do so due to conflicting routes.
This worked mostly well but was complicated. It involved bumping metrics to
avoid conflicts for device routes, as we might require them for gateway routes.
Drop that now. Instead, use the equivalent of `ip route append` to configure
routes. This allows NetworkManager to configure (almost) all routes that we care about.
Especially, it can configure all routes on a managed interface, without
replacing/interfering with routes on other interfaces. Hence, NMRouteManager
becomes obsolete.
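For illustration, the difference boils down to the flags of the
RTM_NEWROUTE netlink request (standard <linux/netlink.h> flags; the rest
of the message construction is omitted here):

    #include <linux/netlink.h>
    #include <linux/rtnetlink.h>

    struct nlmsghdr nlh = { 0 };

    nlh.nlmsg_type = RTM_NEWROUTE;

    /* old behavior, like `ip route replace`: replaces a conflicting route,
     * possibly one configured on another interface or added externally */
    nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_CREATE | NLM_F_REPLACE;

    /* new behavior, like `ip route append`: adds the route alongside
     * existing ones, so each interface can be configured independently */
    nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_CREATE | NLM_F_APPEND;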
In practice it is a bit more complicated because:
- when adding an IPv4 address, kernel will automatically create a device route
for the subnet. We should avoid that by using the IFA_F_NOPREFIXROUTE flag for
IPv4 addresses (still to-do). But as kernel may not support that flag for IPv4
addresses yet (and we don't require such a kernel yet), we still need functionality
similar to nm_route_manager_ip4_route_register_device_route_purge_list().
This functionality is now handled via nm_platform_ip4_dev_route_blacklist_set().
- trying to configure an IPv6 route with a source address will be rejected
by kernel as long as the address is tentative (see related bug rh#1457196).
Preferably, NMDevice would keep the list of routes which should be configured,
while kernel would have the list of what actually is configured. There is a
feedback loop where both affect each other (for example, when externally deleting
a route, NMDevice must forget about it too). Previously, NMRouteManager would have
the task of remembering all routes which we currently want to configure, but cannot
due to conflicting routes.
We get rid of that, because now we configure non-exclusive routes. However, we still
need to remember IPv6 routes with a source address that cannot be
configured yet. Hence, we will need to keep track of routes that
currently cannot be configured, but later may be.
That is still not done yet, as NMRouteManager didn't handle this
correctly either.
NMIP4Config, NMIP6Config, and NMPlatform shall share one
NMDedupMultiIndex instance.
For that, pass an NMDedupMultiIndex instance to NMPlatform and NMNetns.
NMNetns then passes it on to NMDevice, NMDhcpClient, NMIP4Config and NMIP6Config.
So currently NMNetns is the access point to the shared NMDedupMultiIndex
instance, and it gets it from its NMPlatform instance.
The NMDedupMultiIndex instance is really a singleton, we don't want
multiple instances of it. However, for testing, instead of adding a
singleton instance, pass the instance explicitly around.
This also ensures that we own a reference to the
NMPlatform, NMRouteManager and NMDefaultRouteManager
instances. See bug rh#1440089 where we might access
the singleton getter after destroying the singleton
instance of NMRouteManager. This is prevented by
keeping a reference to those instances -- indirectly
via the netns instance.
Later, we may add support for multiple namespaces. Then it might
make sense to swap the NMNetns instance of a device when moving
the device between namespaces.
Also, drop the use of singleton instances.
https://bugzilla.redhat.com/show_bug.cgi?id=1440089
(cherry picked from commit c48a19b7c6)
NMPlatform, NMRouteManager and NMDefaultRouteManager are singleton
instances. Users of those are for example NMDevice, which registers
to GObject signals of both NMPlatform and NMRouteManager.
Hence, as NMDevice:dispose() disconnects the signal handlers, it must
ensure that those singleton instances live longer than the NMDevice
instance. That is usually accomplished by having users of singleton
instances own a reference to those instances.
For NMDevice that effectively means that it shall own a reference to
several singletons.
NMPlatform, NMRouteManager, and NMDefaultRouteManager are all
per-namespace. In general it doesn't make sense to have more than
one instance of these per namespace. Note that currently we don't
support multiple namespaces yet. If we ever support multiple
namespaces, then an NMDevice would have a reference to all of these
manager instances. Hence, introduce a new class NMNetns which bundles
them together.
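Roughly, the idea is simply this (a sketch, not the actual declaration):

    /* NMNetns bundles the per-namespace singletons; users like NMDevice
     * then only need to keep one reference (to the NMNetns) to keep all
     * of them alive */
    struct _NMNetns {
        GObject                parent;
        NMPlatform            *platform;
        NMRouteManager        *route_manager;
        NMDefaultRouteManager *default_route_manager;
    };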
(cherry picked from commit 0af2f5c28b)