Background
==========
Imagine you run a container on your machine. Then the routing table
might look like:
default via 10.0.10.1 dev eth0 proto dhcp metric 100
10.0.10.0/28 dev eth0 proto kernel scope link src 10.0.10.5 metric 100
[...]
10.42.0.0/24 via 10.42.0.0 dev flannel.1 onlink
10.42.1.2 dev cali02ad7e68ce1 scope link
10.42.1.3 dev cali8fcecf5aaff scope link
10.42.2.0/24 via 10.42.2.0 dev flannel.1 onlink
10.42.3.0/24 via 10.42.3.0 dev flannel.1 onlink
That is, there are other interfaces with their own subnets and specific routes.
If nm-cloud-setup now configures rules:
0: from all lookup local
30400: from 10.0.10.5 lookup 30400
32766: from all lookup main
32767: from all lookup default
and
default via 10.0.10.1 dev eth0 table 30400 proto static metric 10
10.0.10.1 dev eth0 table 30400 proto static scope link metric 10
then traffic from 10.0.10.5 to these other subnets is also sent via the
default route in table 30400, instead of the specific routes from the
main table.
This container example is just one case where this is a problem. In
general, if you have specific routes on another interface, then the
default route in the 30400+ table will interfere badly.
The idea of nm-cloud-setup is to automatically configure the network for
secondary IP addresses. When the user has special requirements, they
should disable nm-cloud-setup and configure whatever they want.
But the container use case is popular and important. It is not something
where the user actively configures the network. This case needs to work better,
out of the box. In general, nm-cloud-setup should work better with the
existing network configuration.
Change
======
Add new routing tables 30200+ with the individual subnets of the
interface:
10.0.10.0/24 dev eth0 table 30200 proto static metric 10
[...]
default via 10.0.10.1 dev eth0 table 30400 proto static metric 10
10.0.10.1 dev eth0 table 30400 proto static scope link metric 10
Also add routing rules with priority 30200+ (which thus take precedence
over the 30400+ rules); they select these tables based on the source
address:
30200: from 10.0.10.5 lookup 30200
These perform source-based routing for the subnets on these
interfaces.
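For illustration only (nm-cloud-setup configures this via
NetworkManager rather than by spawning ip), the above corresponds
roughly to:

ip route add 10.0.10.0/24 dev eth0 table 30200 proto static metric 10
ip rule add priority 30200 from 10.0.10.5 table 30200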
Then, add a rule with priority 30350
30350: lookup main suppress_prefixlength 0
which processes the routes from the main table but ignores the default
routes. 30350 was chosen because it sits between the 30200+ and 30400+
rules, leaving a range for the user to configure their own rules.
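Expressed as an ip command, again only for illustration, this rule
would roughly be

ip rule add priority 30350 table main suppress_prefixlength 0

where suppress_prefixlength 0 discards lookup results whose matching
route has prefix length 0, i.e. exactly the default routes.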
Then, as before, the rules 30400+ again look at the corresponding 30400+
table, to find a default route.
Finally, process the main table again, this time honoring the default
route. That covers packets with a different source address, which do
not match any of the 30200+/30400+ rules.
This change means that source-based routing is used for the subnets
that are configured on the interface and for the default route, while
any more specific routes in the main table are preferred over that
default route.
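Taken together, and only as a sketch with the example addresses from
above, the resulting rules would look roughly like:

0:      from all lookup local
30200:  from 10.0.10.5 lookup 30200
30350:  from all lookup main suppress_prefixlength 0
30400:  from 10.0.10.5 lookup 30400
32766:  from all lookup main
32767:  from all lookup default

With that, the lookup from the Background section now resolves via the
specific route from the main table, roughly:

$ ip route get 10.42.1.2 from 10.0.10.5
10.42.1.2 from 10.0.10.5 dev cali02ad7e68ce1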
Apparently Amazon Linux solves this differently, by not configuring a
routing table for addresses on interface "eth0". That might be an
alternative, but it's not clear to me what is special about eth0 to
warrant this treatment. It also would imply that we somehow recognize
this primary interface. In practice that would be doable by selecting
the interface with "iface_idx" zero.
Instead, choose this approach. It is remotely similar to what WireGuard
does for configuring the default route ([1]); however, WireGuard uses
fwmark to match the packets instead of the source address.
[1] https://www.wireguard.com/netns/#improved-rule-based-routing
The table number is chosen as 30400 + iface_idx. That is, the range is
limited and we shouldn't handle more than 100 devices. Add a check for
that and error out if the limit is exceeded.
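For illustration, and assuming the 30200+ tables/rules likewise use
iface_idx as offset, the mapping would be:

iface_idx 0  -> rules/tables 30200 and 30400
iface_idx 1  -> rules/tables 30201 and 30401
[...]
iface_idx 99 -> rules/tables 30299 and 30499 (the last supported index)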
The routes/rules that are configured are independent of the order in
which we process the devices, because they use the "iface_idx" wherever
there is ambiguity. Still, it feels nicer to always process them in a
defined order, sorted by iface_idx. The iface_idx is probably something
useful and stable, provided by the cloud provider. E.g. it's the order
in which interfaces are exposed in the meta data.
get_config() returns an NMCSProviderGetConfigResult structure, and the
main part of the data is the GHashTable of MAC addresses and
NMCSProviderGetConfigIfaceData instances.
Let NMCSProviderGetConfigIfaceData also hold a reference to the MAC
address. This way, I'll be able to create a (sorted) list of interface
data that also contains the MAC address.
nm-cloud-setup automatically configures the network. That may conflict
with what the user wants. In case the user configures some specific
setup, they are encouraged to disable nm-cloud-setup (and its
automatism).
Still, what we do by default matters and should align well with the
user's expectations. Configuring policy routing and a higher priority
table (30400+) that hijacks the traffic can cause problems.
If the system only has one IPv4 address and one interface, then there
is no point in configuring policy routing at all. Detect that, and skip
the change in that case.
Note that of course we need to handle the case where previously multiple
IP addresses were configured and an update gives only one address. In
that case we need to clear the previously configured rules/routes. The
patch achieves this.
Now that we return a struct from get_config(), we can have system-wide
properties returned.
Let it count and cache the number of valid iface_datas.
Currently that is not yet used, but it will be.
Returning a struct seems easier to understand, because then the result
is typed.
Also, we might return additional results, which are system wide and not
per-interface.
Naming is important, because the name of a thing should give you a good
idea what it does. Also, to find a thing, it needs a good name in the
first place. But naming is also hard.
Historically, some strv helper API was named nm_utils_strv_*(), and
some API had a leading underscore (as it is internal API). This was all
inconsistent. Do some renaming and try to unify things.
We drop the leading underscore if the function is just a regular
(internal) helper, but not, for example, from _nm_strv_find_first(),
because that is the implementation of nm_strv_find_first().
- _nm_utils_strv_cleanup() -> nm_strv_cleanup()
- _nm_utils_strv_cleanup_const() -> nm_strv_cleanup_const()
- _nm_utils_strv_cmp_n() -> _nm_strv_cmp_n()
- _nm_utils_strv_dup() -> _nm_strv_dup()
- _nm_utils_strv_dup_packed() -> _nm_strv_dup_packed()
- _nm_utils_strv_find_first() -> _nm_strv_find_first()
- _nm_utils_strv_sort() -> _nm_strv_sort()
- _nm_utils_strv_to_ptrarray() -> nm_strv_to_ptrarray()
- _nm_utils_strv_to_slist() -> nm_strv_to_gslist()
- nm_utils_strv_cmp_n() -> nm_strv_cmp_n()
- nm_utils_strv_dup() -> nm_strv_dup()
- nm_utils_strv_dup_packed() -> nm_strv_dup_packed()
- nm_utils_strv_dup_shallow_maybe_a() -> nm_strv_dup_shallow_maybe_a()
- nm_utils_strv_equal() -> nm_strv_equal()
- nm_utils_strv_find_binary_search() -> nm_strv_find_binary_search()
- nm_utils_strv_find_first() -> nm_strv_find_first()
- nm_utils_strv_make_deep_copied() -> nm_strv_make_deep_copied()
- nm_utils_strv_make_deep_copied_n() -> nm_strv_make_deep_copied_n()
- nm_utils_strv_make_deep_copied_nonnull() -> nm_strv_make_deep_copied_nonnull()
- nm_utils_strv_sort() -> nm_strv_sort()
Note that no names are swapped and none of the new names existed
previously. That means all the new names are really new, which makes it
simpler to find errors due to this larger refactoring. E.g. if you
backport a patch from after this change to an old branch, you'll get a
compiler error and notice that something is missing.
nm-cloud-setup automatically detects routes, addresses and rules and
configures them on the device using the ephemeral Reapply() API. That
is, it does not modify the existing profile (on disk), but changes the
runtime configuration only.
As such, it used to wipe otherwise statically configured IP addresses,
routes and rules. That seems unnecessary. Let's keep the settings that
come from the (persistent) profile configuration.
There is of course the problem that nm-cloud-setup doesn't really
understand the existing IP configuration, and it can only hope that
it can be meaningfully combined with what nm-cloud-setup wants to
configure. This should cover most simple cases; for more complex
setups, the user should probably disable nm-cloud-setup and configure
the network explicitly to their liking.
https://bugzilla.redhat.com/show_bug.cgi?id=1971527
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/893
The code never set "iface_get_config->cidr_addr", despite
setting "cidr_prefix" and "has_cidr". As a result, cloud-setup
would think that the subnet is "0.0.0.0/$PLEN", and calculate
the gateway as "0.0.0.1".
Consequently, it would add a default route to table 30400 via 0.0.0.1,
which is obviously wrong.
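With the example addresses from the Background section, the bogus
route would have looked something like

default via 0.0.0.1 dev eth0 table 30400 proto static metric 10

instead of the intended route via 10.0.10.1 (the address derived from
the correct subnet).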
How to detect the right gateway? Let's try to obtain the subnet also
via the meta data. That seems mostly correct, except that we only
access the subnet at index 0. What if there are multiple ones? I don't
know.
https://bugzilla.redhat.com/show_bug.cgi?id=1912236