"Impossible to set rd.ethtool options: invalid format" is not very
clear. Try to explain what is invalid about the format (the interface
name is missing).
"Invalid value for rd.ethtool.autoneg, rd.ethtool.autoneg was not set"
is also confusing. The message gets printed if the autoneg value was
specified on the command line, so "was not set" seems wrong. Maybe the
message meant that the profile value is left at the default (FALSE),
but that isn't very clear.
Reword.
The idea of positional arguments is that they might be extended in the
future. That means there might one day be an option "rd.ethtool=eth0:::foo".
Also, if multiple "rd.ethtool=eth0" options are specified on the command
line, then the autoneg/speed settings should only be set if present.
That means
"rd.ethtool=eth0:on:100 rd.ethtool=eth0:::foo"
should work as expected and first set the autoneg/speed options, while the
second argument only sets "foo" (without resetting autoneg/speed).
To NetworkManager, "autoneg=FALSE && speed=0" means to not configure
these options and to leave in place whatever was configured previously.
That is also the default.
Explicitly configuring "rd.ethtool=eth0:off:0" is thus likely a misconfiguration,
because it tells NetworkManager to not configure the interface.
Note that the user can still configure that, via "rd.ethtool=eth0::",
that is, by omitting all parameters. That is a valid configuration and
causes no warning. The reason to support this silently is so that we
can add more positional arguments in the future, which the user can set
without changing autoneg/speed.
The point of positional arguments is that you can omit them, and that
should be treated as the parameter being set to the default.
So, don't treat "rd.ethtool=eth0" (or "rd.ethtool=eth0:") specially.
Just continue the parsing and take all following positional arguments
as unset.
Don't return early from parsing "autoneg" if there are no additional
arguments.
The behavior should be exactly the same, whether a positional
argument is missing, empty, or set to the default.
That is,
- "rd.ethtool=eth0:on"
- "rd.ethtool=eth0🔛"
- "rd.ethtool=eth0🔛:"
- "rd.ethtool=eth0🔛0:"
should all evaluate to the same thing.
That was already the case in practice, but it was hard to see.
So don't treat missing positional arguments specially and don't return
early. Parse all parameters regardless.
The change is visible when parsing "rd.ethtool=eth0:off:100 rd.ethtool=eth0:on".
Autoneg and speed really belong together, so when we parse the second
argument, we should reset the speed too -- even if it's not present.
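To illustrate, here is a minimal sketch of such a tokenizer -- not
NetworkManager's actual parser, all names are made up -- where a missing
positional argument behaves exactly like an empty one, and where parsing
autoneg always resets speed:

  #include <stdbool.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  typedef struct {
      bool autoneg;
      int  speed;
  } EthtoolOpts;

  /* Returns the next ":"-separated token, or "" when the string is
   * exhausted, so that missing and empty tokens look identical. */
  static const char *
  next_token(const char **iter, char *buf, size_t buflen)
  {
      const char *s = *iter;
      const char *colon;
      size_t      len;

      if (!s)
          return "";
      colon = strchr(s, ':');
      len   = colon ? (size_t) (colon - s) : strlen(s);
      if (len >= buflen)
          len = buflen - 1;
      memcpy(buf, s, len);
      buf[len] = '\0';
      *iter    = colon ? colon + 1 : NULL;
      return buf;
  }

  static void
  parse_rd_ethtool(const char *arg, EthtoolOpts *opts)
  {
      char        buf[64];
      const char *iter = arg;
      const char *tok;

      next_token(&iter, buf, sizeof(buf)); /* the interface name */

      /* autoneg and speed belong together: reset both, then parse. */
      tok           = next_token(&iter, buf, sizeof(buf));
      opts->autoneg = (strcmp(tok, "on") == 0);
      opts->speed   = 0;
      tok           = next_token(&iter, buf, sizeof(buf));
      if (tok[0] != '\0')
          opts->speed = atoi(tok);

      /* future positional arguments would continue here, never
       * returning early on a missing token. */
  }

  int
  main(void)
  {
      const char *examples[] = {"eth0:on", "eth0:on:", "eth0:on::", "eth0:on:0:"};
      EthtoolOpts o;

      for (int i = 0; i < 4; i++) {
          parse_rd_ethtool(examples[i], &o);
          /* all four print: autoneg=1 speed=0 */
          printf("%-12s -> autoneg=%d speed=%d\n", examples[i], o.autoneg, o.speed);
      }
      return 0;
  }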
NMNDisc has two implementations: lndp and fake. Fake only exists as a
stub for unit tests; otherwise it has no purpose. Also, we won't
ever add another implementation besides lndp. If lndp is not suitable, it
would be replaced, but not accompanied by a second implementation.
As such, there is no reason for nm_lndp_ndisc_get_sysctl() to be in
"nm-lndp-ndisc.c". This split does not exist to abstract "nm-ndisc.c"
from NMPlatform. It exists to make it easier to test.
We no longer use tc objects from the platform cache; disable caching
by default.
The only exception where the cache is needed is in tc tests, as we
look into the platform there to check that objects look as expected.
Introduce a construct-only property for platform objects to enable or
disable the caching of tc objects. When disabled, the netlink socket
doesn't receive netlink events for tc objects, and objects are never
added to the cache. This commit doesn't change behavior yet.
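A minimal sketch of what such a construct-only property can look like in
GObject -- the type and property names here ("DemoPlatform", "cache-tc")
are illustrative, not NetworkManager's actual ones:

  #include <glib-object.h>

  #define DEMO_TYPE_PLATFORM (demo_platform_get_type())
  G_DECLARE_FINAL_TYPE(DemoPlatform, demo_platform, DEMO, PLATFORM, GObject)

  struct _DemoPlatform {
      GObject  parent;
      gboolean cache_tc;
  };

  G_DEFINE_TYPE(DemoPlatform, demo_platform, G_TYPE_OBJECT)

  enum { PROP_0, PROP_CACHE_TC };

  static void
  set_property(GObject *object, guint prop_id, const GValue *value, GParamSpec *pspec)
  {
      DemoPlatform *self = DEMO_PLATFORM(object);

      switch (prop_id) {
      case PROP_CACHE_TC:
          /* construct-only: this runs exactly once, during construction */
          self->cache_tc = g_value_get_boolean(value);
          break;
      default:
          G_OBJECT_WARN_INVALID_PROPERTY_ID(object, prop_id, pspec);
      }
  }

  static void
  demo_platform_init(DemoPlatform *self)
  {}

  static void
  demo_platform_class_init(DemoPlatformClass *klass)
  {
      GObjectClass *object_class = G_OBJECT_CLASS(klass);

      object_class->set_property = set_property;
      g_object_class_install_property(
          object_class,
          PROP_CACHE_TC,
          g_param_spec_boolean("cache-tc",
                               NULL,
                               NULL,
                               FALSE, /* caching of tc objects off by default */
                               G_PARAM_WRITABLE | G_PARAM_CONSTRUCT_ONLY
                                   | G_PARAM_STATIC_STRINGS));
  }

The tc tests would then create the platform instance with
g_object_new(DEMO_TYPE_PLATFORM, "cache-tc", TRUE, NULL), while regular
operation leaves caching disabled.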
Stage2 can be called multiple times. Ensure that tc_commit() is only
called the first time. This is important now that tc synchronization
requires clearing all qdiscs and recreating them.
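In pseudo-C, the guard could look like this (the field and function
names are made up for illustration):

  #include <stdbool.h>

  static void
  tc_commit(void)
  {
      /* stands in for the real tc commit logic */
  }

  typedef struct {
      bool tc_committed;
  } DevicePrivate;

  static void
  activate_stage2(DevicePrivate *priv)
  {
      if (!priv->tc_committed) {
          priv->tc_committed = true;
          /* destructive now: clears all qdiscs and recreates them,
           * so it must not run again when stage2 is re-entered */
          tc_commit();
      }
      /* ... the rest of stage2 is safe to repeat ... */
  }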
Update nm_platform_qdisc_sync() and nm_platform_tfilter_sync() to
avoid looking into the platform cache, so that we no longer need to
keep qdiscs and filters in the cache.
There is no API in the kernel to retrieve tc objects only for a specific
interface, so NM had to receive all tc events, even for unmanaged
interfaces. This could cause high CPU usage in some scenarios with
many objects.
Instead, try to delete root qdiscs and filters and then add the known
ones.
Also, combine the two functions, since they are related. In
particular, removing all qdiscs also removes all attached filters.
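Sketched with hypothetical helpers (not the real platform calls), the
new sync strategy is roughly:

  #include <glib.h>

  /* hypothetical stubs standing in for the real netlink operations */
  static void
  qdisc_delete_root(int ifindex)
  { (void) ifindex; }

  static gboolean
  qdisc_add(int ifindex, gpointer qdisc)
  { (void) ifindex; (void) qdisc; return TRUE; }

  static gboolean
  tfilter_add(int ifindex, gpointer tfilter)
  { (void) ifindex; (void) tfilter; return TRUE; }

  static gboolean
  tc_sync(int ifindex, GPtrArray *known_qdiscs, GPtrArray *known_tfilters)
  {
      guint i;

      /* deleting the root qdisc also removes all attached filters,
       * so no cache lookup is needed to find what exists */
      qdisc_delete_root(ifindex);

      for (i = 0; i < known_qdiscs->len; i++) {
          if (!qdisc_add(ifindex, known_qdiscs->pdata[i]))
              return FALSE;
      }
      for (i = 0; i < known_tfilters->len; i++) {
          if (!tfilter_add(ifindex, known_tfilters->pdata[i]))
              return FALSE;
      }
      return TRUE;
  }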
There is always a tradeoff between the convenience of allowing %NULL
(and doing nothing) and strictly requiring the caller to check that the
argument is not %NULL. In this case, it's more convenient to accept
%NULL than to require the callers to check for it.
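As a generic illustration of the chosen convention (the names are made
up, not the function this commit is about):

  #include <stdlib.h>

  typedef struct {
      int dummy;
  } Foo;

  static void
  foo_destroy(Foo *foo)
  {
      /* accepting NULL (and doing nothing) keeps call sites simple:
       * callers can unconditionally pass possibly-unset pointers */
      if (!foo)
          return;
      free(foo);
  }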
Background
==========
Imagine you run a container on your machine. Then the routing table
might look like:
default via 10.0.10.1 dev eth0 proto dhcp metric 100
10.0.10.0/28 dev eth0 proto kernel scope link src 10.0.10.5 metric 100
[...]
10.42.0.0/24 via 10.42.0.0 dev flannel.1 onlink
10.42.1.2 dev cali02ad7e68ce1 scope link
10.42.1.3 dev cali8fcecf5aaff scope link
10.42.2.0/24 via 10.42.2.0 dev flannel.1 onlink
10.42.3.0/24 via 10.42.3.0 dev flannel.1 onlink
That is, there are other interfaces with subnets and specific routes.
If nm-cloud-setup now configures rules:
0: from all lookup local
30400: from 10.0.10.5 lookup 30400
32766: from all lookup main
32767: from all lookup default
and
default via 10.0.10.1 dev eth0 table 30400 proto static metric 10
10.0.10.1 dev eth0 table 30400 proto static scope link metric 10
then traffic to these other subnets will also be routed via the default route.
This container example is just one case where this is a problem. In
general, if you have specific routes on another interface, then the
default route in the 30400+ table will interfere badly.
The idea of nm-cloud-setup is to automatically configure the network for
secondary IP addresses. When the user has special requirements, then
they should disable nm-cloud-setup and configure whatever they want.
But the container use case is popular and important. It is not something
where the user actively configures the network. This case needs to work better,
out of the box. In general, nm-cloud-setup should work better with the
existing network configuration.
Change
======
Add new routing tables 30200+ with the individual subnets of the
interface:
10.0.10.0/28 dev eth0 table 30200 proto static metric 10
[...]
default via 10.0.10.1 dev eth0 table 30400 proto static metric 10
10.0.10.1 dev eth0 table 30400 proto static scope link metric 10
Also add more important routing rules with priority 30200+, which select
these tables based on the source address:
30200: from 10.0.10.5 lookup 30200
These will do source-based routing for the subnets on these
interfaces.
Then, add a rule with priority 30350
30350: lookup main suppress_prefixlength 0
which processes the routes from the main table, but ignores the default
routes. 30350 was chosen, because it's in between the rules 30200+ and
30400+, leaving a range for the user to configure their own rules.
Then, as before, the rules 30400+ again look at the corresponding 30400+
table, to find a default route.
Finally, process the main table again, this time honoring the default
route. That is for packets that have a different source address.
This change means that source-based routing is used for the subnets
that are configured on the interface and for the default route.
However, if there are more specific routes in the main table, they will
be preferred over the default route.
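Putting the pieces together, the full set of rules then looks like this
(with the example address from above):
0:     from all lookup local
30200: from 10.0.10.5 lookup 30200
30350: from all lookup main suppress_prefixlength 0
30400: from 10.0.10.5 lookup 30400
32766: from all lookup main
32767: from all lookup default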
Apparently Amazon Linux solves this differently, by not configuring a
routing table for addresses on interface "eth0". That might be an
alternative, but it's not clear to me what is special about eth0 to
warrant this treatment. It also would imply that we somehow recognize
this primary interface. In practice that would be doable by selecting
the interface with "iface_idx" zero.
Instead, choose this approach. It is remotely similar to what WireGuard
does for configuring the default route ([1]); however, WireGuard uses
fwmark to match the packets instead of the source address.
[1] https://www.wireguard.com/netns/#improved-rule-based-routing
The table number is chosen as 30400 + iface_idx. That is, the range is
limited and we shouldn't handle more than 100 devices. Add a check for
that and error out.
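A sketch of such a check (the constants and names are illustrative):

  #include <stdio.h>

  #define TABLE_BASE    30400
  #define IFACE_IDX_MAX 100 /* tables 30400..30499 */

  static int
  iface_idx_to_table(unsigned iface_idx)
  {
      if (iface_idx >= IFACE_IDX_MAX) {
          fprintf(stderr, "cannot configure more than %u interfaces\n",
                  (unsigned) IFACE_IDX_MAX);
          return -1;
      }
      return TABLE_BASE + (int) iface_idx;
  }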
The routes/rules that are configured are independent of the
order in which we process the devices. That is because they
use the "iface_idx" in cases where there is ambiguity.
Still, it feels nicer to always process them in a defined order,
sorted by iface_idx. The iface_idx is probably something useful and
stable, provided by the provider. E.g. it's the order in which
interfaces are exposed in the metadata.
get-config() gives an NMCSProviderGetConfigResult structure, and the
main part of the data is the GHashTable of MAC addresses and
NMCSProviderGetConfigIfaceData instances.
Let NMCSProviderGetConfigIfaceData also have a reference to the MAC
address. This way, I'll be able to create a (sorted) list of interface
datas that also contains the MAC address.
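A sketch of the idea (simplified, hypothetical field names, not the
actual NMCSProviderGetConfigIfaceData layout), together with a compare
function for the sorting described above:

  #include <glib.h>

  typedef struct {
      const char *hwaddr;    /* the MAC address, also the hash table key */
      gssize      iface_idx; /* order in which the provider exposes it */
      /* ... addresses, routes, ... */
  } IfaceData;

  /* for g_ptr_array_sort(), which passes pointers to the elements */
  static int
  iface_data_cmp(gconstpointer pa, gconstpointer pb)
  {
      const IfaceData *a = *((const IfaceData *const *) pa);
      const IfaceData *b = *((const IfaceData *const *) pb);

      if (a->iface_idx != b->iface_idx)
          return a->iface_idx < b->iface_idx ? -1 : 1;
      return 0;
  }

The values of the GHashTable can then be collected into a GPtrArray and
sorted with g_ptr_array_sort(arr, iface_data_cmp).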
nm-cloud-setup automatically configures the network. That may conflict
with what the user wants. In case the user configures some specific
setup, they are encouraged to disable nm-cloud-setup (and its
automatism).
Still, what we do by default matters, and it should play well with the
user's expectations. Configuring policy routing and a higher priority
table (30400+) that hijacks the traffic can cause problems.
If the system only has one IPv4 address and one interface, then there
is no point in configuring policy routing at all. Detect that, and skip
the change in that case.
Note that of course we need to handle the case where previously multiple
IP addresses were configured and an update gives only one address. In
that case we need to clear the previously configured rules/routes. The
patch achieves this.
Now that we return a struct from get_config(), we can have system-wide
properties returned.
Let it count and cache the number of valid iface_datas.
Currently that is not yet used, but it will be.
Returning a struct seems easier to understand, because then the result
is typed.
Also, we might return additional results, which are system wide and not
per-interface.
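In outline (the field names are illustrative, following the commit
text, not the actual struct definition):

  #include <glib.h>

  typedef struct {
      GHashTable *iface_datas;    /* MAC address -> per-interface data */
      guint       n_valid_ifaces; /* counted and cached; not yet used */
      /* further system-wide results can be added here */
  } GetConfigResult;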
The majority of times when I call this script, I want it to do the reformatting,
not the check-only mode. This is also because we use git, so I start with a
clean working directory and run the reformatting code. In the best case, there
is nothing to reformat, and all is good. I seldom want to only check.
Change the default of the script.
"nm-code-format.sh" is going to change the default behavior from "-n" to
"-i", that is, from check-only to reformat. Explicitly pass "-n" where
we want it.
"nm-code-format.sh" is going to change the default behavior from "-n" to
"-i", that is, from check-only to reformat. Explicitly pass "-n" where
we want it.
"assuming" means to gracefully take over after restart. The result
should be a working configuration with a device fully managed by
NetworkManager.
If we are assuming and the interface is down, we still want to set it
up.
I don't think this warrants a warning. It's important to keep the number
of warnings and errors in the log low, and to only print such messages if
there is really something that requires the user's attention. If you
run without /etc/network/interfaces, then this is pretty much expected
and the warning isn't going to tell you anything useful.
NML3Cfg tracks the state of each object (that is, addresses and routes).
Previously, it had a boolean flag "os_in_platform", which should be
true if (and only if) we have a corresponding NMPObject in the platform
cache.
But NMPObjects are immutable and ref-counted. That means we can just as
well track a reference to the NMPObject from the cache. The advantage
is that we have an index (dictionary) to find the object state, and by
tracking the platform object, we have it easily accessible.
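Schematically (names simplified, not the real NML3Cfg layout):

  typedef struct NMPObject NMPObject; /* immutable, ref-counted */

  typedef struct {
      /* before: bool os_in_platform; */

      /* now: owns a reference; non-NULL if (and only if) the object
       * is in the platform cache, and gives direct access to it */
      const NMPObject *os_plobj;
  } ObjStateData;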
NML3ConfigData is supposed to be immutable. It can be initialized from an
NMConnection, and its DNS priority property might be zero.
For the DNS priority, the value can be overwritten by global defaults.
We thus need to inject the default value at the right place.
We will use these values from NML3Cfg, and it seems wrong that NML3Cfg
would include "dns/nm-dns-manager.h" for this.
Enums are very "static". They have no logic, and there is less need to
separate the code well. That means it doesn't hurt to define this enum
in "libnm-base/nm-base.h", which can be included by (almost) anybody.
There was always the idea that you could pass paths and filenames
to "nm-code-format.sh" to format only a subset. However, the script
also needs to honor files that should be excluded and don't need
formatting.
Previously, that was implemented via a `git ls-files -- ':(exclude)...'`
command, but git-ls-files has a bug ([1]) and might not list all files.
Refactor and do the filtering ourselves.
[1] https://www.spinics.net/lists/git/msg397982.html