We often create the source with default priority and no destroy function, and
attach it to the default context (g_main_context_default()). For that
case, we have wrapper functions like nm_g_timeout_add_source()
and nm_g_idle_add_source(). Use those.
There should be no change in behavior.
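For illustration, the open-coded pattern that such a wrapper replaces looks
roughly like this (plain GLib API; the wrappers' exact signatures are not
shown here):

  /* "timeout_cb" is a GSourceFunc and "user_data" its data: */
  GSource *source;

  source = g_timeout_source_new(1000 /* msec */);
  g_source_set_callback(source, timeout_cb, user_data, NULL /* no destroy function */);
  g_source_attach(source, NULL); /* NULL attaches to g_main_context_default() */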
This allows us to fetch information about the AP that CSME is connected
to. It will allow us to connect to the exact same AP, shaving off the
scan from the connection and improving the connection time.
Add support for IPv6 multipath routes, by treating them as single-hop
routes. Otherwise, we can easily end up with an inconsistent platform
cache.
Background:
-----------
Routes are hard. We have NMPlatform, which is a cache of netlink objects.
That means we have a hash table and we cache objects based on some
identity (nmp_object_id_equal()). So those objects must have some immutable,
identifying properties that determine whether an object is the
same or a different one.
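Conceptually, the cache behaves like a hash table keyed by that identity.
A simplified sketch (not the actual NMPlatform code; nmp_object_id_hash()
is assumed here as the hashing counterpart of nmp_object_id_equal()):

  GHashTable *cache;

  cache = g_hash_table_new_full((GHashFunc) nmp_object_id_hash,
                                (GEqualFunc) nmp_object_id_equal,
                                (GDestroyNotify) nmp_object_unref,
                                NULL);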
For routes and routing rules, this identifying property is basically a subset
of the attributes (but not all of them!). That makes it very hard, because tomorrow
kernel could add an attribute that becomes part of the identity, and NetworkManager
wouldn't recognize it, resulting in cache inconsistency by wrongly
treating two different routes as one and the same. Anyway.
The other point is that we rely on netlink events to maintain the cache.
So when we receive an RTM_NEWROUTE we add the object to the cache, and
delete it upon RTM_DELROUTE. When you do `ip route replace`, kernel
might replace a (different!) route, but only send one RTM_NEWROUTE message.
We handle that by somehow finding the route that was replaced/deleted. It's
ugly. Did I say that routes are hard?
Also, for IPv4 routes, multipath attributes are just a part of the
route's identity. That is, you can add two different routes that only differ
by their multipath list, and kernel does what you would expect.
NetworkManager does not support IPv4 multihop routes and just ignores
them.
Also, a multipath route can have next hops on different interfaces,
which goes against our current assumption that an NMPlatformIP4Route
has one interface (or no interface, in the case of blackhole routes). That
makes it hard to meaningfully support IPv4 multipath routes. But we probably don't
have to, because we can just pretend that such routes don't exist and
our cache stays consistent (at least, until somebody calls `ip route
replace` *sigh*).
Not so for IPv6. When you add (`ip route append`) an IPv6 route that is
identical to an existing route -- except for the multipath attribute -- then it
behaves as if the existing route was modified, and the result is the
merged route with more next-hops. Note that in this case kernel will
only send an RTM_NEWROUTE message with the full multipath list. If we
treated the multipath list as part of the route's identity, this
would be as if kernel deleted one route and created a different one (the
merged one), while only sending one notification. That's a bit similar to
what happens during `ip route replace`, but it would be a nightmare to
find out which route was thereby replaced.
Likewise, when you delete a route, kernel will "subtract" the
next-hop and send an RTM_DELROUTE notification only about the next-hop that
was deleted. To handle that, you would have to find the full multihop
route and replace it with the remainder after the subtraction.
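For illustration (with made-up example addresses), the merge and subtract
behavior looks like this:

  ip -6 route add 2001:db8::/64 via fe80::1 dev eth0
  ip -6 route append 2001:db8::/64 via fe80::2 dev eth0
      (kernel now has one route with two next-hops, but only sent one
      RTM_NEWROUTE with the full multipath list)
  ip -6 route del 2001:db8::/64 via fe80::1 dev eth0
      (kernel subtracts the next-hop; the RTM_DELROUTE notification only
      mentions the deleted next-hop)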
So far, NetworkManager ignored IPv6 routes with more than one next-hop. This
means you can start with one single-hop route (that NetworkManager sees
and has in the platform cache). Then you create a similar route (only
differing by the next-hop). Kernel will merge the routes, but not notify
NetworkManager that the single-hop route is no longer a single-hop
route. This can easily cause a cache inconsistency and subtle bugs. For
IPv6 we MUST handle multihop routes.
Kernel's behavior makes little sense if you expect that routes have an
immutable identity and want to get notifications about addition/removal.
We can however make sense of it by pretending that all IPv6 routes are
single-hop! The only twist is that a single RTM_NEWROUTE notification
might announce multiple routes at the same time. This is what the
patch does.
The Patch
---------
Now one RTM_NEWROUTE message can contain multiple IPv6 routes
(NMPObject). That would mean that nmp_object_new_from_nl() needs to
return a list of objects. But it's not implemented that way. Instead,
we still call nmp_object_new_from_nl(), and the parsing code can
indicate that there is more, telling the caller to call
nmp_object_new_from_nl() again in a loop to fetch the remaining objects.
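Schematically, the calling pattern looks like this (a hypothetical sketch:
the helper names, their arguments and the iteration state are made up; the
real nmp_object_new_from_nl() differs):

  gboolean parse_again = FALSE;
  guint    next_hop_idx = 0;

  do {
      nm_auto_nmpobj NMPObject *obj = NULL;

      /* Re-parse the same netlink message; each call extracts the
       * next-hop at "next_hop_idx" as one single-hop route and sets
       * "parse_again" if there are more next-hops to fetch. */
      obj = parse_one_route(msg, next_hop_idx++, &parse_again);
      if (!obj)
          break;
      cache_update(cache, obj);
  } while (parse_again);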
In practice, I think all RTM_DELROUTE messages for IPv6 routes are
single-hop. Still, we implement it so that multi-hop messages are handled
the same way.
Note that we just parse the netlink message again from scratch. The alternative
would be to parse the first object once, and then clone the object and
only update the next-hop. That would be more efficient, but probably
harder to understand/implement.
https://bugzilla.redhat.com/show_bug.cgi?id=1837254#c20
To parse the RTA_MULTIPATH attribute, "policy" is not right (that is the
policy used to parse the overall message). Instead, we don't really have
a special policy that we should use.
This was not a severe issue, because the allocated buffer (with
G_N_ELEMENTS(policy) elements) was larger than need be. And apparently,
using the wrong policy also didn't cause us to reject important
messages.
The variable with this purpose is usually called "IS_IPv4".
It's upper case because it usually is a const variable, and because
it is reminiscent of the NM_IS_IPv4(addr_family) macro. That letter case
is unusual, but it makes sense to me for the special purpose that this
variable has.
Anyway, the naming of this variable is a separate point. Let's
use the variable name that is consistent and widely used.
We always call nl_recv() in a loop. If it were necessary to clear
the variable, then that would need to happen inside the loop. But it's
not necessary.
Instead of allocating a receive buffer for each nl_recv() call, re-use a
pre-allocated buffer.
The buffer is part of NMPlatform and kept around. As we don't have more
than one NMPlatform instance per netns, we waste only a bounded amount of
memory.
The buffer gets initialized with 32k, which should be large enough for
any rtnetlink message that we might receive. As before, if the buffer
were too small, the first large message would be lost (as we don't
peek). But then we would allocate a larger buffer, resync the platform cache,
and recover.
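A rough sketch of the intended behavior (hypothetical field names; the real
code differs in the details, and error handling is omitted):

  gssize n;

  if (!priv->netlink_recv_buf) {
      priv->netlink_recv_buf_len = 32 * 1024;
      priv->netlink_recv_buf     = g_malloc(priv->netlink_recv_buf_len);
  }

  /* MSG_TRUNC makes recv() return the full message length, so we can
   * detect that a too-large (and now lost) message did not fit. */
  n = recv(fd, priv->netlink_recv_buf, priv->netlink_recv_buf_len, MSG_TRUNC);
  if (n > (gssize) priv->netlink_recv_buf_len) {
      /* grow the buffer for next time and resync the platform cache. */
      priv->netlink_recv_buf_len = n;
      priv->netlink_recv_buf     = g_realloc(priv->netlink_recv_buf, n);
  }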
Add a parameter to accept a pre-allocated buffer for nl_recv(). In
practice, rtnetlink messages are not larger than 32k, so we can always
pre-allocate the buffer and avoid a heap allocation on every call.
In the past, nl_recv() was libnl3 code and thus used malloc()/realloc() to
allocate the buffer. That meant we had to free the buffer with libc's free()
function instead of glib's g_free(). That is what nm_auto_free is for.
Nowadays, nl_recv() is forked and glib-ified, and uses the glib wrappers to
allocate the buffer. Thus the buffer should also be freed with g_free()
(and gs_free).
In practice there is no difference, because glib's allocation directly
uses libc's malloc()/free(). This is purely a matter of style.
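In code, the change amounts to:

  /* before: the buffer came from libc's malloc()/realloc(): */
  nm_auto_free char *buf = NULL;

  /* now: the buffer comes from g_malloc(), so pair it with g_free(): */
  gs_free char *buf = NULL;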
We need to support systems where there are hundreds of thousands of routes
(e.g. BGP software).
For that, we will ignore routes that have a rtm_protocol value which was
certainly not created by NetworkManager. Before, we had a BPF filter to
filter them out, but that had issues and is removed for now. Still,
don't put those routes in the platform cache, and ignore them early.
Note that we still deserialize the RTM_NEWROUTE message to an NMPObject
and don't shortcut earlier. The reason is that we should still call
delayed_action_refresh_all_in_progress() and handle the RTM_GETROUTE
response, even if the route has a different protocol. So we bail out
later, shortly before putting the object in the cache. This means we
will malloc() an NMPObject and initialize it, but that is probably cheap
compared to the fundamental problem that the process already had to wake up
and read the netlink socket.
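The check itself is cheap. A hypothetical sketch of it, placed shortly
before the cache update:

  if (rtm->rtm_protocol > RTPROT_STATIC
      && !NM_IN_SET(rtm->rtm_protocol, RTPROT_RA, RTPROT_DHCP)) {
      /* certainly not a route created by NetworkManager:
       * don't add it to the platform cache. */
      return NULL;
  }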
This restores the effective behavior of the BPF filter, albeit with a
higher overhead, as the route is rejected later. The important part for
now is that we stick to the behavior of not caching routes of certain
protocols. If that can be optimized in the future (e.g. by a new
BPF filter), then we should do that. But for now that would only be a
performance improvement, which requires new profiling first and
does not have the highest priority. Note that not caching those routes
should already eliminate the largest part of the overhead that they
bring. Whether this form is sufficient to reach the expected
performance goals needs to be measured in the future.
The socket BPF filter operates on the entire message (skb). One netlink
message (for RTM_NEWROUTE) usually contains a list of routes (struct
nlmsghdr), but the BPF filter was only looking at the first route to decide
whether to throw away the entire message. That is wrong.
It causes us to miss routes. It also means that the response to our
RTM_GETROUTE dump request could be filtered, so that we poll/wait (unsuccessfully)
for it:
<trace> [1641829930.4111] platform-linux: netlink: read: wait for ACK for sequence number 26...
To get this right, the BPF filter would have to look at all routes in the
list to find out whether there are any we want to receive (and, if
there are, to pass the entire message, regardless of whether it also
contains routes we want to ignore). As BPF does not support loops, that
is not trivial. Arguably, we could still do something where we look at a
bounded, unrolled number of routes, in the hope that one message never
contains more routes than we support. The problem is that this BPF
filter is supposed to filter out massive amounts of routes, so most
messages will be filled to the brim with routes, and we would have to
expect a high number of routes per RTM_NEWROUTE message that need checking.
We could still ignore routes in _new_from_nl_route(), before they enter the
cache. That means we would catch them later than with the BPF filter,
but still before most of the overhead of handling them. That will probably
be done next, but for now revert the commit.
This reverts commit e9ca5583e5.
https://bugzilla.redhat.com/show_bug.cgi?id=2037411
Try to debug a hang in platform code, presumably during poll().
This logging is added for debugging this particular issue,
but it might be useful in general.
Routing daemons can add a large number of routes to the
system. Currently NM receives netlink notifications for all those
routes and exposes them on D-Bus. With many routes, the daemon becomes
increasingly slow and uses a lot of memory.
The rtm_protocol field of the route indicates the source of the
route. From /usr/include/linux/rtnetlink.h, the allowed values are:
#define RTPROT_UNSPEC 0
#define RTPROT_REDIRECT 1 /* Route installed by ICMP redirects;
not used by current IPv4 */
#define RTPROT_KERNEL 2 /* Route installed by kernel */
#define RTPROT_BOOT 3 /* Route installed during boot */
#define RTPROT_STATIC 4 /* Route installed by administrator */
/* Values of protocol >= RTPROT_STATIC are not interpreted by kernel;
they are just passed from user and back as is.
It will be used by hypothetical multiple routing daemons.
Note that protocol values should be standardized in order to
avoid conflicts.
*/
#define RTPROT_GATED 8 /* Apparently, GateD */
#define RTPROT_RA 9 /* RDISC/ND router advertisements */
#define RTPROT_MRT 10 /* Merit MRT */
#define RTPROT_ZEBRA 11 /* Zebra */
#define RTPROT_BIRD 12 /* BIRD */
#define RTPROT_DNROUTED 13 /* DECnet routing daemon */
#define RTPROT_XORP 14 /* XORP */
#define RTPROT_NTK 15 /* Netsukuku */
#define RTPROT_DHCP 16 /* DHCP client */
#define RTPROT_MROUTED 17 /* Multicast daemon */
#define RTPROT_KEEPALIVED 18 /* Keepalived daemon */
#define RTPROT_BABEL 42 /* Babel daemon */
#define RTPROT_OPENR 99 /* Open Routing (Open/R) Routes */
#define RTPROT_BGP 186 /* BGP Routes */
#define RTPROT_ISIS 187 /* ISIS Routes */
#define RTPROT_OSPF 188 /* OSPF Routes */
#define RTPROT_RIP 189 /* RIP Routes */
#define RTPROT_EIGRP 192 /* EIGRP Routes */
Since NM uses only values <= RTPROT_STATIC, plus RTPROT_RA and
RTPROT_DHCP, add a BPF filter to the netlink socket to discard
notifications for routes with other protocol values.
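Attaching such a classic BPF program uses the regular socket-filter API; a
minimal sketch (the actual filter instructions that inspect rtm_protocol
are omitted, and "nl_fd" is the netlink socket):

  #include <linux/filter.h>
  #include <sys/socket.h>

  struct sock_filter insns[] = {
      /* ... instructions that inspect rtm_protocol ... */
      BPF_STMT(BPF_RET | BPF_K, 0xffffffff), /* accept the message */
  };
  struct sock_fprog fprog = {
      .len    = G_N_ELEMENTS(insns),
      .filter = insns,
  };

  if (setsockopt(nl_fd, SOL_SOCKET, SO_ATTACH_FILTER, &fprog, sizeof(fprog)) < 0)
      /* handle the error */;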
https://bugzilla.redhat.com/show_bug.cgi?id=1861527
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1038
We use clang-format for automatic formatting of our source files.
Since clang-format is actively maintained software, the actual
formatting depends on the used version of clang-format. That is
unfortunate and painful, but really unavoidable unless clang-format
would be strictly bug-compatible.
So the version that we must use is the one from the current Fedora release,
which is also what our gitlab-ci tests. Previously, we were using Fedora 34
with clang-tools-extra-12.0.1-1.fc34.x86_64.
As Fedora 35 comes along, we need to update our formatting: Fedora 35
comes with version "13.0.0~rc1-1.fc35".
An alternative would be to freeze on version 12, but that has its own
problems (it's cumbersome to rebuild clang 12 on Fedora 35, and it
would be cumbersome for our developers who are on Fedora 35 to use a
clang that they cannot easily install).
The (differently painful) solution is to reformat from time to time, as we
switch to a new Fedora (and thus clang) version.
Usually we would expect such a reformatting to bring only minor changes.
But this time, the changes are huge. That is mentioned in the release
notes [1] as
Makes PointerAlignment: Right working with AlignConsecutiveDeclarations. (Fixes https://llvm.org/PR27353)
[1] https://releases.llvm.org/13.0.0/tools/clang/docs/ReleaseNotes.html#clang-format
Completely rework IP configuration in the daemon. Use NML3Cfg as layer 3
manager for the IP configuration of an interface. Use NML3ConfigData as
pieces of configuration that the various components collect and
configure. NMDevice manages most of the IP configuration at a higher
level; that is, it starts DHCP and other IP methods. Rework the state
handling there.
This is a huge rework of how NetworkManager daemon handles IP
configuration. Some fallout is to be expected.
It appears that the patch deletes many lines of code. That is not accurate, because
you also have to count the files `src/core/nm-l3*`, which already existed but were unused previously.
Co-authored-by: Beniamino Galvani <bgalvani@redhat.com>
Introduce a construct-only property for platform objects to enable or
disable the caching of tc objects. When disabled, the netlink socket
doesn't receive netlink events for tc objects, and objects are never
added to the cache. This commit doesn't change behavior yet.
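Construction then looks roughly like this (the property name here is
hypothetical):

  platform = g_object_new(NM_TYPE_LINUX_PLATFORM,
                          "cache-tc", FALSE, /* hypothetical property name */
                          NULL);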
While NMUtilsIPv6IfaceId is only 8 bytes large, it seems unidiomatic to
pass the plain struct around.
With a "const NMUtilsIPv6IfaceId *" argument it is clearer what the
meaning is.
Change to use pointers.
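That is, a (hypothetical) function changes from

  void set_iid(NMUtilsIPv6IfaceId iid);

to

  void set_iid(const NMUtilsIPv6IfaceId *iid);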
The preference attribute for IPv6 routes was added in kernel v4.1,
22 June 2015. It is present even in the latest RHEL7 kernels.
Drop trying to be compatible with such old kernels.
We use extended IFA_FLAGS for IFA_F_MANAGETEMPADDR (IPv6) and
IFA_F_NOPREFIXROUTE (IPv4 and IPv6).
These flags for IPv6 were added in kernel 3.14, 30 March 2014.
The flag for IPv4 was added in kernel 4.4, 11 January 2016.
Even the latest RHEL-7 kernels have a backport of IFA_F_NOPREFIXROUTE
for IPv4 (rh#1221311).
Drop this. The backward compatibility code paths are likely broken
anyway, and add considerable complexity.
This has been supported since kernel 3.17, dated 5 October 2014. Drop the backward
compatibility for that.
It's very hard to sensibly support a mode where we set the interface up
but prevent kernel from enabling IPv6. We would hack around that by disabling
IPv6 altogether.
But these code paths are not tested and likely make no sense. And it's hard
to implement a sensible behavior for this case anyway.
The term "user_ipv6ll" is confusing and not something somebody familiar
with kernel or `ip -d link` would understand.
Also, it maps a boolean to addr-gen-mode "none" or "eui64", although
there are 2 more address generation modes in kernel.
Don't abstract the underlying API, and name things as they are in
kernel.
Replace the arguments "buf+length" of
`nm_platform_link_get_permanent_address()` with "NMPLinkAddress *out_addr"
Signed-off-by: Wen Liang <liangwen12year@gmail.com>
Add `l_perm_address` to `NMPlatformLink`, hook it up in the
`nm_platform_link_to_string`, `nm_platform_link_hash_update` and
`nm_platform_link_cmp` functions, and parse it from netlink.
Signed-off-by: Wen Liang <liangwen12year@gmail.com>
The "utils" part does not seem useful in the name.
Note that we also have NMStrBuf, which is named nm_str_buf_*().
There is an unfortunate similarity between the two, but they are still
distinct enough (in particular, because one takes an NMStrBuf and
the other does not).
Having two functions like link_set_x() and link_set_nox() is not a
good idea. This patch introduces nm_platform_link_change_flags(),
which allows modifying the flags directly, so the developer does not need to
define new virtual functions all the time.
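Usage then becomes a single call, roughly (the exact signature may differ):

  /* e.g. instead of a link_set_noarp()/link_set_arp() pair: */
  nm_platform_link_change_flags(platform, ifindex, IFF_NOARP, TRUE);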
Signed-off-by: Fernando Fernandez Mancera <ffmancera@riseup.net>
We have a cache for sysctl values, so that we can log changes and
previous values.
When resetting the log level, we prune that cache, which is done by
_nm_logging_clear_platform_logging_cache(). That function is called
by nm_logging_setup(), which is guaranteed to only happen on the main
thread.
NMPlatform in general is not thread safe (meaning that the same NMPlatform
instance cannot be used by multiple threads at the same time). There is however
a reasonable aim that you could use different NMPlatform instances each on their
own thread.
That currently doesn't work, mainly due to nm-logging, which must always
be done from the main thread -- unless we were to set NM_THREAD_SAFE_ON_MAIN_THREAD
in all of NMPlatform (which would be too expensive for something we
don't actually need). That means the sysctl getter also had to be
called only on the main thread, and all was good already.
Still, we could make NMPlatform usable from multiple threads by setting
NM_THREAD_SAFE_ON_MAIN_THREAD. As we are almost there to have the code
thread-safe, make accessing the sysctl value cache thread-safe (even if
we currently don't actually access it from multiple threads).
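A minimal sketch of what that means (hypothetical names; the real code may
use a different locking scheme):

  static GMutex sysctl_cache_lock;

  g_mutex_lock(&sysctl_cache_lock);
  g_hash_table_replace(priv->sysctl_cache, g_strdup(path), g_strdup(value));
  g_mutex_unlock(&sysctl_cache_lock);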