This new endpoint type has been recently added to the kernel in v6.18
[1]. It will be used to create new subflows from the associated address
to additional addresses announced by the other peer. This will be done
if allowed by the MPTCP limits, and if the associated address is not
already being used by another subflow from the same MPTCP connection.
Note that the fullmesh flag takes precedence over the laminar one.
Without either of these two flags, the path-manager will create new
subflows to additional addresses announced by the other peer by
selecting the source address from the routing tables, which is harder to
configure if the announced address is not known in advance.
Supporting the new flag is easy: simply declare a new flag in NM, and
add it to the related helpers and to the existing checks that look at
the different MPTCP endpoint types. The documentation now references
the new endpoint type.
Note that only the new 'define' has been added to the Linux header file:
that file has changed a bit since the last sync and is now split in two
files. Only this new line is needed, so only the minimum has been
modified here.
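For illustration, the change amounts to roughly the following; the macro
name and bit value are assumptions based on the existing MPTCP endpoint
flags, not a verbatim copy of the patch:

    /* uapi header: the single new line that was synced */
    #define MPTCP_PM_ADDR_FLAG_LAMINAR  (1 << 5)

    /* NM side: a matching flag added next to signal/subflow/backup/
     * fullmesh, handled by the existing helpers and endpoint checks */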
Link: https://git.kernel.org/torvalds/c/539f6b9de39e [1]
Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
Rename nm_linux_platform_get_link_fdb_table() to
nm_linux_platform_get_bridge_fdb(). The new name better indicates that
the function returns the bridge FDB entries.
The RT_VIA attribute is used to specify a gateway of a different
address family. It is currently used only for IPv4 routes.
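As an illustration (not part of this change), this is the kind of route
that carries the attribute: an IPv4 route whose gateway is an IPv6
address:

# ip route add 192.0.2.0/24 via inet6 fe80::1 dev enp1s0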
[bgalvani@redhat.com: amended the commit message]
We're going to replace most of the ioctl-based ethtool functions with
a netlink-based equivalent. Move the ioctl ones to a separate file so
that it's easier to see what still needs to be converted. Also add a
common prefix to the function names.
iproute2 and the kernel accept 0 as valid rto_min value:
# ip route add 172.16.0.1 dev enp1s0 rto_min 0ms
# ip route show
172.16.0.1 dev enp1s0 scope link rto_min lock 0ms
Even if a value of 0ms would not be useful in practice, it is better
to track exactly what the kernel reports, instead of assuming that a
value of zero means "not set".
The function introduced queries the FDB table via a netlink socket. It
accepts a list of ifindexes to filter out the FDB content not related to
them. It returns an array of MAC addresses.
To clarify: this function is unusually exposed directly in
nm-linux-platform.h, as we don't want it to be part of the whole
NMPlatform object or cache. This is an exception to the rule, made to
simplify the integration of this functionality in NetworkManager.
In addition, it also doesn't use the async mechanism that is widely used
for netlink communication across nm-linux-platform. Again, the reason is
to simplify its use, as async communication won't provide a benefit for
the use cases we have planned for this, i.e. balance-slb RARP
announcements.
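For reference, such a helper would be declared along these lines; the
exact signature here is an assumption for illustration, not the actual
declaration:

    /* nm-linux-platform.h: exposed directly, not part of the NMPlatform
     * object or cache; synchronous netlink dump, no async machinery.
     * Argument and return types are illustrative assumptions. */
    GPtrArray *nm_linux_platform_get_bridge_fdb (NMPlatform *platform,
                                                 const int  *ifindexes,
                                                 guint       n_ifindexes);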
This patch adds support for IPVLAN interfaces. IPVLAN is a driver for a
virtual network device that can be used in container environments to
access the host network. IPVLAN exposes a single MAC address to the
external network regardless of the number of IPVLAN devices created
inside the host network. This means that a user can have multiple IPVLAN
devices in multiple containers while the corresponding switch sees a
single MAC address. The IPVLAN driver is useful when the local switch
imposes constraints on the total number of MAC addresses that it can
manage.
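For comparison, outside of NetworkManager such a device is created with
iproute2 roughly like this (interface and namespace names are
placeholders):

# ip link add link enp1s0 name ipvlan0 type ipvlan mode l2
# ip link set ipvlan0 netns container1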
If the socket's RX buffer is full, it's probably because another
process is making a lot of changes very quickly, faster than we
can process them. Let's give the writer a little time to finish:
1. Avoid contending the kernel's RTNL lock, so we don't make
   the whole situation even worse and the writer can finish earlier.
2. Avoid having to resync again and again because we try to
   resync while the writer is still making quick changes and
   we are unable to catch up yet.
This won't help if the situation lasts a long time or is
continuous, but that's unlikely to happen, and if it does,
it's the writer's fault for starving the whole system.
There is no need to progressively increase the backoff time,
for the same reason: if this situation takes a lot of time,
it's the writer's fault. Nor would it be a good idea, because the whole
NM process would end up sleeping for long periods, not doing anything at
all and unable to react when the burst of Netlink messages stops.
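A minimal sketch of the idea; the interval and the surrounding structure
are illustrative assumptions, not the actual implementation:

    /* reading the netlink socket failed with ENOBUFS: the RX buffer
     * overflowed and events were lost */
    if (errno == ENOBUFS) {
        /* fixed, non-progressive backoff: give the writer a moment
         * to finish before we contend RTNL again */
        g_usleep (20 * 1000);
        /* then schedule a full resync of the cache */
    }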
Currently, nm_platform_link_set_bridge_vlans() accepts an array of
pointers to vlan objects; to avoid multiple allocations,
setting_vlans_to_platform() creates the array by piggybacking the
actual data after the pointers array.
In the next commits, the array will need to be manipulated and
extended, which is difficult with the current structure. Instead, pass
an array of objects and its size as separate arguments.
If the platform fails to dump a specific route protocol, retry
multiple times. If all attempts fail, emit a warning and proceed, as
there is nothing more to do.
When doing a dump of routes, we want to exclude routes having
protocols we do not care about. Since the netlink socket has
STRICT_CHK enabled, we can request multiple dumps for the protocols we
need.
While doing 6 dumps is less efficient than doing 1, it normally
doesn't matter. However, the new implementation is more efficient when
there are e.g. millions of BGP routes that can be excluded from the
results.
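A sketch of what a per-protocol dump request looks like with strict
checking enabled (illustrative, not the actual code):

    /* RTM_GETROUTE dump request header; with NETLINK_GET_STRICT_CHK set
     * on the socket, the kernel honours these fields and filters the
     * dump for us */
    struct rtmsg rtm = {
        .rtm_family   = AF_INET,
        .rtm_protocol = RTPROT_KERNEL,  /* one dump per tracked protocol */
    };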
Introduce an array of tracked route protocols that will be used in the
next commit. To have the list of protocols defined in a single place,
define a macro.
In the future we might want to specify filters when requesting netlink
dumps; this requires that strict check is enabled on the socket.
When strict check is enabled, we need to pass a full struct in the
netlink message, otherwise the kernel ignores it.
This commit doesn't change behavior.
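A minimal sketch of the two points above (illustrative only):

    /* opt in to strict checking of dump requests */
    const int one = 1;
    setsockopt (fd, SOL_NETLINK, NETLINK_GET_STRICT_CHK, &one, sizeof (one));

    /* dump requests must now carry the full fixed header for the message
     * type (e.g. struct rtmsg for RTM_GETROUTE) rather than the shorter
     * struct rtgenmsg, otherwise the kernel ignores the filter fields */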
When we receive a Netlink message with a "route change" event, normally
we just ignore it if it's a route that we don't track (i.e. because of
the route protocol).
However, it's not that easy if it has the NLM_F_REPLACE flag, because
that means it might be replacing another route. If the kernel has
similar routes which are candidates for the replacement, it's hard for
NM to guess which one of those is being replaced (as the kernel doesn't
have a "route ID" or similar field to indicate it). Moreover, the kernel
might choose to replace a route that we don't have in the cache, so we
know nothing about it.
It is important to note that we cannot just discard Netlink messages of
routes that we don't track if they have the NLM_F_REPLACE flag. For
example, if we are tracking a route with proto=static, we might receive
a replace message changing that route to
proto=other_proto_that_we_dont_track. We need to process that message
and remove the route from our cache.
As NM doesn't know which route is being replaced, trying to guess would
lead to errors that would leave the cache in an inconsistent state.
Because of that, NM just does a cache resync for the routes.
For IPv4 there was an optimization for this: if we don't have in the
cache any candidate route for the replacement, there are only 2 possible
options: either add the new route to the cache, or discard it if we are
not interested in it. We don't need a resync for that.
This commit extends that optimization to IPv6 routes. There is no
reason why it shouldn't work in the same way as for IPv4. The
optimization will only work well as long as we find potential candidate
routes in the same way as the kernel does (comparing the same fields).
NM calls this "comparing by WEAK_ID". But this caveat also applies to
IPv4 routes.
It is worth enabling this optimization because there are routing
daemons using custom routing protocols that make tens or hundreds of
updates per second. If they use NLM_F_REPLACE, this caused NM to do a
resync hundreds of times per second, leading to 100% CPU usage:
https://issues.redhat.com/browse/RHEL-26195
An additional but smaller optimization is done in this commit: if we
receive a route message for a route that we don't track AND it doesn't
have the NLM_F_REPLACE flag, we can ignore the entire message, thus
avoiding the memory allocation of the nmp_object. That nmp_object was
going to be ignored later anyway, so it is better to avoid these
allocations which, with the routing daemon of the example above, can
happen hundreds of times per second.
With these changes, the CPU usage when doing `ip route replace` 300
times/s drops from 100% to 1%. Doing `ip route replace` as fast as
possible, without any rate limiting, still keeps NM at a 3% CPU usage on
the system that I used for testing.
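A hypothetical reproducer for that kind of load (not taken from the
report): a tight replace loop using an untracked, custom route protocol:

# while :; do ip route replace 198.51.100.0/24 dev enp1s0 proto 200; done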
If the sriov_totalvfs file doesn't exist, we don't need to consider it
a fatal failure. Try to create the required number of VFs as we were
doing before.
Note: at least netdevsim doesn't have a sriov_totalvfs file; I don't
know whether there are real drivers that also lack it.
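For context, these are the sysfs attributes involved (device name is a
placeholder):

# cat /sys/class/net/enp1s0/device/sriov_totalvfs
# echo 4 > /sys/class/net/enp1s0/device/sriov_numvfs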
(cherry picked from commit 27eaf34fcf)
Set these parameters according to the values set in the new properties
sriov.eswitch-inline-mode and sriov.eswitch-encap-mode.
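For example, something along these lines becomes possible; the values
shown are devlink's inline-mode/encap-mode values and are only
illustrative:

# nmcli connection modify enp1s0 sriov.eswitch-inline-mode transport sriov.eswitch-encap-mode basic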
The number of parameters related to SR-IOV was becoming too big.
Refactor to group them in a NMPlatformSriovParams struct and pass it
around.
(cherry picked from commit 4669f01eb0)
It is not safe to change the eswitch mode when there are VFs already
created: it often fails, or even worse, doesn't fail immediately but
causes problems with the VFs later.
What is supposed to be well tested in all drivers is to change the
eswitch mode with no VFs created, and then create the VFs, so let's set
num_vfs=0 before changing the eswitch mode.
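Outside of NM, the equivalent safe sequence with sysfs and devlink looks
roughly like this (device names and addresses are placeholders):

# echo 0 > /sys/class/net/enp1s0/device/sriov_numvfs
# devlink dev eswitch set pci/0000:3b:00.0 mode switchdev
# echo 4 > /sys/class/net/enp1s0/device/sriov_numvfs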
As we want to change num_vfs asynchronously in a separate thread, we
need to do a multi-step process with callbacks each time a step
finishes (before, it was just "set num_vfs asynchronously and invoke the
callback when it's done").
This makes link_set_sriov_params_async even larger and more complex
than it already was. Refactor it to make it cleaner, easier to follow
and hopefully less error prone, and implement that multi-step process.
(cherry picked from commit 770340627b)
When doing reapply on a Linux bridge interface, NetworkManager will
reset the VLAN filtering and default PVID, which causes the PVID to be
re-added to all bridge ports regardless of whether they are managed by
NetworkManager.
This is because the Linux kernel re-adds the PVID to bridge ports upon
changes of the bridge default-pvid value.
To fix the issue, this patch introduces netlink parsing code for
`vlan_filtering` and `default_pvid` of NMPlatformLnkBridge, and uses
that to compare against the desired VLAN filtering settings, skipping
the reset of the VLAN filter if `default_pvid` and `vlan_filtering` are
unchanged.
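For reference, the two bridge options can be checked from the command
line (illustrative; these are the names iproute2 prints for the bridge
link options):

# ip -d link show br0 | grep -Eo 'vlan_filtering [0-9]+|vlan_default_pvid [0-9]+'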
Signed-off-by: Gris Ge <fge@redhat.com>
(cherry picked from commit 02c34d538c)
The supervision address is read-only. It is constructed by the kernel,
and only the last byte can be modified by setting multicast-spec, as
documented.
As 1.46 has not been released yet, we can still drop the whole API for
this setting property. We are keeping the NMDeviceHsr property, as it is
a nice-to-have for reading the value.
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1823
Fixes: 5426bdf4a1 ('HSR: add support to HSR/PRP interface')
This patch adds support for HSR/PRP interfaces. Please note that PRP
devices are represented as HSR too: the protocols are different, but in
the kernel they are integrated into the same driver.
HSR/PRP is a network protocol standard for Ethernet that provides
seamless failover against failure of any network component. It intends
to be transparent to the application. These protocols are useful for
applications that require high availability and a short switchover time,
e.g. electrical substations or high-power inverters.
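Outside of NM, such devices are created with iproute2 roughly as follows
(interface names are placeholders; proto 1 selects PRP):

# ip link add name hsr0 type hsr slave1 eth1 slave2 eth2 supervision 45
# ip link add name prp0 type hsr slave1 eth3 slave2 eth4 proto 1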
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1791
glib's MIN()/MAX() will be replaced by NM_MIN()/NM_MAX().
There are however a few places where NM_MIN()/NM_MAX() cannot
be used.
Adjust those places to use NM_MIN_CONST()/NM_MAX_CONST() instead.
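Presumably the places in question need a constant expression:
NM_MIN()/NM_MAX() evaluate each argument only once through temporary
variables, so they do not expand to constant expressions, while the
_CONST variants do. A contrived illustration (not from the patch):

    #define BUF_A 16
    #define BUF_B 32

    static char buf[NM_MAX_CONST (BUF_A, BUF_B)]; /* OK: constant expression */
    /* static char bad[NM_MAX (BUF_A, BUF_B)];       would not compile here */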
sysfs is deprecated and the kernel will not add new bridge port options
to sysfs. Netlink is a stable API and is therefore the right way to
communicate with the kernel in order to set the link options.
Remove all the code that was added for the CSME coexistence.
The Intel WiFi team can't commit to when, if at all, this feature will
be completely integrated and tested in NetworkManager.
The preferred solution for now is the one that involves the kernel
only.
Remove the code that was merged so far.
The macro uses g_alloca(). Using alloca() is potentially dangerous. For
example, it must never be used in an unbounded loop. This should be
immediately obvious from the name, so we don't accidentally use it
in the wrong context.
All other alloca() macros should have such a prefix already. And they
always have to be macros, because you couldn't use alloca() to return
memory from a function.
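An illustrative example (not from the patch) of the unbounded-loop
problem that the prefix is meant to flag:

    static void
    process_all (const char *const *items, gsize n_items)
    {
        gsize i;

        for (i = 0; i < n_items; i++) {
            /* BAD: each iteration allocates on the caller's stack frame
             * and nothing is released until the function returns, so a
             * large n_items can overflow the stack. */
            char *copy = g_alloca (strlen (items[i]) + 1);

            strcpy (copy, items[i]);
            /* ... use @copy ... */
        }
    }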