When unenslaving an interface from a bridge, the kernel sends an
RTM_DELLINK message with ifi_family AF_BRIDGE. We only care about
regular RTM_DELLINK/RTM_NEWLINK messages, so ignore all messages whose
ifi_family is not AF_UNSPEC.
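A minimal sketch of the filter in the event handler, assuming a libnl3
callback that receives the raw message (the surrounding handler and its
return convention are illustrative):

    /* ignore link notifications that are not AF_UNSPEC, e.g. the
     * AF_BRIDGE RTM_DELLINK sent when unenslaving an interface */
    struct ifinfomsg *ifi = nlmsg_data (nlmsg_hdr (msg));

    if (ifi->ifi_family != AF_UNSPEC)
        return NL_OK; /* not a regular link message */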
There is also test_nl_bugs_spuroius_dellink(), added in commit
8a87a91813 for the related bug rh#1302037.
That workaround was masking a bug in NetworkManager (failing to ignore
AF_BRIDGE messages) and can now be removed as well.
Also downgrade <error> logging messages to <warn>. An external
condition should never be able to trigger an <error>, and clearly
there is always an external race that can cause a netlink command
to fail.
Let nm_platform_ip_route_add() and friends return an NMPlatformError
failure reason.
Also, do_add_addrroute() did not return the response from kernel.
Instead, it determined success or failure based on the presence of the
object in the cache. That is racy and does not allow reporting a
failure reason from kernel.
Instead, determine success solely based on the netlink reply from
kernel. The received errno shall be authoritative; there is no need
to second-guess the response.
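A sketch of deriving the result from the nlmsgerr payload of the
NLMSG_ERROR reply; the NM_PLATFORM_ERROR_* values other than
"unspecified" are assumptions about the enum:

    /* nlmsgerr.error carries a negative errno (0 is the plain ACK) */
    static NMPlatformError
    nl_errno_to_platform_error (int error)
    {
        switch (-error) {
        case 0:      return NM_PLATFORM_ERROR_SUCCESS;
        case EEXIST: return NM_PLATFORM_ERROR_EXISTS;
        case ESRCH:  return NM_PLATFORM_ERROR_NOT_FOUND;
        default:     return NM_PLATFORM_ERROR_UNSPECIFIED;
        }
    }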
There is a problem: netlink is not a reliable protocol. In case
of a receive buffer overflow, the response is lost and we don't know
whether the command succeeded (it likely did). It's unclear how to fix
that, but for now just return an "unspecified" error. We probably
avoid the issue already by using a huge buffer size.
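For example, libnl3's nl_socket_set_buffer_size() lets us request a
generous receive buffer (the socket field and the size below are
illustrative):

    /* make receive buffer overruns, and thus lost replies, unlikely */
    nl_socket_set_buffer_size (priv->nlh, 8 * 1024 * 1024, 0);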
Also, downgrade the error message to <warn> level. <error> is really
for bugs only.
Inspired by iproute2. As such, don't use libnl3's "struct nl_msg", but
add _nl_addattr_l() and use a stack-allocated "struct nlmsghdr". With
this, we are closer to the raw netlink API. It really is simple enough.
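A sketch of the pattern, closely following iproute2's addattr_l(); the
request layout and buffer size are illustrative:

    static gboolean
    _nl_addattr_l (struct nlmsghdr *n, int maxlen, int type,
                   const void *data, int alen)
    {
        int len = RTA_LENGTH (alen);
        struct rtattr *rta;

        if (NLMSG_ALIGN (n->nlmsg_len) + RTA_ALIGN (len) > maxlen)
            return FALSE;

        rta = (struct rtattr *) (((char *) n) + NLMSG_ALIGN (n->nlmsg_len));
        rta->rta_type = type;
        rta->rta_len = len;
        memcpy (RTA_DATA (rta), data, alen);
        n->nlmsg_len = NLMSG_ALIGN (n->nlmsg_len) + RTA_ALIGN (len);
        return TRUE;
    }

    /* usage: the whole request lives on the stack */
    int ifindex = 1; /* example */
    struct {
        struct nlmsghdr n;
        struct rtmsg    r;
        char            buf[512];
    } req = {
        .n.nlmsg_len   = NLMSG_LENGTH (sizeof (struct rtmsg)),
        .n.nlmsg_type  = RTM_NEWROUTE,
        .n.nlmsg_flags = NLM_F_REQUEST | NLM_F_CREATE | NLM_F_EXCL,
    };

    _nl_addattr_l (&req.n, sizeof (req), RTA_OIF, &ifindex, sizeof (int));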
The complicated part of the patch is that we re-use the existing netlink
socket for events. Hence, we must process the socket via our common
event_handler_recvmsgs(). That also means that we get the netlink
response a few layers down the stack and have to return the result
via DelayedActionWaitForNlResponseData.
This frees us from worrying about how relevant chunks of compat code
for older kernels are when deciding whether they are worth
supporting/testing. As if we actually were testing on old kernels.
- cache the result in NMPlatformPrivate. No need to call the virtual
function every time. The result is never going to change.
- if we are unable to detect support, assume support. Those features
were added to the kernel quite a while ago, so we should default to
"support". Note that we detect support based on the presence or
absence of certain netlink flags, which means we can still detect
missing support. The only moment we actually use the fallback value is
when we have not yet encountered an RTM_NEWADDR or AF_INET6
IFLA_AF_SPEC message, which would be very unusual, because we fill the
cache initially and usually have some addresses there.
- for no strong reason, track "undetected" as numerical value zero,
and "support"/"no-support" as 1/-1 (see the sketch after this list).
We already did that for _support_user_ipv6ll, so this just unifies the
implementations. A minor benefit is that this puts @_support_user_ipv6ll
into the BSS section and allows us to omit initializing
priv->check_support_user_ipv6ll_cached in the platform's constructor.
- detect _support_kernel_extended_ifa_flags also based on IPv4
RTM_NEWADDR messages. Originally, extended flags were added for IPv6
only, and later for IPv4 as well. Once we see an IPv4 message with
IFA_FLAGS, we know we have support.
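A minimal sketch of the tri-state pattern (the names are illustrative):

    /* 0 = undetected (the BSS default), 1 = support, -1 = no support */
    static int _support_kernel_extended_ifa_flags;

    static void
    _support_kernel_extended_ifa_flags_detect (gboolean supported)
    {
        if (G_UNLIKELY (_support_kernel_extended_ifa_flags == 0))
            _support_kernel_extended_ifa_flags = supported ? 1 : -1;
    }

    static gboolean
    _support_kernel_extended_ifa_flags_get (void)
    {
        /* while undetected, fall back to assuming support */
        return _support_kernel_extended_ifa_flags >= 0;
    }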
Deleting an IPv4 route with metric zero will either delete the intended
route, or, if no such route exists, delete another existing route with
a different metric (but otherwise matching parameters).
I think this is a shortcoming of the kernel API. It allows omitting
the metric during delete. However, it gives no way to explicitly
delete an IPv4 route with metric zero, and no other.
Since we only delete routes that we obtained from the platform cache
in the first place, we don't need the workaround. Of course, there
is still a race: the platform cache might be out of date at the
moment we attempt to delete the route. Or the cache might be
inconsistent, both cases leading to deletion of the wrong route.
But such cases should be very rare, and only occur when the user
changes the routing table outside of NM.
Until now, NetworkManager's platform cache for routes used the
quadruple network/plen,metric,ifindex for equality. That is not
kernel's understanding of how routes behave. For example, with
`ip route append` you can add two IPv4 routes that only differ by
their gateway. To the previous form of the platform cache, these two
routes would wrongly look identical, and the cache could not contain
both routes. This also easily leads to cache inconsistencies.
Now that we have NM_PLATFORM_IP_ROUTE_CMP_TYPE_ID, fix the route's
compare operator to match kernel's.
Well, not entirely. Kernel understands more properties for routes than
NetworkManager does. Some of these properties may also be part of the
ID according to kernel. To NetworkManager such routes would still look
identical, as they only differ in a property that is not understood.
This can still cause cache inconsistencies. The only fix is to add
support for all these properties in NetworkManager as well. However,
the problem is less serious now, because with this commit we support
several of the more important properties.
See also the related kernel bug rh#1337855.
Another difficulty is that `ip route replace` and `ip route change`
change an existing route. The replaced route has the same
NM_PLATFORM_IP_ROUTE_CMP_TYPE_WEAK_ID, but differs in the actual
NM_PLATFORM_IP_ROUTE_CMP_TYPE_ID:
# ip -d -4 route show dev v
# ip monitor route &
# ip route add 192.168.5.0/24 dev v
192.168.5.0/24 dev v scope link
# ip route change 192.168.5.0/24 dev v scope 10
192.168.5.0/24 dev v scope 10
# ip -d -4 route show dev v
unicast 192.168.5.0/24 proto boot scope 10
Note that we only got one RTM_NEWROUTE message, although from
NMPCache's point of view, a new route (with a particular ID) was added
and another route (with a different ID) was deleted. The cumbersome
workaround is to keep an ordered list of the routes and figure out
which route was replaced in response to an RTM_NEWROUTE. In the
absence of bugs, this should work fine. However, as we only rely on
events, we might wrongly introduce a cache inconsistency as well. See
the related bug rh#1337860.
Also drop nm_platform_ip4_route_get() and the like. The ID of routes
is complex, so it makes little sense to look up a route directly.
NMPCache can preserve the order of the objects. Until now, however,
the order was arbitrary. Soon we will need to preserve the order of
routes.
During a dump, force appending new objects at the end. That ensures
correct ordering during the dump.
Note that we track objects in several distinct indexes. Those
partition the set of all objects. Outside of a dump, when receiving
events about new objects (e.g. RTM_NEWROUTE), it is very unclear at
which place the new object should be sorted. It is especially unclear
as an object might move from one partition (of an index) to another.
In general, a deterministic order will only be useful in one particular
instance: the NMP_CACHE_ID_TYPE_ROUTES_BY_DESTINATION index for routes.
In this case, we will ensure a particular order of the routes.
The new device type represents a PPP interface, and will implement the
activation of new-style PPPoE connections, i.e. the ones that don't
claim the parent device.
Via the flags of the RTM_NEWROUTE netlink message, kernel and iproute2
support several variants of adding a route:
- ip route add
- ip route change
- ip route replace
- ip route prepend
- ip route append
- ip route test
Previously, our nm_platform_ip4_route_add() function was basically
`ip route replace`. In the future, we should rather use `ip route
append` instead.
Anyway, expose the netlink message flags in the API. This allows using
the various forms, and also makes it more apparent to the user that
they even exist.
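For reference, this is how iproute2 maps those commands onto the
nlmsg_flags of the request, as far as I can tell from its sources
(treat the mapping as illustrative):

    ip route add      NLM_F_CREATE | NLM_F_EXCL
    ip route change   NLM_F_REPLACE
    ip route replace  NLM_F_CREATE | NLM_F_REPLACE
    ip route prepend  NLM_F_CREATE
    ip route append   NLM_F_CREATE | NLM_F_APPEND
    ip route test     NLM_F_EXCL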
- kernel ignores rtm_tos for IPv6 routes. While iproute2 accepts it,
let libnm reject the TOS attribute for IPv6 routes as well.
- move the tos field from NMPlatformIPRoute to NMPlatformIP4Route.
- the tos field is part of the weak-id of an IPv4 route. Meaning,
`ip route add` can add routes that only differ by their TOS.
The mss (advmss, RTA_METRICS.RTAX_ADVMSS) is in a way part of the ID
for IPv4 routes. That is, you can add multiple IPv4 routes that only
differ by mss.
On the other hand, that is not the case for IPv6: two IPv6 routes
that only differ by mss are considered the same.
Another issue is that you cannot selectively delete an IPv4 route
based on the mss:
ip netns del x
ip netns add x
IP() {
    ip netns exec x ip "$@"
}
IP link add type veth
IP link set veth0 name v
IP link set veth1 up
IP link set v up
IP route append 192.168.7.0/24 dev v advmss 6
IP route append 192.168.7.0/24 dev v advmss 7
IP -d route show dev v
IP route delete 192.168.7.0/24 dev v advmss 7
IP -d route show dev v
It seems that when deleting routes, kernel ignores the mss (which
doesn't really matter for IPv6, but does for IPv4).
Refactor _nl_msg_new_route() to obtain the route scope (rtm_scope)
from the NMPObject, instead of from a separate argument.
That way, when deleting an IPv4 route, we don't pick the first route
that matches (RT_SCOPE_NOWHERE), but use the actual scope of the route
that we want to delete. That matters if there is more than one
otherwise identical route that only differs by scope.
For kernel, the scope of IPv6 routes is always global
(RT_SCOPE_UNIVERSE).
Also, during ip4_route_add(), initialize the intermediate @obj to have
the values as we expect them after adding the route. That is necessary
to use it in _nl_msg_new_route(), but also nicer for consistency.
Also, move the scope_inv field in NMPlatformIP4Route to let the other
in_addr_t fields lie side by side.
_nl_msg_new_route() should not take extra arguments, but instead
use all parameters from the NMPObject argument. This will allow
nm_platform_ip_route_delete() to pick the exact route that should be
deleted.
Also, in ip4_route_add()/ip6_route_add(), keep the stack-allocated
@obj object consistent with what we expect to add. That is, set
the rt_source field to the value the route will have after kernel
adds it. That might be necessary, because do_add_addrroute()
searches the cache for @obj.
Unlike addresses, routes have no ID. When deleting a route, you cannot
just specify certain properties like network/plen,metric.
Well, actually you can specify only certain properties, but then
kernel will treat the unspecified properties as wildcards and delete
the first matching route. That is not something we want, because we
need to be in control of which exact route shall be deleted.
Also, rtm_tos *must* match. Even if we liked the wildcard behavior,
we would need to pass the TOS to nm_platform_ip4_route_delete() to be
able to delete routes with non-zero TOS. So, while certain properties
may be omitted, some must not be. See how test_ip4_route_options() was
broken.
For NetworkManager it only ever makes sense to call delete on a route
if the route is already fully known. Which means, we only delete
routes that we already have in the platform cache (otherwise, how
would we know that there is something to delete?). Because of that, no
longer have separate IPv4 and IPv6 functions. Instead, have
nm_platform_ip_route_delete(), which accepts a full NMPObject from the
platform cache.
The code in core doesn't yet make use of this new functionality. It
will in the future.
At least, this fixes deleting routes with differing TOS.
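A sketch of the reworked entry point (the exact prototype is an
assumption):

    /* one delete function for both address families; @obj is a route
     * object previously obtained from the platform cache */
    gboolean nm_platform_ip_route_delete (NMPlatform *self,
                                          const NMPObject *obj);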
The return value of the delete methods checks whether the object
was actually deleted. That is questionable behavior, because if the
netlink request succeeds, there is little point in second-guessing it
via the platform cache. As it is, it is racy.
Anyway, the previous value was totally wrong.
But this also uncovers another platform bug, which currently breaks
the route tests. It will be fixed next.
and refactor NMFakePlatform to also track links via NMPCache.
For one, NMFakePlatform now also exercises NMPCache, increasing the
coverage of what we care about.
Also, all our NMPlatform implementations now use NMPObject and
NMPCache. That means we can expose those as part of the public API.
Which is great, because callers can keep a reference to the NMPObject
and make use of generic functions like nmp_object_to_string().
And move some code from NMLinuxPlatform to NMPlatform, where it belongs.
The advantage is that we reuse (and test!) the NMPCache implementation for
tracking addresses.
Also, we now always expose proper NMPObjects from both linux and fake
platform.
For example,
obj = NMP_OBJECT_UP_CAST (nm_platform_ip4_address_get (...));
will work as expected. Also, the caller is now allowed by the
NMPlatform API to take and keep a reference to the returned objects.
Routes and addresses don't implement cmd_obj_is_visible(),
hence they are always visible, and NMP_CACHE_ID_TYPE_OBJECT_TYPE_VISIBLE_ONLY
is identical to NMP_CACHE_ID_TYPE_OBJECT_TYPE.
Only link objects can be alive but invisible. Still, drop the index
for looking up visible links entirely. Let callers do the filtering,
if they care.
Implement the reference counting of NMPObject as part of
NMDedupMultiObj and get rid of NMDedupMultiBox.
With this change, the NMPObject is aware of which NMDedupMultiIndex
instance it is tracked in.
- this saves an additional GSlice allocation for the NMDedupMultiBox.
- it is immediately known whether an NMPObject is tracked by a
certain NMDedupMultiIndex or not. This saves an additional hash
lookup.
- previously, when all idx-types ceased to reference an NMDedupMultiObj
instance, it was removed. Now, a tracked object stays in the
NMDedupMultiIndex until its last reference is deleted. This possibly
extends the lifetime of the object and we may reuse it better.
- it is no longer possible to add one object to more than one
NMDedupMultiIndex instance. As we anyway want only one instance to
deduplicate the objects, this is fine.
- the ref-counting implementation is now part of NMDedupMultiObj.
Previously, NMDedupMultiIndex could also track objects that were
not ref-counted. However, the object anyway *must* implement the
NMDedupMultiObj API, so this flexibility was unneeded and unused.
- a downside is that NMPObject grows by one pointer size, even if
it isn't tracked in the NMDedupMultiIndex. But we really want to
put all objects into the index for sharing and deduplication, so
this downside should be acceptable. Still, code like
nmp_object_stackinit*() needs to handle a larger object.
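A sketch of the resulting base object; the field names are
assumptions:

    /* every object now embeds its ref-count and, while tracked, a
     * pointer to the owning index (this is the extra pointer size) */
    typedef struct {
        const NMDedupMultiObjClass *klass;
        int ref_count;
        NMDedupMultiIndex *_multi_idx; /* non-NULL while tracked */
    } NMDedupMultiObj;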
NMPlatform's cache should be directly accessible to its users,
at least the NMPLookup part and the fact that the cache contains
ref-counted, immutable NMPObjects.
This allows users to inspect the cache with zero overhead. Meaning,
they can obtain an NMDedupMultiHeadEntry and iterate the objects
themselves. It also means they are free to take and keep references
to the NMPObject instances (of course, without modifying them!).
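A sketch of such zero-copy iteration; the lookup initializer and the
iteration macro are assumptions about the resulting API:

    NMPLookup lookup;
    const NMDedupMultiHeadEntry *head;
    NMDedupMultiIter iter;
    const NMPObject *obj;

    /* iterate all IPv4 routes of an interface without copying */
    nmp_lookup_init_object (&lookup, NMP_OBJECT_TYPE_IP4_ROUTE, ifindex);
    head = nm_platform_lookup (platform, &lookup);
    nmp_cache_iter_for_each (&iter, head, &obj) {
        /* @obj is immutable; nmp_object_ref() it if we keep it */
    }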
NMFakePlatform will use the very same cache. The fake platform should
only differ when modifying the objects.
Another reason why this makes sense: NMFakePlatform is for one a
test stub, but it also tests the behavior of platform itself. Using a
separate internal implementation for the caching is a pointless
exercise, because only the real NMPCache implementation matters for
production. So, either NMFakePlatform behaves identically, or it is
buggy. Reuse it.
Port fake platform's tracking of routes to NMPCache and move duplicate
code from NMLinuxPlatform to the base class.
This commit only ports IP routes, eventually also addresses and links
should be tracked via the NMPCache instance.
We want to expose the NMPLookup and NMDedupMultiHeadEntry to the users
of NMPlatform, so that they can iterate the cache directly.
That means NMPCache becomes an integral part of NMPlatform's API
and must also be implemented by NMFakePlatform.
We want to move the multi_idx from NMLinuxPlatform to NMPlatform,
so that it can be used by NMFakePlatform as well. For that, we need
to know whether NMPlatform will use udev or not. Add a constructor
property.
Rework the platform object cache to use NMDedupMultiIndex.
Already previously, NMPCache used NMMultiIndex and thus had
O(1) for most operations. What is new:
- Contrary to NMMultiIndex, NMDedupMultiIndex preserves the order of
the cached items. That is crucial to handle routes properly as kernel
will replace the first matching route based on network/plen/metric
properties. See related bug rh#1337855.
Without tracking the order of routes as they are exposed
by kernel, we cannot properly maintain the route cache.
- All NMPObject instances are now treated as immutable, refcounted
and deduplicated via NMDedupMultiIndex. This allows having
a global NMDedupMultiIndex that can be shared with
NMIP4Config and NMRouteManager. It also allows sharing the
objects themselves.
Immutable objects are so much nicer. We can get rid of the
update pre-hook callback, which was required previously because
we would mutate the object in place. Now, we can just update
the cache and compare obj_old and obj_new after the fact.
- NMMultiIndex was treated as an internal of NMPCache. On the other
hand, NMDedupMultiIndex exposes NMDedupMultiHeadEntry, which is
basically an object that allows iterating over all related
objects. That means we can now look up objects in the cache
and give the NMDedupMultiHeadEntry instance to the caller,
which can then iterate the list on its own -- without the need
to copy anything.
Currently, at various places we still create copies of lookup
results. That can be improved later.
The ability to share NMPObject instances should enable us to
significantly improve performance and scale with large numbers
of routes.
Of course there is a memory overhead of having an index for each list
entry. Each NMPObject may also require an NMDedupMultiEntry,
NMDedupMultiHeadEntry, and NMDedupMultiBox item, which are tracked
in a GHashTable. Optimally, one NMDedupMultiHeadEntry is the head
for multiple objects, and NMDedupMultiBox is able to deduplicate several
NMPObjects, so that there is a net saving.
Also, each object type has several indexes of type NMPCacheIdType.
So, worst case, an NMPlatformIP4Route in the platform cache is tracked
by 8 NMPCacheIdType indexes, for each of which we require an
NMDedupMultiEntry, plus the shared NMDedupMultiHeadEntry. The
NMDedupMultiBox instance is shared between the 8 indexes (and possibly
others).
We listen to all RTM_GETLINK messages to get updates on interface
status. Unfortunately, the wireless code in the kernel sends those
messages with wireless information included and all other information
excluded. When we receive such a message, we wipe out our valid cached
entry with a new object that is almost empty, because the netlink
message didn't contain the information.
The solution is to check that the incoming message contains the MTU
field: this field is always set for complete messages about interfaces
and is never set by the wireless code.
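A sketch of the check in the link parser, assuming the message's
attributes were parsed into tb[] (the surrounding code is
illustrative):

    /* a complete link message always carries IFLA_MTU; wireless
     * notifications don't, so don't let them replace the cached link */
    if (!tb[IFLA_MTU])
        return NULL; /* ignore the incomplete message */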
Signed-off-by: Nikolay Martynov <mar.kolya@gmail.com>
https://github.com/NetworkManager/NetworkManager/pull/17
NMPlatform, NMRouteManager and NMDefaultRouteManager are singleton
instances. Users of those are, for example, NMDevice, which registers
to GObject signals of both NMPlatform and NMRouteManager.
Hence, as NMDevice:dispose() disconnects the signal handlers, it must
ensure that those singleton instances live longer than the NMDevice
instance. That is usually accomplished by having users of singleton
instances own a reference to those instances.
For NMDevice that effectively means it shall own a reference to
several singletons.
NMPlatform, NMRouteManager, and NMDefaultRouteManager are all
per-namespace. In general it doesn't make sense to have more than
one instance of each per namespace. Note that currently we don't
support multiple namespaces yet. If we ever support multiple
namespaces, then an NMDevice would have a reference to all of these
manager instances. Hence, introduce a new class NMNetns which bundles
them together.
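A sketch of the bundling; the field names are assumptions:

    /* one NMNetns per namespace owns the per-namespace singletons, so
     * users like NMDevice only need to reference the NMNetns */
    struct _NMNetns {
        GObject parent;
        NMPlatform *platform;
        NMRouteManager *route_manager;
        NMDefaultRouteManager *default_route_manager;
    };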
(cherry picked from commit 0af2f5c28b)
Platform's add/remove operations accept a "network" argument.
Kernel requires that the host part (based on plen) is all zeroes.
For NetworkManager, we are more resilient regarding user
configuration.
Clean up the input argument already before calling _nl_msg_new_route().
Note that we use the same "network" argument to construct an obj_id
instance and to find the route in the cache (do_add_addrroute()).
Without cleaning the host part, the added object cannot be found
and the add-route command seemingly fails.
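A sketch of the cleanup, assuming a helper along the lines of NM's
nm_utils_ip4_address_clear_host_address():

    /* zero the host bits, so that the object we construct matches
     * what kernel will report back for the added route */
    network = nm_utils_ip4_address_clear_host_address (network, plen);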
(cherry picked from commit 11d8c41898)