When a WG connection connects to an IPv6 endpoint, configures a
default route, and firewalld is active with IPv6_rpfilter=yes, the
connection never handshakes and doesn't pass traffic. This is because
firewalld has an IPv6 reverse path filter which discards these packets.
Thus, we add some firewall rules whenever a WG connection is brought up
that ensure the conntrack mark and packet mark are copied over.
These rules are largely inspired by wg-quick:
https://git.zx2c4.com/wireguard-tools/tree/src/wg-quick/linux.bash?id=17c78d31c27a3c311a2ff42a881057753c6ef2a4#n221
DBus errors were not properly handled after DBus calls, which
caused a SIGSEGV. Now they are checked.
Fixes #1738
Fixes: b8714e86e4 ('dns: introduce configuration_serial support to the dnsconfd plugin')
Prevents NetworkManager from trying to determine the
transient hostname via DHCP or other means if "localhost"
is already configured as a static hostname, as the transient
hostname will be ignored by hostnamed if a static hostname
has already been set.
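For illustration only, the check involved is roughly along these lines
(the helper name and the exact set of localhost-style names are
assumptions, not the actual NetworkManager code):

  #include <glib.h>

  /* Hypothetical helper: TRUE if the static hostname is a localhost-style
   * name, in which case there is no point in probing DHCP or reverse DNS
   * for a transient hostname that hostnamed would ignore anyway. */
  static gboolean
  static_hostname_is_localhost (const char *hostname)
  {
      return hostname
             && (g_ascii_strcasecmp (hostname, "localhost") == 0
                 || g_ascii_strcasecmp (hostname, "localhost.localdomain") == 0);
  }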
This is done so that AddAndActivate() will return sensible errors in a
future patch that makes it support creating virtual devices.
In effect, all errors are logged in one place, and therefore the log levels
are different. I don't think we're losing anything of value by being
a little less verbose here.
Domains are exported via D-Bus and so they must be valid UTF-8.
RFC 1035 specifies that domain labels can contain any 8 bit values,
but also recommends that they follow the "preferred syntax" which only
allows letters, digits and hyphens.
Don't introduce a strict validation of the preferred syntax, but at
least discard non-UTF-8 search domains, as they will cause assertion
failures later when they are sent over D-Bus.
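For illustration, the filtering boils down to something like the
following sketch based on g_utf8_validate() (a sketch, not the actual
code added here):

  #include <glib.h>

  /* Keep only the search domains that are valid UTF-8; anything else
   * would later trigger assertion failures in the D-Bus marshalling. */
  static GPtrArray *
  filter_utf8_domains (const char *const *domains)
  {
      GPtrArray *valid = g_ptr_array_new_with_free_func (g_free);

      for (; domains && *domains; domains++) {
          if (g_utf8_validate (*domains, -1, NULL))
              g_ptr_array_add (valid, g_strdup (*domains));
      }
      return valid;
  }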
Skip the configuration of the MPTCP endpoint when the address is in
the l3cd but is not yet configured in the platform. This typically
happens when IPv4 DAD is enabled and the address is being probed.
If we configure the endpoint without the address set, the kernel will
try to use the endpoint immediately but it will fail. Then, the
endpoint will not be used ever again after the address is added.
The name suggests that the function always removes all the watchers
with the given tag; instead it removes only "dirty" ones when the
"all" parameter is FALSE. Split the function in two variants.
When the Dnsconfd service is enabled but not started, NetworkManager
should attempt to start it through DBus at least once.
Fixes: c6e1925dec ('dns: Add dnsconfd DNS plugin')
Preventing the activation of unavailable devices for all device types is
too aggressive and leads to race conditions, e.g. when a non-virtual bond
port gets a carrier, preventing the device from being a good candidate for
the connection.
Instead, enforce this check only on OVS interfaces, where NetworkManager
just makes sure that ovsdb->ready is set to TRUE.
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/2139
Fixes: 774badb151 ('core: prevent the activation of unavailable devices')
Stop passing "connection-*" entries in the update method to
dnsconfd. The plugin tries to determine the connection from the
ifindex, but at the moment it's not possible to do that correctly,
because the same ifindex can be used at the same time, e.g. by a
policy-based VPN like ipsec and a normal device. Instead, it should be
NM that
explicitly passes the information about the connection to the DNS
plugin. Anyway, these variables are not used at the moment by
dnsconfd.
Fixes: c6e1925dec ('dns: Add dnsconfd DNS plugin')
After every state change of the plugin there should be an invocation
of _nm_dns_plugin_update_pending_maybe_changed() to re-evaluate
whether we are waiting for an update. send_dnsconfd_update() doesn't
change the state, so no such invocation happens automatically and we
need to check again explicitly afterwards.
When calling activate_port_or_children_connections() we are unblocking
the ports and children but we are not resetting the number of retries if
it is an internal activation.
This is wrong: even if it's an internal activation, the number of
retries should be reset. It won't interfere with other blocking reasons
like USER_REQUESTED or MISSING_SECRETS.
When autoconnecting ports of a controller, we look for all candidate
(device,connection) tuples through the following call trace:
-> autoconnect_ports()
-> find_ports()
-> nm_manager_get_best_device_for_connection()
-> nm_device_check_connection_available()
-> _nm_device_check_connection_available()
The last function checks that a specific device is available to be
activated with the given connection. For virtual devices, it only
checks that the device is compatible with the connection based on the
device type and characteristics, without considering any live network
information.
For OVS interfaces, this doesn't work as expected. During startup, NM
performs a cleanup of the ovsdb to remove entries that were previously
added by NM. When the cleanup has finished, NMOvsdb sets the "ready"
flag and is ready to start the activation of new OVS interfaces. With
the current mechanism, it is possible that an OVS-interface connection
gets activated via the autoconnect-ports mechanism without checking
the "ready" flag.
Fix that by also checking that the device is available for activation.
Rename "unavailable_devices" to "exclude_devices", as the
"unavailable" term has a specific, different meaning in NetworkManager
(i.e. the device is in the UNAVAILABLE state). Also, use
nm_g_hash_table_contains() when needed.
The current code seems like a hodgepodge of complex ideas, partially
copied from systemd, but then subtly different, and it's a mess. Let's
simplify this drastically.
First, assume that getrandom() is always available. If the kernel is too
old, we have an unoptimized slowpath for still supporting ancient
kernels, a path that should be removed at some point. If getrandom()
isn't available and the fallback path doesn't work, the system has much
larger problems, so just crash. This should basically never happen.
getrandom() and having randomness available in general is a critical
system API that should be expected to be available on any functioning
system.
Second, assume that the rng is initialized, so that asking for random
numbers should never block. This is virtually always true on modern
kernels. On ancient kernels, it usually becomes true. But, more
importantly, this is not the responsibility of various daemons, even
ones that run at boot. Instead, this is something for the kernel and/or
init to ensure.
Putting these together, we adopt new behavior:
- First, try getrandom(..., ..., 0). The 0 flags field means that this
  call will only return good random bytes, not insecure ones.
- If this fails for some reason that isn't ENOSYS, crash.
- If this fails due to ENOSYS, poll on /dev/random until 1 byte is
  available, suggesting that subsequent reads from the rng will almost
  certainly have good random bytes. If this fails, crash. Then, read
  from /dev/urandom. If this fails, crash.
We don't bother caching when getrandom() returns ENOSYS. We don't apply
any other fancy optimizations to the slow fallback path. We keep that as
barebones and minimal as we can. It works. It's for ancient kernels. It
should be removed soon. It's not worth spending cycles over. Instead,
the goal is to eventually reduce all of this down to a simple boring
call to getrandom(..., ..., 0).
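For reference, a simplified sketch of the resulting logic (no EINTR or
short-read handling, and the function name is made up for illustration):

  #include <errno.h>
  #include <fcntl.h>
  #include <poll.h>
  #include <stdlib.h>
  #include <sys/random.h>
  #include <unistd.h>

  static void
  get_good_random_bytes (void *buf, size_t len)
  {
      struct pollfd pfd;
      ssize_t       r;
      int           fd;

      /* Step 1: getrandom() with flags 0 returns only good random bytes. */
      r = getrandom (buf, len, 0);
      if (r == (ssize_t) len)
          return;
      if (r >= 0 || errno != ENOSYS)
          abort (); /* anything but "syscall not implemented" is fatal */

      /* Step 2 (ancient kernels): wait until /dev/random is readable,
       * suggesting the rng is seeded, then read from /dev/urandom. */
      pfd.fd     = open ("/dev/random", O_RDONLY | O_CLOEXEC);
      pfd.events = POLLIN;
      if (pfd.fd < 0 || poll (&pfd, 1, -1) <= 0)
          abort ();
      close (pfd.fd);

      fd = open ("/dev/urandom", O_RDONLY | O_CLOEXEC);
      if (fd < 0 || read (fd, buf, len) != (ssize_t) len)
          abort ();
      close (fd);
  }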
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/2127
Add the DNS routing rules explicitly instead of tracking them via the
NMGlobalTracker mechanism. Since we do not plan to ever remove them,
there is no reason to track the rules. Also, the current
implementation is buggy because in some situations the rules are
wrongly removed when they should not be.
Fixes: bf3ecd9031 ('l3cfg: fix DNS routes')
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/2125
"configuration_serial" dbus property ensures that the plugin
can mark update 'not pending' when the update is trully finished.
This mechanism exists because of underlying problem of having
to restart, or perform similarly time consuming operation, to change
certain configuration parameters of resolver. If Dnsconfd would
block the update call until the update is finished, we could not
respond to any other requests until the call is finished.
Resolve-mode allows the user to specify how the global-dns domains
and the DNS connection information should be merged and used.
Certification-authority allows the user to specify the certification
authority that should be used to verify the certificates of encrypted
DNS servers.
If the connection didn't exist in advance, there's no unrealized device,
and find_device_by_iface() is not going to get us one.
Call system_create_virtual_device() after nm_utils_complete_generic()
completes the connection for virtual devices. Make sure we do proper
cleanup if we happen to fail the activation later, so that the device
doesn't end up hanging there.
Make validate_activation_request() only do the validation -- split the
determination of the device into find_device_for_activation().
The point of this is to be able to complete the connection and actually
create a virtual device after the validation.
I believe this is also somewhat easier to follow now that the procedure
does what its name says.
The point is to get rid of device/connection type specific arguments, to
eventually be able to complete the connection on AddAndActivate before knowing
which factory is going to take care of creating the device.
Aside from that, the whole thing is pretty awful -- with complicated
macros and variadic arguments (ugh). Let's get rid of that.
All of these wrongly assert that a connection has a particular
setting. On AddAndActivate, the connection can be pretty much empty:
impl_manager_add_and_activate_connection ()
validate_activation_request ()
nm_manager_get_best_device_for_connection ()
iface = nm_manager_get_connection_iface ()
find_parent_device_for_connection ()
nm_device_factory_get_connection_parent () <====== *shriek*
nm_device_factory_get_connection_iface ()
find_device_by_iface (iface)
nm_device_complete_connection ()
Remove those assertions.
Some of them are wrong: they assert a connection has a particular
setting even though this can be called on AddAndActivate against a
connection that is not complete or normalized:
impl_manager_add_and_activate_connection ()
validate_activation_request ()
nm_manager_get_best_device_for_connection ()
iface = nm_manager_get_connection_iface ()
find_parent_device_for_connection ()
nm_device_factory_get_connection_parent ()
nm_device_factory_get_connection_iface () <====== here
find_device_by_iface (iface)
nm_device_complete_connection ()
Fix those by removing the assertions.
Some of them also fall back to just calling
nm_connection_get_interface_name(), which is a pretty useless thing to do,
because nm_device_factory_get_connection_iface() only calls the
device-specific routine if nm_connection_get_interface_name()
doesn't return anything, to give the factory a chance to make up a name
(like <parent>.<vlan-id> for Vlan) on its own. Drop those.
The current approach is flawed. During a commit of the L3
configuration we do a RTM_GETROUTE to find the next-hop to the DNS
server on the current interface, in order to create the DNS route to
inject into the l3cd. However, we haven't added routes to kernel yet
and so the result of the RTM_GETROUTE is going to be wrong.
In some cases, for example when IPv4 DAD is enabled, the bug can't be
easily noticed because we perform multiple commits for the interface,
and the regular routes are already set in kernel from the 2nd commit
on.
To fix the problem, do the following: during a commit we first add
addresses and routes to platform. Then, we create a list of DNS routes
to configure, we collect the old DNS routes, and do a comparison. If
they changed, we need to add the DNS routes to platform in a 2nd step.
Note that in the previous approach we tracked the routes in the
committed-l3cd object of the l3cfg, and so they were applied to kernel
automatically. Because of the 2-step requirement, that no longer works
and we must apply the DNS routes manually.
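Schematically, the new second step looks roughly like this (all names
are hypothetical and routes are reduced to plain strings; this only
illustrates the compare-then-apply structure, not the actual platform
code):

  #include <glib.h>
  #include <stdio.h>

  /* Stand-in for the real platform call; here it only prints the route. */
  static void
  platform_add_dns_route (const char *route)
  {
      printf ("adding DNS route: %s\n", route);
  }

  static gboolean
  route_lists_equal (GPtrArray *a, GPtrArray *b)
  {
      guint i;

      if (a->len != b->len)
          return FALSE;
      for (i = 0; i < a->len; i++) {
          if (!g_str_equal (a->pdata[i], b->pdata[i]))
              return FALSE;
      }
      return TRUE;
  }

  /* Step 1 (not shown): addresses and regular routes are committed first,
   * so the RTM_GETROUTE lookups used to compute "new_routes" see them.
   * Step 2: apply the DNS routes manually, but only if they changed. */
  static void
  commit_dns_routes (GPtrArray *old_routes, GPtrArray *new_routes)
  {
      guint i;

      if (route_lists_equal (old_routes, new_routes))
          return;
      for (i = 0; i < new_routes->len; i++)
          platform_add_dns_route (new_routes->pdata[i]);
  }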
Fixes: 5449b18a94 ('core: support automatically adding DNS routes')