This signal notifies about the "sharing state", that is, whether there
is at least one active shared connection. Each device informs
nm_manager when a shared connection is activated or deactivated, and
nm_manager emits this signal when the first shared connection is
activated or the last one is deactivated.
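Schematically, the counting logic is a minimal sketch like the
following (illustrative names only, not NM's actual code; assumes
GLib):

  #include <glib.h>

  typedef struct {
      guint n_shared; /* currently active shared connections */
  } Manager;

  /* stand-in for emitting the new signal on nm_manager */
  static void
  emit_sharing_state_changed (Manager *self, gboolean sharing)
  {
      g_message ("sharing state: %s", sharing ? "on" : "off");
  }

  /* called by each device when one of its shared connections is
   * activated (TRUE) or deactivated (FALSE) */
  static void
  manager_shared_connection_changed (Manager *self, gboolean activated)
  {
      if (activated) {
          if (self->n_shared++ == 0)
              emit_sharing_state_changed (self, TRUE);  /* first one */
      } else {
          g_return_if_fail (self->n_shared > 0);
          if (--self->n_shared == 0)
              emit_sharing_state_changed (self, FALSE); /* last one */
      }
  }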
For now we're only interested in IPv4 forwarding, as it's the only one
we need to track from nm_device (in the following commits).
Fixes: a8a2e6d727 ('ip-config: Support configuring per-device IPv4 sysctl forwarding option')
The flag is used for both the sleeping and the networking-disabled
conditions. This is because they share logic internally, but that is
not obvious to users and has caused confusion in the past when
investigating why devices didn't become managed. Make it explicit that
it can be for either reason.
Actually, it would be better to create two separate flags, and that
doesn't seem complex, but it's not worth risking new bugs for so
little benefit.
Logs before:
device (enp4s0): state change: disconnected -> unmanaged (reason 'unmanaged-sleeping' ...
Logs after:
device (enp4s0): state change: disconnected -> unmanaged (reason 'unmanaged-nm-disabled' ...
When we disable networking with `nmcli networking off`, the reason that
is logged is "sleeping". Explain instead that networking is disabled.
Before:
device (lo): state change: activated -> deactivating (reason 'sleeping' ...
After:
device (lo): state change: activated -> deactivating (reason 'networking-off' ...
When we do `nmcli networking off`, the state is shown as "sleeping".
This is confusing, and the only reason for it is that internally we
share the code that handles both situations.
Rename the state to the more generic name "disabled", a situation that
can happen because of either sleeping or networking off.
Clients cannot tell the exact reason from the NMState value alone, but
it's better that they show "network off", as that is the most common
reason they will have to display. If the system is suspending, the
state will be visible only for a short period of time, and showing
"network off" is not wrong because that's what NM has done in response
to the suspend.
In the logs, let's make explicit the exact reason why the state is
changing to DISABLED: sleeping or networking off.
Logs before:
manager: disable requested (sleeping: no enabled: yes)
manager: NetworkManager state is now ASLEEP
Logs after:
manager: disable requested (sleeping: no enabled: yes)
manager: NetworkManager state is now DISABLED (NETWORKING OFF)
State before:
$ nmcli general
STATE ...
asleep ...
State after:
$ nmcli general
STATE ...
network off ...
Add a new capability to indicate that NetworkManager supports the
"sriov.preserve-on-down" connection property. With this, clients can
set the property only when supported, without the risk of creating an
invalid connection.
(cherry picked from commit 8e40f7e289)
This commit adds NM_VERSION_INFO_CAPABILITY_IPV4_FORWARDING to the
VersionInfo D-Bus property, allowing clients such as nmstate to check
NetworkManager's support for configuring the per-device IPv4 sysctl
forwarding setting directly via the capabilities bitmask, instead of
relying on NetworkManager version comparisons.
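A client-side check could look like this sketch, assuming
VersionInfo's documented layout (element 0 is the encoded daemon
version, the following elements carry 32 capability bits each); the
helper name is illustrative:

  #include <glib.h>

  /* Returns whether capability number `cap` is set in the VersionInfo
   * array as read from D-Bus. */
  static gboolean
  version_info_has_capability (const guint32 *version_info, gsize len, guint cap)
  {
      gsize idx = 1 + (cap / 32);

      return idx < len && (version_info[idx] & (((guint32) 1) << (cap % 32))) != 0;
  }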
(cherry picked from commit 6a13e8d369)
This is done so that AddAndActivate() will return sensible errors in a
future patch that makes it support creating virtual devices.
In effect, all errors are now logged in one place, which is why the
log levels are different. I don't think we're losing anything of value
by being a little less verbose here.
(cherry picked from commit 45d82f720c)
Preventing the activation of unavailable devices for all device types is
too aggressive and leads to race conditions, e.g. when a non-virtual bond
port gets a carrier, preventing the device from being a good candidate
for the connection.
Instead, enforce this check only on OVS interfaces, where NetworkManager
just needs to make sure that ovsdb->ready is set to TRUE.
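The resulting check is conceptually like this sketch (hypothetical
field names, not NM's actual code):

  #include <glib.h>

  typedef struct {
      gboolean is_ovs_interface;
      gboolean ovsdb_ready; /* NMOvsdb finished its startup cleanup */
  } Device;

  /* Only OVS interfaces need to wait for ovsdb; other device types are
   * no longer blocked, avoiding the bond-port race described above. */
  static gboolean
  device_available_for_activation (const Device *device)
  {
      return !device->is_ovs_interface || device->ovsdb_ready;
  }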
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/2139
Fixes: 774badb151 ('core: prevent the activation of unavailable devices')
(cherry picked from commit a1c05d2ce6)
When autoconnecting ports of a controller, we look for all candidate
(device,connection) tuples through the following call trace:
-> autoconnect_ports()
   -> find_ports()
      -> nm_manager_get_best_device_for_connection()
         -> nm_device_check_connection_available()
            -> _nm_device_check_connection_available()
The last function checks that a specific device is available to be
activated with the given connection. For virtual devices, it only
checks that the device is compatible with the connection based on the
device type and characteristics, without considering any live network
information.
For OVS interfaces, this doesn't work as expected. During startup, NM
performs a cleanup of the ovsdb to remove entries that were previously
added by NM. When the cleanup finishes, NMOvsdb sets the "ready" flag
and is ready to start the activation of new OVS interfaces. With the
current mechanism, it is possible that an OVS-interface connection
gets activated via the autoconnect-ports mechanism without checking
the "ready" flag.
Fix that by also checking that the device is available for activation.
Rename "unavailable_devices" to "exclude_devices", as the
"unavailable" term has a specific, different meaning in NetworkManager
(i.e. the device is in the UNAVAILABLE state). Also, use
nm_g_hash_table_contains() when needed.
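As a sketch, such a NULL-safe helper boils down to the following (NM's
actual nm_g_hash_table_contains() may differ in details):

  #include <glib.h>

  /* like g_hash_table_contains(), but treats a NULL table as empty */
  static inline gboolean
  hash_table_contains0 (GHashTable *hash, gconstpointer key)
  {
      return hash != NULL && g_hash_table_contains (hash, key);
  }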
If the connection didn't exist in advance, there's no unrealized device,
and find_device_by_iface() is not going to get us one.
Call system_create_virtual_device() after nm_utils_complete_generic()
completes the connection for virtual devices. Make sure we do proper
cleanup if we happen to fail the activation later, so that the device
doesn't end up hanging there.
Make validate_activation_request() only do the validation -- split the
determination of the device into find_device_for_activation().
The point of this is to be able to complete the connection and actually
create a virtual device after the validation.
I believe this is also somewhat easier to follow now that the procedure
does what its name says.
The point is to get rid of device/connection-type-specific arguments, to
eventually be able to complete the connection on AddAndActivate before
knowing which factory is going to take care of creating the device.
Aside from that, the whole thing is pretty awful -- with complicated
macros and variadic arguments (ugh). Let's get rid of that.
By default, on reapply we were only syncing the main routing table. As
a result, routes added by NM to other tables were not removed on
reapply. This was done to preserve routes added externally, but routes
added by NM itself should be removed.
Add a new route-table syncing mode, "main + NM routes". This mode keeps
the normal behaviour of completely syncing the main table and, for
other tables, removes only routes that were added by us, leaving the
rest untouched. Use this mode by default, as this is what a user
would expect on reapply.
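An illustrative sketch of the removal decision (names and types are
hypothetical, not NM's real ones):

  #include <glib.h>

  typedef enum {
      SYNC_MODE_MAIN,         /* old default: fully sync only the main table */
      SYNC_MODE_MAIN_PLUS_NM, /* new default: also prune NM-added routes elsewhere */
  } SyncMode;

  typedef struct {
      guint32  table;       /* routing table; 254 is "main" on Linux */
      gboolean in_profile;  /* route is (still) wanted by the profile */
      gboolean added_by_nm; /* NM remembers having added this route */
  } Route;

  static gboolean
  route_should_be_removed (const Route *r, SyncMode mode)
  {
      if (r->in_profile)
          return FALSE;
      if (r->table == 254)
          return TRUE; /* main table is always synced completely */
      return mode == SYNC_MODE_MAIN_PLUS_NM && r->added_by_nm;
  }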
Note: this might not work if NM is restarted between the profile
modification and the reapply, because after a restart NM forgets which
routes it added itself. This is a rare corner case, though.
Use the D-Bus property "VersionInfo" to expose a capability flag
indicating that this bug is fixed. It is the first capability that we
expose in this way. However, it is convenient because it's something
that clients like nmstate need to know, so they can decide whether a
connection down is needed or not. Deciding by version number is not
enough, because the fix might arrive via a downstream patch in distros
like RHEL.
https://issues.redhat.com/browse/RHEL-67324
https://issues.redhat.com/browse/RHEL-66262
Fixes: e9c17fcc9b ('l3cfg: default to 'main' route table sync mode')
Managed type = managed is a bit unclear, because all the managed types
are for devices that are managed, just at different levels. Managed
type = managed could be interpreted as meaning that the other types are
unmanaged. Change it to managed type = full.
As part of the conscious language effort, we must provide an alternative
option to configure autoconnect-ports system-wide in the NetworkManager
configuration file.
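For example, the system-wide default could be set like this (a sketch
assuming the usual connection-defaults syntax of NetworkManager.conf):

  [connection]
  connection.autoconnect-ports=1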
When activating a port while its controller is being deactivated by a
new activation, NM registers a `state-change` signal handler waiting
for the controller to have a new active connection. Once the controller
gets a new active connection, the port invokes
`nm_active_connection_set_controller()`, which leads to an assertion
failure on
g_return_if_fail(!nm_dbus_object_is_exported(NM_DBUS_OBJECT(self)))
because this active connection is already exported as a D-Bus object.
To fix the problem, remove the restriction that the controller is
write-only and notify D-Bus of changes to the controller property.
Signed-off-by: Gris Ge <fge@redhat.com>
Connection timestamps are updated (saved to disk) on connection up and
down. This way, among connections with the same priority, the most
recently used one takes precedence for autoconnect.
But as we don't actually bring connections down when NM stops, the last
timestamp of all active connections is the timestamp of when they were
brought up. Then, the activation order might be wrong on the next
start.
One case where timestamps are wrong (although it is not clear how
important it is because the connections are activated on different
interfaces):
1. Activate con1 <- timestamp updated
2. Activate con2 <- timestamp updated
3. Deactivate con2 <- timestamp updated
4. Stop NM <- con2's timestamp is higher than con1's, although con1 was
still active when con2 was brought down.
Another case, which is reproducible (from
https://issues.redhat.com/browse/RHEL-35539):
1. Activate con1
2. Activate con2 on same interface:
- As a consequence con1 is deactivated and its timestamp updated
- The timestamp of con2 is also updated
3. Stop NM <- the timestamps of con1 and con2 are the same, so the next
activation order will be undefined.
Fix by saving the timestamps on NM shutdown.
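Schematically, the fix amounts to something like this (hypothetical
types and names; the real code also persists the value to disk):

  #include <glib.h>

  typedef struct {
      const char *id;
      guint64     timestamp; /* seconds since the epoch, saved to disk */
  } Connection;

  /* On shutdown, stamp every still-active connection with the current
   * time, so the next start sees the real autoconnect order. */
  static void
  save_active_timestamps (Connection **active, gsize n_active)
  {
      guint64 now = (guint64) (g_get_real_time () / G_USEC_PER_SEC);
      gsize   i;

      for (i = 0; i < n_active; i++)
          active[i]->timestamp = now;
  }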
Problem:
Given an OVS port with `autoconnect-ports` set to default or false,
when a reactivation is required for checkpoint rollback, the previously
activated OVS interface ends up in deactivated state after the
checkpoint rollback.
The root cause:
`activate_stage1_device_prepare()` marks the device as failed when the
controller is deactivating or deactivated. In
`activate_stage1_device_prepare()`, the controller device is retrieved
from the NMActiveConnection and will be NULL when the NMActiveConnection
is in deactivated state. This causes the device to be set to
`NM_DEVICE_STATE_REASON_DEPENDENCY_FAILED`, which prevents all
follow-up `autoconnect` actions.
Fix:
When noticing that the controller is deactivating or deactivated with
reason `NM_DEVICE_STATE_REASON_NEW_ACTIVATION`, use the new function
`nm_active_connection_set_controller_dev()` to wait for the controller
device state to be between NM_DEVICE_STATE_PREPARE and
NM_DEVICE_STATE_ACTIVATED. After that, use the existing
`nm_active_connection_set_controller()` to move on with the
controller's new NMActiveConnection.
Resolves: https://issues.redhat.com/browse/RHEL-31972
Signed-off-by: Gris Ge <fge@redhat.com>
Fix the following:
NetworkManager: file ../src/libnm-core-impl/nm-connection.c: line 321 (nm_connection_get_setting): should not be reached
NetworkManager.service: Main process exited, code=dumped, status=5/TRAP
Fixes: bd38a19832 ('connection: add support to down-on-poweroff')
While enumerating devices at startup, we take a snapshot of existing
links from platform and we start creating device instances for
them. It's possible that in the meantime, while processing netlink
events in platform_link_added(), a link gets renamed. If that happens,
then we have two different views of the same ifindex: the cached link
from `links` and the link in platform.
This can cause issues: in platform_link_added() we create the device
with the cached name; then in NMDevice's constructor(), we look up
from platform the ifindex for the given name. Because of the rename,
this lookup can match a newly created, different link.
The end result is that the ifindex from the initial snapshot doesn't
get a NMDevice and is not handled by NetworkManager.
Fix this problem by fetching the latest version of the link from
platform to make sure we have a consistent view of the state.
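Conceptually (hypothetical names and types, GLib style):

  #include <glib.h>

  typedef struct {
      int  ifindex;
      char name[16];
  } Link;

  /* live lookup into the platform cache (stand-in declaration) */
  extern const Link *platform_link_get (int ifindex);

  /* Create the device from the current platform view of the ifindex
   * instead of from the startup snapshot, so that a rename happening
   * in between cannot make the name resolve to a different link. */
  static const Link *
  link_for_device_creation (const Link *snapshot)
  {
      return platform_link_get (snapshot->ifindex);
  }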
https://issues.redhat.com/browse/RHEL-25808
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1897
When creating a VLAN over an OVS internal interface that holds the same
name as its controller OVS bridge, NetworkManager fails with the error:
Error: Connection activation failed: br0.101 failed to create
resources: cannot retrieve ifindex of interface br0 (Open vSwitch
Bridge)
Expand `find_device_by_iface()` with an additional argument
`child: NMConnection *`, which is used to validate whether a candidate
is suitable to be the parent device.
In `nm_device_check_parent_connection_compatible()`, we only disallow
OVS bridges and OVS ports from being the parent.
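The compatibility rule is, schematically (hypothetical names, not NM's
actual code):

  #include <glib.h>

  typedef enum {
      DEVICE_TYPE_OVS_BRIDGE,
      DEVICE_TYPE_OVS_PORT,
      DEVICE_TYPE_OVS_INTERFACE,
      DEVICE_TYPE_OTHER,
  } DeviceType;

  /* Of the three same-named OVS devices, only the OVS interface may
   * act as the parent of e.g. a VLAN; bridge and port are rejected. */
  static gboolean
  device_can_be_parent (DeviceType type)
  {
      return type != DEVICE_TYPE_OVS_BRIDGE && type != DEVICE_TYPE_OVS_PORT;
  }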
Resolves: https://issues.redhat.com/browse/RHEL-26753
Signed-off-by: Gris Ge <fge@redhat.com>
With the `NM_CHECKPOINT_CREATE_FLAG_TRACK_INTERNAL_GLOBAL_DNS` flag set
on checkpoint creation, the checkpoint rollback will restore the global
DNS stored in the internal configuration file
`/var/lib/NetworkManager/NetworkManager-intern.conf`.
If the user has set global DNS in the /etc folder, this flag will not
take any effect.
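From a client, this can be used roughly as in this sketch (based on
libnm's checkpoint API; error and ownership handling trimmed):

  #include <NetworkManager.h>

  static void
  checkpoint_created_cb (GObject *source, GAsyncResult *result, gpointer user_data)
  {
      GError       *error = NULL;
      NMCheckpoint *checkpoint;

      checkpoint = nm_client_checkpoint_create_finish (NM_CLIENT (source), result, &error);
      if (!checkpoint) {
          g_warning ("checkpoint creation failed: %s", error->message);
          g_clear_error (&error);
      }
  }

  static void
  create_checkpoint_tracking_global_dns (NMClient *client)
  {
      nm_client_checkpoint_create (client,
                                   NULL, /* no device list: checkpoint all devices */
                                   60,   /* rollback timeout, seconds */
                                   NM_CHECKPOINT_CREATE_FLAG_TRACK_INTERNAL_GLOBAL_DNS,
                                   NULL, checkpoint_created_cb, NULL);
  }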
Resolves: https://issues.redhat.com/browse/RHEL-23446
Signed-off-by: Gris Ge <fge@redhat.com>
The new option in NMSettingConnection allows the user to specify
whether the connection needs to be brought down when powering off the
system. This is useful for removing IP addresses prior to powering off.
In order to accomplish that, we listen to the "Shutdown" systemd D-Bus
signal.
The option is set to FALSE by default; it can be specified globally in
the configuration file or per profile.
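As an illustration, listening for such a notification with GDBus could
look like the sketch below; it uses logind's PrepareForShutdown signal,
which may not be the exact signal NM subscribes to:

  #include <gio/gio.h>

  static void
  on_prepare_for_shutdown (GDBusConnection *connection,
                           const char      *sender_name,
                           const char      *object_path,
                           const char      *interface_name,
                           const char      *signal_name,
                           GVariant        *parameters,
                           gpointer         user_data)
  {
      gboolean starting;

      g_variant_get (parameters, "(b)", &starting);
      if (starting) {
          /* here NM would bring down the connections that have
           * down-on-poweroff enabled */
      }
  }

  static guint
  subscribe_to_shutdown (GDBusConnection *bus)
  {
      return g_dbus_connection_signal_subscribe (bus,
                                                 "org.freedesktop.login1",
                                                 "org.freedesktop.login1.Manager",
                                                 "PrepareForShutdown",
                                                 "/org/freedesktop/login1",
                                                 NULL,
                                                 G_DBUS_SIGNAL_FLAGS_NONE,
                                                 on_prepare_for_shutdown,
                                                 NULL, NULL);
  }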
The code that adds the devices to the sleeping list and takes them
down should be moved to a separate function. This way we can reuse it
and avoid duplicating code.
In order to give NMSleepMonitor a more generic usage, let's rename the
whole module to NMPowerMonitor. Nothing is exposed in the API, so it is
a trivial renaming.
If a generic device is present and the name matches, it is compatible
with any link type.
For example, if a generic connection has a device-handler that creates
a dummy interface, the link is compatible with the NMDeviceGeneric.
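Schematically (hypothetical types; not the actual NMDevice code):

  #include <glib.h>
  #include <string.h>

  typedef struct {
      const char *iface;
      gboolean    is_generic;
      int         link_type;
  } Device;

  /* A generic device matches any link type, provided the interface
   * name is the same; other devices still require their own type. */
  static gboolean
  device_compatible_with_link (const Device *device, const char *link_name, int link_type)
  {
      if (device->is_generic)
          return strcmp (device->iface, link_name) == 0;
      return device->link_type == link_type;
  }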
(cherry picked from commit 5978fb2b27)
When a generic connection has a custom device-handler, it always
generates a NMDeviceGeneric, even when the link that gets created is
of a type natively supported by NM. On service restart, we need to
keep track of the fact that the device is generic, otherwise a
different device type will be instantiated.
(cherry picked from commit f2613be150)
Mark the methods/properties as deprecated in the D-Bus API (via
org.freedesktop.DBus.Introspectable.Introspect() [1]).
It affects those properties that are documented as deprecated in the
introspection XML.
$ busctl -j call \
org.freedesktop.NetworkManager \
/org/freedesktop/NetworkManager \
org.freedesktop.DBus.Introspectable \
Introspect | \
jq '.data[0]' -r | \
grep -5 Deprecated
[1] https://dbus.freedesktop.org/doc/dbus-specification.html#standard-interfaces-introspectable
This option was introduced only to allow keeping the old behavior in
RHEL7, after the default order was changed from 'ifindex' to 'name' in
RHEL8. The usefulness of this option is questionable: 'name' together
with predictable interface names should give a predictable order, and
when not using predictable interface names, the name is unpredictable
but so is the ifindex.
https://issues.redhat.com/browse/NMT-926
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1814
Remove all the code that was added for the CSME coexistence.
The Intel WiFi team can't commit to when, if at all, this feature will
be completely integrated and tested in NetworkManager. The preferred
solution for now is the one that involves the kernel only.
Remove the code that was merged so far.
The device authentication request is an async process: it cannot know
the answer right away, and it is not guaranteed that the device is
still exported on D-Bus when the authentication finishes. Thus, do not
return SUCCESS; instead, abort the authentication request when the
device is no longer alive.
https://bugzilla.redhat.com/show_bug.cgi?id=2210271