These should be logged at DEBUG level:
<warn> platform-linux: do-change-link[2]: failure changing link: failure 97 (Address family not supported by protocol)
<warn> device (wlo1): failed to enable userspace IPv6LL address handling (unspecified)
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/issues/10
As of upstream kernel v4.18-rc8.
Note that we name the features as they are called in ethtool's
ioctl API ETH_SS_FEATURES.
Exceptions are features like "tx-gro", which the ethtool utility aliases
as "gro": for features where the ethtool utility has a built-in
alternative name, we prefer the alias.
Again, note that a few of the ethtool utility's aliases ("sg", "tso", "tx")
actually affect more than one underlying kernel feature.
Note that 3 kernel features which are announced via ETH_SS_FEATURES are
explicitly excluded because the kernel marks them as "never_changed":
#define NETIF_F_NEVER_CHANGE (NETIF_F_VLAN_CHALLENGED | \
NETIF_F_LLTX | NETIF_F_NETNS_LOCAL)
Also, add two more features "tx-tcp-segmentation" and
"tx-tcp6-segmentation". There are two reasons for that:
- systemd-networkd supports setting these two features,
so let's support them too (apparently they are important
enough for networkd).
- these two features are already implicitly covered by "tso".
Like for the "ethtool" program, "tso" is an alias for several
actual features. By adding two features that are already
also covered by an alias (which sets multiple kernel names
at once), we showcase how aliases for the same feature can
coexist. In particular, note how setting
"tso on tx-tcp6-segmentation off" will behave as one would
expect: all 4 tso features covered by the alias are enabled,
except that particular one.
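To illustrate, here is a minimal sketch of how options applied in order
could be resolved into requested kernel features. The four feature names
listed for the "tso" alias are the usual ones in current kernels, but the
exact set is kernel-dependent, and the helper below is purely illustrative,
not NetworkManager's implementation:

#include <glib.h>

/* kernel features commonly covered by the "tso" alias (illustrative;
 * the exact set depends on the kernel) */
static const char *const TSO_FEATURES[] = {
    "tx-tcp-segmentation",
    "tx-tcp-ecn-segmentation",
    "tx-tcp-mangleid-segmentation",
    "tx-tcp6-segmentation",
};

/* "tso on tx-tcp6-segmentation off": apply options in order, so a later,
 * more specific option overrides the alias for that one kernel feature.
 * @requested must be created with g_str_hash()/g_str_equal(). */
static void
example_requested_features (GHashTable *requested)
{
    guint i;

    for (i = 0; i < G_N_ELEMENTS (TSO_FEATURES); i++)
        g_hash_table_insert (requested,
                             (gpointer) TSO_FEATURES[i],
                             GINT_TO_POINTER (TRUE));

    g_hash_table_insert (requested,
                         (gpointer) "tx-tcp6-segmentation",
                         GINT_TO_POINTER (FALSE));
}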
There is little difference in practice because there is only one caller.
Still reuse the SocketHandle for mii as well, if only to make it clear
that SocketHandle is suitable not only for ethtool but also for mii.
Previously, each call to ethtool_get() would resolve the ifindex and
create a new socket for the ethtool request.
This is done partly because ethtool only supports making requests by
name. Since interfaces can be renamed, this is inherently racy. So,
we want to fetch the latest name shortly before making the request.
Some functions like nmp_utils_ethtool_supports_vlans() require multiple
ioctls. Next, we will introduce more ethtool functions that make an
even larger number of individual requests.
Add a simple SocketHandle struct to create the socket once and reuse
it for multiple requests. This is still entirely internal API in
"nm-platform-utils.c".
ethtool_get_stringset() will be used later, independently.
Also, don't trust that the block of strings returned by
ETHTOOL_GSTRINGS is NUL terminated; ensure the termination explicitly.
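A hedged sketch of the ioctl flow follows; the real ethtool_get_stringset()
in "nm-platform-utils.c" differs, and the number of strings is assumed here
to come from a prior ETHTOOL_GSSET_INFO request:

#include <glib.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>
#include <net/if.h>
#include <string.h>
#include <sys/ioctl.h>

static struct ethtool_gstrings *
get_stringset (int fd, const char *ifname, guint32 set_id, guint32 n_strings)
{
    struct ethtool_gstrings *gstrings;
    struct ifreq ifr = { 0 };
    guint32 i;

    gstrings = g_malloc0 (sizeof (*gstrings)
                          + (gsize) n_strings * ETH_GSTRING_LEN);
    gstrings->cmd = ETHTOOL_GSTRINGS;
    gstrings->string_set = set_id;   /* e.g. ETH_SS_FEATURES */
    gstrings->len = n_strings;

    strncpy (ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_data = (void *) gstrings;

    if (ioctl (fd, SIOCETHTOOL, &ifr) < 0) {
        g_free (gstrings);
        return NULL;
    }

    /* don't trust the kernel: force NUL termination of every string */
    for (i = 0; i < n_strings; i++)
        gstrings->data[(gsize) i * ETH_GSTRING_LEN + ETH_GSTRING_LEN - 1] = '\0';

    return gstrings;
}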
Note that in NetworkManager API (D-Bus, libnm, and nmcli),
the features are called "feature-xyz". The "feature-" prefix
is used, because NMSettingEthtool possibly will gain support
for options that are not only -K|--offload|--features, for
example -C|--coalesce.
The "xzy" suffix is either how ethtool utility calls the feature
("tso", "rx"). Or, if ethtool utility specifies no alias for that
feature, it's the name from kernel's ETH_SS_FEATURES ("tx-tcp6-segmentation").
If possible, we prefer ethtool utility's naming.
Also note how the features "feature-sg", "feature-tso", and
"feature-tx" each refer to multiple underlying kernel features
at once. This too follows what the ethtool utility does.
The functionality is not yet implemented server-side.
Parsing can be complicated enough. It's simpler to just work
top-to-bottom, without calling various helper functions. This way,
you can see all the code in one place, without needing to jump to
the helper function to see what it is doing.
In general, a static function that is only called once sometimes
does not simplify but obfuscates the code.
The tests already honored the environment variable $NMTST_IFCFG_RH_UPDATE_EXPECTED
to indicate that the .cexpected files should be written by the tests.
However, in the meantime, we instead use NM_TEST_REGENERATE=1 at various
places for this purpose. Honor that flag as well.
When assuming existing connections, allow the same connection to be
activated on a different device if the connection is multi-connect
capable. Otherwise, when a connection is active on multiple devices
and NM is restarted, we assume only the first instance, and create
in-memory connections for others.
In general, an activatable connection is one that is currently not
active, or that supports being activated multiple times according to
the multi-connect setting. In addition, during autoconnect, a profile
which is marked as multi-connect=manual-multiple will not be available.
Hence, add an argument "for_auto_activation".
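A hedged sketch of that rule (the enum values follow libnm's
NMSettingConnectionMultiConnect naming; the helper and its exact checks are
an illustration, not the actual code):

#include <NetworkManager.h>

static gboolean
is_activatable (gboolean is_currently_active,
                NMSettingConnectionMultiConnect multi_connect,
                gboolean for_auto_activation)
{
    /* during autoconnect, manual-multiple profiles are not available */
    if (   for_auto_activation
        && multi_connect == NM_SETTING_CONNECTION_MULTI_CONNECT_MANUAL_MULTIPLE)
        return FALSE;

    /* a profile that is not currently active is activatable */
    if (!is_currently_active)
        return TRUE;

    /* an active profile is activatable again only if it may be
     * connected multiple times */
    return    multi_connect == NM_SETTING_CONNECTION_MULTI_CONNECT_MULTIPLE
           || multi_connect == NM_SETTING_CONNECTION_MULTI_CONNECT_MANUAL_MULTIPLE;
}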
The code is mostly unused but will be used next (except for connections,
which set connection.multi-connect=multiple).
Add a new option that allows activating a profile multiple times
(at the same time). Previously, all profiles were implicitly
NM_SETTING_CONNECTION_MULTI_CONNECT_SINGLE, meaning that activating
a profile that is already active will deactivate it first.
This will make more sense as we also add more match options for how
profiles can be restricted to particular devices. We already have
connection.type, connection.interface-name, and (ethernet|wifi).mac-address
to restrict a profile to particular devices. However, it is for example
not possible to specify a wildcard like "eth*" to match a profile to
a set of devices by interface name. That is another missing feature,
and once we extend the matching capabilities, it makes even more sense to
activate a profile multiple times.
See also https://bugzilla.redhat.com/show_bug.cgi?id=997998, due to which
a connection was previously restricted to a single activation at a time.
This work relaxes that again.
This only adds the new property; it is not used nor implemented yet.
https://bugzilla.redhat.com/show_bug.cgi?id=1555012
For dynamic IP methods (DHCP, IPv4LL, WWAN) the route metric is set at
activation/renewal time using the value from static configuration. To
support runtime change we need to update the dynamic configuration in
place and tell the DHCP client the new value to use for future
renewals.
https://bugzilla.redhat.com/show_bug.cgi?id=1528071
This is a mere debugging convenience: e.g. if you already run one secret
agent but want to check whether nm-applet or the nmcli agent works fine,
it's convenient that the agent you start later gets a chance to deal with
the secrets requests first.
It seems to do the job and is simpler than adding some more complicated
policy (e.g. introducing priorities or something).
https://github.com/NetworkManager/NetworkManager/pull/174
Update the device state file every time the device is connected,
disconnected, or becomes unmanaged. In this way, NM becomes more
robust against crashes or forced terminations because it can resume
the previous device state seamlessly.
Rename nm_manager_write_device_state() to
nm_manager_write_device_state_all(), and split out the code to write a
single device state to a new function.
When the IPv4 method is 'auto' and there are static addresses
configured in the connection, start a DAD probe for the static
addresses and apply them immediately on success, without waiting for
DHCP to complete.
Note that if the static address is in the same subnet as the DHCP one,
when we add the DHCP address we want it to be primary, and so we will
temporarily remove the static address to achieve the right order of
addresses.
https://bugzilla.redhat.com/show_bug.cgi?id=1369905
Some properties in NetworkManager's D-Bus API are IPv4 addresses
in network byte order (big endian). That is problematic.
It is no problem when the NetworkManager client runs on the same
host. That is the case with libnm, which for the time being does not
support being used remotely.
It is a problem for an application that wants to access the D-Bus
interface of NetworkManager remotely. Possibly, such an application
would be implemented in two layers:
- one layer merely remotes D-Bus, without specific knowledge of
NetworkManager's API.
- a higher layer which accesses the remote D-Bus interface of NetworkManager.
Preferably it does so in an agnostic way, regardless of whether it runs
locally or remotely.
When using a D-Bus library, all accesses to 32 bit integers are in
native endianness (regardless of how the integer is actually encoded
on the lower layers). Likewise, D-Bus does not support annotating integer
types as having non-native endianness: there is no way to declare an
integer of type "u" to be anything but native order.
That means, when remoting D-Bus, at some point the endianness must be
corrected.
But by looking at the D-Bus introspection alone, it is not possible
to know which properties need correction and which don't. One would need
to understand the meaning of the properties.
That makes it problematic, because the higher layer of the application,
which knows that the "Nameservers" property is supposed to be in network
order, might not easily know whether it must correct for endianness.
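For illustration, this is roughly what a GDBus client has to do today when
it reads the legacy "Nameservers" property ("au") of an IP4Config object
locally; a hypothetical sketch, not libnm code:

#include <arpa/inet.h>
#include <gio/gio.h>

static void
print_nameservers (GDBusProxy *ip4_proxy)
{
    GVariant *val;
    GVariantIter iter;
    guint32 addr_be;

    val = g_dbus_proxy_get_cached_property (ip4_proxy, "Nameservers");
    if (!val)
        return;

    g_variant_iter_init (&iter, val);
    while (g_variant_iter_next (&iter, "u", &addr_be)) {
        /* as far as GDBus is concerned this "u" is a native-endian integer,
         * but its semantic content is an IPv4 address in network byte order */
        struct in_addr in = { .s_addr = addr_be };
        char buf[INET_ADDRSTRLEN];

        inet_ntop (AF_INET, &in, buf, sizeof (buf));
        g_print ("nameserver: %s\n", buf);
    }
    g_variant_unref (val);
}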
Deprecate IP4Config properties that are only accessible with a particular
endianness, and add new properties that expose the same data in an
agnostic way.
Note that I added "WinsServerData" as a plain "as", while
"NameserverData" is of type "aa{sv}". There is no particularly strong
reason for these choices, except that I could imagine that it could be
useful to expose additional information in the future about nameservers
(e.g. are they received via DHCP or manual configuration?). On the other
hand, WINS information likely won't get extended in the future.
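As a sketch of why "aa{sv}" leaves room for such extensions, each
nameserver entry is a dictionary to which more keys could be added later.
The "address" key name below is only an assumption for illustration:

#include <gio/gio.h>

static GVariant *
build_nameserver_entry (const char *addr_str)
{
    GVariantBuilder builder;

    g_variant_builder_init (&builder, G_VARIANT_TYPE ("a{sv}"));
    g_variant_builder_add (&builder, "{sv}",
                           "address", g_variant_new_string (addr_str));
    /* future keys, e.g. whether the server came from DHCP or from manual
     * configuration, could be added here without breaking the API */
    return g_variant_builder_end (&builder);
}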
Also note that libnm was not modified to use the new D-Bus fields. The
endianness issue is no problem for libnm, so there is little reason to
change it (at this point).
https://bugzilla.redhat.com/show_bug.cgi?id=1153559
https://bugzilla.redhat.com/show_bug.cgi?id=1584584
nm_gobject_notify_together() freezes the notifications to emit both
notification signals together. That matters for the NMDBusObject base
class, which hooks into dispatch_properties_changed() to emit a combined
"PropertiesChanged" signal.
Note that during calls like nm_ip4_config_replace(), we already
froze/thawed the notifications. So, this change adds unnecessary
freeze/thaw calls there, because signal emission is already frozen.
That is a bit ugly, because g_object_freeze_notify() is heavier
than I'd wish it to be.
Anyway, for other places, like nm_ip4_config_reset_routes(), that is
not the case. And correctness trumps performance.
Ultimately, the issue there is that we use NMIP4Config / NMIP6Config
both to track internal configuration and to expose it on D-Bus.
The majority of created NMIP4Config / NMIP6Config instances won't get
exported, but they still pay an unnecessary overhead. The proper solution
to minimize the overhead would be to separate these uses.
It seems that curl_multi_socket_action() can fail with
connectivity check failed: 4
where "4" means CURLM_INTERNAL_ERROR.
When that happens, it also seems that the file descriptor may still have data
to read, so the glib IO callback _con_curl_socketevent_cb() gets called in
an endless loop, keeping the CPU busy doing nothing useful.
Work around this by disabling polling on the file descriptor when something
goes wrong.
Note that optimally we would cancel the affected connectivity-check
right away. However, due to the design of libcurl's API, from within
_con_curl_socketevent_cb() we don't know which connectivity-checks
are affected by a failure on this file descriptor. So, all we can do
is avoid polling on the (possibly) broken file descriptor. Note that
we always schedule a timeout of last resort for each check anyway. Even
if something goes very wrong, we will fail the check within 15 seconds.
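A rough sketch of the workaround in the socket callback; the callback name
comes from the text above, while the context struct and how the GSource is
attached are assumptions:

#include <curl/curl.h>
#include <glib.h>

typedef struct {
    CURLM *multi;   /* hypothetical per-connectivity-check context */
} ConCurlCtx;

static gboolean
_con_curl_socketevent_cb (int fd, GIOCondition condition, gpointer user_data)
{
    ConCurlCtx *ctx = user_data;
    int running_handles;
    int ev = 0;
    CURLMcode m_ret;

    if (condition & G_IO_IN)
        ev |= CURL_CSELECT_IN;
    if (condition & G_IO_OUT)
        ev |= CURL_CSELECT_OUT;

    m_ret = curl_multi_socket_action (ctx->multi, fd, ev, &running_handles);
    if (m_ret != CURLM_OK) {
        /* e.g. CURLM_INTERNAL_ERROR. We cannot tell which check is affected,
         * so stop polling this (possibly broken) fd to avoid a busy loop.
         * The per-check timeout of last resort will fail the check later. */
        return G_SOURCE_REMOVE;
    }
    return G_SOURCE_CONTINUE;
}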
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=903996
On non-Windows, libcurl's "curl_socket_t" type is just a typedef for
int. We rely on that, because we use it as a file descriptor.
Add a compile-time check to ensure that.
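The check can be as simple as the following sketch (the exact form used
may differ):

#include <curl/curl.h>
#include <glib.h>

/* on non-Windows, curl_socket_t must be a plain int, because we pass it
 * around as a file descriptor */
G_STATIC_ASSERT (sizeof (curl_socket_t) == sizeof (int));
G_STATIC_ASSERT (((curl_socket_t) -1) < 0);   /* signed, like int */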
Since this is C, there are no namespaces, and libraries commonly choose
a particular name prefix for their symbols.
In the case of libcurl, that is "curl_".
We should avoid using the same name prefix and choose something distinct.
Before:
$ nmcli connection up my-wired
Error: Connection activation failed: No suitable device found for this connection.
After:
$ nmcli connection up my-wired
Error: Connection activation failed: No suitable device found for this connection (device eth0 not available because device has no carrier).
This relies on nm_manager_get_best_device_for_connection() giving a
suitable error. That is however a bit complicated, because if no
suitable device is found, it's not immediately clear what the
exact reason is. E.g. if you try to activate a Wi-Fi profile, the
failure reason
"SSID is not visible"
is better than
"Wi-Fi profile cannot activate on ethernet device".
This is controlled by carefully setting the failure codes
NM_UTILS_ERROR_CONNECTION_AVAILABLE_* to indicate the absolute
relevance of each failure, and subsequently by selecting the failure
with the highest relevance. This might still need some improvements,
for example by reordering checks (so that more relevant failures
are handled first) and tweaking the error relevance.
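A hedged sketch of the selection step, assuming the
NM_UTILS_ERROR_CONNECTION_AVAILABLE_* codes are numerically ordered by
relevance (which is the idea described above, though not necessarily how
the actual code compares them):

#include <glib.h>

/* keep whichever failure has the higher relevance, assuming a larger
 * error code means a more relevant (more specific) reason */
static void
keep_most_relevant_error (GError **best_error, const GError *new_error)
{
    if (!new_error)
        return;
    if (!*best_error || new_error->code > (*best_error)->code) {
        g_clear_error (best_error);
        *best_error = g_error_copy (new_error);
    }
}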
Before:
$ nmcli connection up my-wired ifname eth0
Error: Connection activation failed: Connection 'my-wired' is not available on the device eth0 at this time.
After:
$ nmcli connection up my-wired ifname eth0
Error: Connection activation failed: Connection 'my-wired' is not available on device eth0 because device has no carrier