There is only one caller, hence it's simpler to see it all in one place.
I prefer this, because then I can read the code top to bottom and
see what's happening, without following helper functions.
Also, this way we can "reuse" the nla_put_failure label and assertion. Previously,
if the assertion was hit we would not rewind the buffer but continue
constructing the message (which is already borked). Not that it matters
too much, because this was on a "failed-assertion" code path.
Arguably, the structure is used inside a union with another (larger)
struct, hence no memory is saved.
In fact, it may well be slower performance-wise to access a boolean bitfield
than a gboolean (int).
Still, boolean fields in structures should be bool:1 bitfields for
consistency.
Kernel calls the netlink attribute TCA_FQ_CODEL_MEMORY_LIMIT. Likewise,
iproute2 calls this "memory_limit".
Rename because TC parameters are inherently tied to the kernel
implementation and we should use the familiar name.
iproute2 uses the special value ~0u to indicate not to set
TCA_FQ_CODEL_CE_THRESHOLD in RTM_NEWQDISC. When not explicitly
setting the value, kernel treats the threshold as disabled.
However note that 0xFFFFFFFFu is not an invalid threshold (as far as
kernel is concerned). Thus, we should not use that as the value to indicate
that the value is unset. Note that iproute2 uses the special value ~0u
only internally, thereby making it impossible to set the threshold to
0xFFFFFFFFu. But kernel does not have this limitation.
Maybe the cleanest way would be to add another field to NMPlatformQDisc:

  guint32 ce_threshold;
  bool ce_threshold_set:1;

that indicates whether the threshold is enabled or not.
But note that kernel does:

  static void codel_params_init(struct codel_params *params)
  {
      ...
      params->ce_threshold = CODEL_DISABLED_THRESHOLD;

  static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt,
                             struct netlink_ext_ack *extack)
  {
      ...
      if (tb[TCA_FQ_CODEL_CE_THRESHOLD]) {
          u64 val = nla_get_u32(tb[TCA_FQ_CODEL_CE_THRESHOLD]);
          q->cparams.ce_threshold = (val * NSEC_PER_USEC) >> CODEL_SHIFT;
      }

  static int fq_codel_dump(struct Qdisc *sch, struct sk_buff *skb)
  {
      ...
      if (q->cparams.ce_threshold != CODEL_DISABLED_THRESHOLD &&
          nla_put_u32(skb, TCA_FQ_CODEL_CE_THRESHOLD,
                      codel_time_to_us(q->cparams.ce_threshold)))
          goto nla_put_failure;
This means, kernel internally uses the special value 0x83126E97u to indicate
that the threshold is disabled (WTF). That is because
(((guint64) 0x83126E97u) * NSEC_PER_USEC) >> CODEL_SHIFT == CODEL_DISABLED_THRESHOLD
So in kernel API this value is reserved (and has the special meaning
that the threshold is disabled). Hence, instead of adding a
ce_threshold_set flag, use the same value that kernel uses anyway.
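As a quick sanity check of that arithmetic (a stand-alone sketch; the constants
are taken from the kernel headers, where CODEL_SHIFT is 10, NSEC_PER_USEC is 1000
and CODEL_DISABLED_THRESHOLD is INT_MAX):

  #include <stdint.h>
  #include <stdio.h>
  #include <limits.h>

  #define CODEL_SHIFT              10
  #define NSEC_PER_USEC            1000ULL
  #define CODEL_DISABLED_THRESHOLD INT_MAX

  int
  main (void)
  {
      uint32_t wire_value = 0x83126E97u;
      uint64_t internal   = (((uint64_t) wire_value) * NSEC_PER_USEC) >> CODEL_SHIFT;

      /* prints "2147483647 == 2147483647": the netlink value 0x83126E97u maps
       * exactly to CODEL_DISABLED_THRESHOLD, that is, "threshold disabled". */
      printf ("%llu == %d\n", (unsigned long long) internal, CODEL_DISABLED_THRESHOLD);
      return 0;
  }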
The memory-limit is an unsigned integer. It is ugly (if not wrong) to compare unsigned
values with "-1". When comparing with the default value we must also use a u32 type.
Instead add a define NM_PLATFORM_FQ_CODEL_MEMORY_LIMIT_UNSET.
Note that like iproute2 we use NM_PLATFORM_FQ_CODEL_MEMORY_LIMIT_UNSET
to indicate not to set TCA_FQ_CODEL_MEMORY_LIMIT in RTM_NEWQDISC. This
special value is entirely internal to NetworkManager (or iproute2), and
kernel will then choose a default memory limit (of 32MB). So setting
NM_PLATFORM_FQ_CODEL_MEMORY_LIMIT_UNSET means leaving it to kernel to
choose the value (which is 32MB).
See kernel's net/sched/sch_fq_codel.c:

  static int fq_codel_init(struct Qdisc *sch, struct nlattr *opt,
                           struct netlink_ext_ack *extack)
  {
      ...
      q->memory_limit = 32 << 20; /* 32 MBytes */

  static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt,
                             struct netlink_ext_ack *extack)
      ...
      if (tb[TCA_FQ_CODEL_MEMORY_LIMIT])
          q->memory_limit = min(1U << 31, nla_get_u32(tb[TCA_FQ_CODEL_MEMORY_LIMIT]));
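On the NetworkManager side, a minimal sketch of how the sentinel can be used when
filling the RTM_NEWQDISC message (the value of the define is an assumption for
illustration; the field name follows the commit message above, and nla_put_u32()
is the libnl-style attribute helper):

  /* hypothetical value; the point is only that the comparison happens
   * against a guint32 sentinel, not against a plain -1. */
  #define NM_PLATFORM_FQ_CODEL_MEMORY_LIMIT_UNSET ((guint32) -1)

  if (qdisc->fq_codel.memory_limit != NM_PLATFORM_FQ_CODEL_MEMORY_LIMIT_UNSET)
      nla_put_u32 (msg, TCA_FQ_CODEL_MEMORY_LIMIT, qdisc->fq_codel.memory_limit);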
Note that not having zero as the default value is problematic. In fields like
"NMPlatformIP4Route.table_coerced" and "NMPlatformRoutingRule.suppress_prefixlen_inverse"
we avoid this problem by storing a coerced value in the structure so that zero is still
the default. We don't do that here for memory-limit, so the caller must always explicitly
set the value.
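For illustration, the coercion trick mentioned above works roughly like this
(a sketch of the concept only; the helper name is made up and not the actual
nm_platform API):

  #include <glib.h>

  /* Swap 0 and the default so that an all-zero struct still means
   * "use the default" (254 is RT_TABLE_MAIN). The same function maps
   * back, so coercing twice is the identity. */
  static inline guint32
  example_route_table_coerce (guint32 table)
  {
      if (table == 0)
          return 254;
      if (table == 254)
          return 0;
      return table;
  }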
When using the nm_utils_strbuf_*() API, the buffer pointer always gets moved to the
current end. We must thus remember and return the original start of the buffer.
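Roughly, the pattern looks like this (a sketch; "Example" is a made-up type and
the nm_utils_strbuf_append*() signatures are assumed to follow their usual usage):

  static const char *
  example_to_string (const Example *obj, char *buf, gsize len)
  {
      const char *buf0 = buf;   /* remember the original start... */

      nm_utils_strbuf_append (&buf, &len, "foo %u", obj->foo);
      nm_utils_strbuf_append_str (&buf, &len, " bar");

      return buf0;              /* ...because buf now points at the current end. */
  }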
In practice, there is no difference when representing 0 or 1 as a signed or unsigned
32 bit integer. But still use the correct type that also kernel uses.
Also, the implicit conversion from uint32 to bool was correct already.
Still, explicitly convert the uint32 value to boolean in _new_from_nl_qdisc().
It's no change in behavior.
"NM_CMP_FIELD (a, b, fq_codel.ecn == TRUE)" is quite a hack as it relies on
the implementation of the macro in a particular way. The problem is, that
NM_CMP_FIELD() uses typeof() which cannot be used with bitfields. So, the
nicer solution is to use NM_CMP_FIELD_UNSAFE() which exists exactly for bitfields
(it's "unsafe", because it evaluates arguments more than once as it avoids
the temporary variable with typeof()).
Same with nm_hash_update_vals() which uses typeof() to avoid evaluating
arguments more than once. But that again does not work with bitfields.
The "proper" way is to use NM_HASH_COMBINE_BOOLS().
Only read the keyfile databases once and cache them for the remainder of
the program.
- this avoids the overhead of opening the file over and over again.
- it also avoids the data changing without us expecting it. The state
files are internal and we don't support changing them outside of
NetworkManager. So in the best case we read the same data over
and over. In the worst case, we read different data but are not
interested in handling the changes.
- only write the file when the content changes or before exiting
(normally); see the sketch below.
- better log what is happening.
- our state files tend to grow as we don't garbage collect old entries.
Keeping this all in memory might be problematic. However, the right
solution for this is that we come up with some form of garbage
collection so that the state files are reasonably small to begin with.
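A rough sketch of the caching pattern with plain GKeyFile (hypothetical helper,
not the actual NMSettings code):

  #include <glib.h>

  typedef struct {
      GKeyFile *kf;
      char     *filename;
      gboolean  dirty;
  } ExampleKeyfileDb;

  /* load lazily, exactly once for the lifetime of the program */
  static GKeyFile *
  example_keyfile_db_get (ExampleKeyfileDb *db)
  {
      if (!db->kf) {
          db->kf = g_key_file_new ();
          g_key_file_load_from_file (db->kf, db->filename, G_KEY_FILE_NONE, NULL);
      }
      return db->kf;
  }

  /* write back only if something actually changed (or before exiting) */
  static void
  example_keyfile_db_flush (ExampleKeyfileDb *db)
  {
      if (db->kf && db->dirty) {
          g_key_file_save_to_file (db->kf, db->filename, NULL);
          db->dirty = FALSE;
      }
  }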
ethtool/mii API is based on the ifname. As an interface can be renamed,
this API is inherently racy. We would prefer to use the ifindex instead.
The ifindex of a device cannot change (although it can repeat, which opens a
different race *sigh*).
Anyway, we were already trying to minimize the race by resolving the
name from ifindex immediately before the call to ethtool/mii.
Do better than that. Now resolve the name before and after the call. If
the name changed in the meantime, we have an indication that a race
might have happened (but we cannot be sure).
Note that this cannot catch every possible kind of rename race. If you are very
unlucky a swapping of names cannot be detected.
For getters this is relatively straightforward. Just retry (up to a few times) when we
have an indication that we fell victim to a race. Yes, we
still cannot be 100% sure, but this should be very reliable in practice.
For setters (that modify the device) we also retry. We do so under the
assumption that setting the same options multiple times has no bad effect.
Note that for setters the race of swapping interface names is particularly
bad. If we hit a very unlucky race condition, we might set the setting on
the wrong interface and there is nothing we can do about it. The retry only
ensures that eventually we will set it on the right interface.
Note that this involves one more if_indextoname() call for each operation (in
the common case when there is no renaming race). In cases where we make
multiple ioctl calls, we cache and reuse the information though. So, for such
calls the overhead is even smaller.
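The retry loop looks roughly like this (a simplified sketch with made-up helper
names; example_ethtool_call() stands in for the actual SIOCETHTOOL ioctl helper,
and the real code additionally caches the resolved name across multiple calls):

  #include <net/if.h>
  #include <string.h>
  #include <stdbool.h>

  static bool example_ethtool_call (const char *ifname, void *out);

  static bool
  example_ethtool_get (unsigned ifindex, void *out)
  {
      char name_before[IFNAMSIZ], name_after[IFNAMSIZ];
      int  i;

      for (i = 0; i < 3; i++) {
          if (!if_indextoname (ifindex, name_before))
              return false;

          if (!example_ethtool_call (name_before, out))
              return false;

          /* if the name is unchanged after the call, we most likely did not
           * race with a rename. Otherwise, retry a few times. */
          if (   if_indextoname (ifindex, name_after)
              && strcmp (name_before, name_after) == 0)
              return true;
      }
      return false;
  }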
When we set the MTU on the link we remember its previous source
(ip-config, parent-device or connection profile) and don't change it
again afterwards to avoid interfering with the user's manual changes. The
only exceptions when we change it again are (1) if the parent device
MTU changes and (2) if the new MTU has higher priority than the one
previously set.
To allow a live reapply of the MTU property we also need to clear the
saved source, or the checks described above will prevent setting the
new value.
Fixes: 2f8917237f ('device: rework mtu priority handling')
https://bugzilla.redhat.com/show_bug.cgi?id=1702657
Fix the following assertion failure:
g_object_ref: assertion 'G_IS_OBJECT (object)' failed.
nm_settings_add_connection() can return a NULL connection.
Fixes: f034f17ff6 ('settings: keep the added connection alive for a bit longer')
On Ubuntu 14.04 kernel (4.4.0-146-generic, x86_64) this easily causes
test failures:
make -j 8 src/platform/tests/test-route-linux \
&& while true; do \
NMTST_SEED_RANDOM= ./tools/run-nm-test.sh src/platform/tests/test-route-linux -p /route/rule \
|| break; \
done
outputs:
...
/route/rule/1:
nmtst: initialize nmtst_get_rand() with NMTST_SEED_RAND=22892021
OK
/route/rule/2: >>> failing...
>>> no fuzzy match between: [routing-rule,0x205ab30,1,+alive,+visible; [6] 0: from all suppress_prefixlen 8 none]
>>> and: [routing-rule,0x205c0c0,1,+alive,+visible; [6] 0: from all suppress_prefixlen -1579099242 none]
**
test:ERROR:src/platform/tests/test-route.c:1695:test_rule: code should not be reached
The monitors have been in place since the dispatcher was introduced.
They need the daemon to do extra work to know where the files are supposed to
be. It seems to me the complexity is not worth it.
Let's remove them now, making it easier to modify the dispatcher to look
for scripts in other places.
While the keys of s390-options come from a well-behaved set of names
(that is enforced by nm_connection_verify()), the values are arbitrary
strings.
Our settings plugin must be able to express all values of a connection,
hence we need to support escapes.
- the previous implementation of nm_setting_wired_get_s390_option()
returned the elements in an arbitrary order (because it just iterated
idx times over the unsorted hash table).
- the API for "s390-options" suggests both accessing by index and by
name. Storing the options in a hash-table is not optimal for lookup
by index. It also requires us to sort the elements over and over
again.
Use a sorted array instead (see the sketch below). Note that add/remove of course requires
moving the elements (and is thus O(n)).
- "s390-options" are very seldomly set. We shouldn't pay the price in every
NMSettingWired to allocate a GHashTable and deal with it.
- don't assert in nm_setting_wired_add_s390_option() and
nm_setting_wired_remove_s390_option() that the key is valid.
The ifcfg-rh reader understandably does not want to implement additional
logic to pre-validate the keys, so any invalid key would trigger an
assertion failure. We have verify() for this purpose.
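As a rough illustration of the sorted-array storage mentioned above (made-up
types; not the actual NMSettingWired implementation):

  #include <glib.h>
  #include <string.h>

  typedef struct {
      char *name;
      char *value;
  } ExampleS390Option;

  /* options are kept sorted by name: lookup by index is a plain array access,
   * lookup by name is a binary search. Insert/remove needs a memmove(). */
  static gssize
  example_option_find (const ExampleS390Option *opts, guint n, const char *name)
  {
      guint lo = 0, hi = n;

      while (lo < hi) {
          guint mid = lo + (hi - lo) / 2;
          int   c   = strcmp (name, opts[mid].name);

          if (c == 0)
              return mid;
          if (c < 0)
              hi = mid;
          else
              lo = mid + 1;
      }
      return -1;
  }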
If a user disables networking, we consider that an indication that
software devices must also be disconnected. OTOH, we don't want to
destroy them for external events such as a system suspend.
When networking is disabled at NM startup we unmanage all devices
(including software ones) due to SLEEPING. After networking gets
enabled again we must clear the unmanaged-sleeping flag on software
devices.
Why didn't we get a compiler warning about this bug earlier?
At least clang (3.8.0-2ubuntu4, Ubuntu 16.04) does warn:
CC src/platform/src_libNetworkManagerBase_la-nm-platform.lo
../src/platform/nm-platform.c:5389:14: error: data argument not used by format string [-Werror,-Wformat-extra-args]
lnk->remote ? nm_sprintf_buf (str_remote, " remote %s", nm_utils_inet4_ntop (lnk->remote, str_remote1)) : "",
^
Fixes: 4c2862b958 ('platform: add gretap tunnels support')
(cherry picked from commit dfb899f465)
The library is called "libnm_core". So the dependency should be called
"libnm_core_dep", like in all other cases.
(cherry picked from commit c27ad37c27)
"libnm-core" implements common functionality for "NetworkManager" and
"libnm".
Note that clients like "nmcli" cannot access the internal API provided
by "libnm-core". So, if nmcli wants to do something that is also done by
"libnm-core", , "libnm", or "NetworkManager", the code would have to be
duplicated.
Instead, such code can be in "libnm-libnm-core-{intern|aux}.la".
Note that:
0) "libnm-libnm-core-intern.la" is used by libnm-core itsself.
On the other hand, "libnm-libnm-core-aux.la" is not used by
libnm-core, but provides utilities on top of it.
1) they both extend "libnm-core" with utilities that are not public
API of libnm itself. Maybe part of the code should one day become
public API of libnm. On the other hand, this is code for which
we may not want to commit to a stable interface or which we
don't want to provide as part of the API.
2) "libnm-libnm-core-intern.la" is statically linked by "libnm-core"
and thus directly available to "libnm" and "NetworkManager".
On the other hand, "libnm-libnm-core-aux.la" may be used by "libnm"
and "NetworkManager".
Both libraries may be statically linked by libnm clients (like
nmcli).
3) it must only use glib, libnm-glib-aux.la, and the public API
of libnm-core.
This is important: it must not use "libnm-core/nm-core-internal.h"
nor "libnm-core/nm-utils-private.h" so the static library is usable
by nmcli which couldn't access these.
Note that "shared/nm-meta-setting.c" is an entirely different case,
because it behaves differently depending on whether linking against
"libnm-core" or the client programs. As such, this file must be compiled
twice.
(cherry picked from commit af07ed01c0)
From the files under "shared/nm-utils" we build an internal library
that provides glib-based helper utilities.
Move the files of that basic library to a new subdirectory
"shared/nm-glib-aux" and rename the helper library "libnm-utils-base.la"
to "libnm-glib-aux.la".
Reasons:
- the name "utils" is overused in our code-base. Everything's an
"utils". Give this thing a more distinct name.
- there were additional files under "shared/nm-utils", which are not
part of this internal library "libnm-utils-base.la". All the files
that are part of this library should be together in the same
directory, but files that are not, should not be there.
- the new name should better convey what this library is and what it isn't:
it's a set of utilities and helper functions that extend glib with
functionality that we commonly need.
There are still some files left under "shared/nm-utils". They have less
of a unifying purpose to be in their own directory, so I leave them there
for now. But at least they are separate from "shared/nm-glib-aux",
which has a very clear purpose.
(cherry picked from commit 80db06f768)
We built (among others) two libraries from the sources in "shared/nm-utils":
"libnm-utils-base.la" and "libnm-utils-udev.la".
It's confusing. Instead use directories so there is a direct
correspondence between these internal libraries and the source files.
(cherry picked from commit 2973d68253)
"shared/nm-utils" contains general purpose utility functions that only
depend on glib (and extend glib with some helper functions).
We will also add code that does not use glib, hence it would be good
if the part of "shared/nm-utils" that does not depend on glib could be
used by these future projects.
Also, we use the term "utils" everywhere. While that covers the purpose
and content well, having everything called "nm-something-utils" is not
great. Instead, call this "nm-std-aux", inspired by "c-util/c-stdaux".
(cherry picked from commit b434b9ec07)
For one, use NM_ASCII_SPACES as delimiter when reading
"MATCH_INTERFACE_NAME". Previously, it was only " \t".
I think there is no change in behavior otherwise.
(cherry picked from commit 941f27d350)
Also, in nm_platform_routing_rule_cmp() always compare the routing
table field, even if l3mdev is set. For kernel, we cannot set table and
l3mdev together, hence such rules don't really exist (or if we try to
configure it, it will be rejected by kernel). But as far as
nm_platform_routing_rule_cmp() is concerned, if the table is set,
always compare it.
(cherry picked from commit b6ff02e76f)
For routes and routing rules, kernel uses a certain (not strictly defined) set
of attributes to decide whether two routes/rules are identical.
That is a problem, as different kernel versions disagree on whether
two routes/rules are the same (EEXIST) or not.
Note that when NetworkManager tries to add a rule with protocol set to
anything but RTPROT_UNSPEC, then kernel will ignore the attribute if it
doesn't have support for it. Meaning: the added rule will have a
different protocol setting than intended.
Note that NMPRulesManager will add a rule if it doesn't find it in the
platform cache so far. That means, when looking into the platform cache
we must ignore or honor the protocol like kernel does.
This does not only affect FRA_PROTOCOL, but all attributes where kernel
and NetworkManager disagree. But the protocol is the most prominent
one, because the rules tracked by nmp_rules_manager_track_default()
specify the protocol.
(cherry picked from commit ef4f8ccf6d)
Next we will need to detect more kernel features. First refactor the
handling of these to require fewer code changes and be more efficient.
A plain nm_platform_kernel_support_get() only requires accessing an
array in the common case.
The other important change is that the function no longer requires a
NMPlatform instance. This allows us to check kernel support from
anywhere. The only thing is that we require kernel support to be
initialized before calling this function. That means, an NMPlatform
instance must have detected support before.
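Conceptually, the lookup becomes an array access guarded by an "already detected"
marker (a simplified sketch; the enum values and globals are made up):

  #include <glib.h>

  typedef enum {
      EXAMPLE_KERNEL_SUPPORT_FRA_PROTOCOL,
      _EXAMPLE_KERNEL_SUPPORT_NUM,
  } ExampleKernelSupportType;

  /* 0 means "not yet detected", > 0 supported, < 0 not supported. Filled in
   * once by an NMPlatform instance during cache initialization. */
  static int _example_support[_EXAMPLE_KERNEL_SUPPORT_NUM];

  static gboolean
  example_kernel_support_get (ExampleKernelSupportType type)
  {
      int v = _example_support[type];

      /* requires that an NMPlatform instance already detected support */
      g_return_val_if_fail (v != 0, FALSE);
      return v > 0;
  }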
(cherry picked from commit ee269b318e)
We don't need GPtrArray to construct an array of fixed size.
Actually, we also don't need to malloc each NMPlatformBridgeVlan
element individually. Just allocate one buffer and append them
to the end.
(cherry picked from commit 6bc8ee87af)
In some cases it is convenient to specify ranges of bridge vlans, as
already supported by iproute2 and natively by kernel. With this commit
it becomes possible to add a range in this way:
nmcli connection modify eth0-slave +bridge-port.vlans "100-200 untagged"
vlan ranges can't be PVIDs because only one PVID vlan can exist.
https://bugzilla.redhat.com/show_bug.cgi?id=1652910
(cherry picked from commit 7093515777)
CC src/settings/plugins/ifcfg-rh/src_settings_plugins_ifcfg_rh_libnms_ifcfg_rh_core_la-nms-ifcfg-rh-reader.lo
In file included from ../shared/nm-default.h:280:0,
from ../src/settings/plugins/ifcfg-rh/nms-ifcfg-rh-reader.c:21:
../src/settings/plugins/ifcfg-rh/nms-ifcfg-rh-reader.c: In function read_routing_rules_parse:
../src/settings/plugins/ifcfg-rh/nms-ifcfg-rh-reader.c:4309:27: error: logical not is only applied to the left hand side of comparison [-Werror=logical-not-parentheses]
nm_assert (!key_is_ipv4 == NM_STR_HAS_PREFIX (key, "ROUTING_RULE6_"));
^
../shared/nm-utils/nm-macros-internal.h:1793:7: note: in definition of macro __NM_G_BOOLEAN_EXPR_IMPL
if (expr) \
^
/usr/include/glib-2.0/glib/gmacros.h:376:43: note: in expansion of macro _G_BOOLEAN_EXPR
#define G_LIKELY(expr) (__builtin_expect (_G_BOOLEAN_EXPR((expr)), 1))
^
/usr/include/glib-2.0/glib/gtestutils.h:116:49: note: in expansion of macro G_LIKELY
if G_LIKELY (expr) ; else \
^
../shared/nm-utils/nm-macros-internal.h:973:40: note: in expansion of macro g_assert
#define nm_assert(cond) G_STMT_START { g_assert (cond); } G_STMT_END
^
../src/settings/plugins/ifcfg-rh/nms-ifcfg-rh-reader.c:4309:3: note: in expansion of macro nm_assert
nm_assert (!key_is_ipv4 == NM_STR_HAS_PREFIX (key, "ROUTING_RULE6_"));
^
Fixes: 4d46804437
(cherry picked from commit c6e6dcae70)
Policy routing rules are global, and unlike routes not tied to an interface by ifindex.
That means, while we take full control over all routes of an interface during a sync,
we need to consider that multiple parties can contribute to the global set of rules.
That might be multiple connection profiles providing the same rule, or rules that are added
externally by the user. NMPRulesManager mediates for that.
This is done by NMPRulesManager "tracking" rules.
Rules that are not tracked by NMPRulesManager are completely ignored (and
considered externally added).
When tracking a rule, the caller provides a track-priority. If multiple
parties track a rule, then the highest (absolute value of the) priority
wins.
If the highest track-priority is positive, NMPRulesManager will add the rule if
it's not present.
When the highest track-priority is negative, then NMPRulesManager will remove the
rule if it's present (enforce its absence).
The complicated part is when a rule that was previously tracked becomes no
longer tracked. In that case, we need to restore the previous state.
If NetworkManager added the rule earlier, then when untracking the rule
NMPRulesManager will remove the rule again (restoring its previous absent
state).
By default, if NetworkManager had a negative tracking-priority and removed the
rule earlier (enforced it to be absent), then when the rule becomes no
longer tracked, NetworkManager will not restore the rule.
Consider: the user adds a rule externally, and then activates a profile that
enforces the absence of the rule (causing NetworkManager to remove it).
When deactivating the profile, by default NetworkManager will not
restore such a rule! It's unclear whether that is a good idea, but it's
also unclear why the rule is there and whether NetworkManager should
really restore it.
Add weakly tracked rules to account for that. A tracking-priority of
zero indicates such weakly tracked rules. The only difference between an untracked
rule and a weakly tracked rule is that when NetworkManager earlier removed the
rule (due to a negative tracking-priority), it *will* restore weakly
tracked rules when the rule becomes no longer (negatively) tracked.
And it attempts to do that only once.
Likewise, if the rule is weakly tracked and already exists when
NMPRulesManager starts positively tracking the rule, then it will not
remove it again when it no longer positively tracks it.
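To summarize the decision logic as a stand-alone sketch (made-up types, not the
actual NMPRulesManager code): the tracker with the highest absolute priority wins,
positive means "must be present", negative means "must be absent", and zero (weak
tracking) expresses no wish of its own.

  #include <glib.h>
  #include <stdlib.h>

  typedef enum {
      EXAMPLE_RULE_NO_WISH,         /* only weakly tracked (0) or untracked */
      EXAMPLE_RULE_MUST_EXIST,      /* highest |priority| is positive */
      EXAMPLE_RULE_MUST_NOT_EXIST,  /* highest |priority| is negative */
  } ExampleRuleWish;

  static ExampleRuleWish
  example_resolve_wish (const gint32 *track_priorities, guint n)
  {
      gint32 best = 0;
      guint  i;

      for (i = 0; i < n; i++) {
          if (abs (track_priorities[i]) > abs (best))
              best = track_priorities[i];
      }
      if (best > 0)
          return EXAMPLE_RULE_MUST_EXIST;
      if (best < 0)
          return EXAMPLE_RULE_MUST_NOT_EXIST;
      return EXAMPLE_RULE_NO_WISH;
  }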