Ubuntu 18.04 ships iproute2-4.15.0-2ubuntu1.3. The
"/etc/iproute2/rt_protos" file from that version does not yet contain
the "bgp" entry, and the "babel" entry only dates from 2014. Just choose
other entries. The point is that NetworkManager would ignore those, and
that applies to "zebra" and "bird" alike.
(cherry picked from commit 26592ebfe5)
This will make sure that the IP tunnel modules are loaded. It does so by
creating (and deleting) a tunnel interface.
That is important, because those modules create additional interfaces
that show up in `ip link` (like "gre0"), and those interfaces can
interfere with the tests.
Also add nmtstp_link_is_iptunnel_special() to detect whether an
interface is one of those special interfaces.
(cherry picked from commit 451cedf2bf)
It seems that `meson test` is preferred over `ninja test`. Also, pass
"--print-errorlogs" to meson, and pass "-v" to the build steps.
Note that `ninja test` already ends up calling `meson test
--print-errorlogs`, but it doesn't use "-v", so the logs are truncated.
(cherry picked from commit dba2fb5fff)
Like already done for autotools. First we run the tests without the
debugging option. If a test fails, we run it again to possibly trigger
the failure once more and get better logs.
(cherry picked from commit 13d9cf75ed)
"debug" is implied when setting NMTST_DEBUG, but not specifying
"no-debug". This change has thus no effect, but it seems clearer to be
explicit.
The "debug" flag affects nmtst_is_debug(). Note that tests *must* not
result in different code paths based on debug, they may only
1) print more debug logging
2) do more assertion checks.
Having more assertion checks can result in different outcome of the
test, that is, that the additional assertion fails first. That is
acceptable, because failing earlier is possibly closer to the issue and
helps debugging. Also, when the additional failure is fixed and passes,
we still will fail at the assertion we are trying to debug.
In particular, an access to nmtst_get_rand*()/nmtst_rand*() must not
depend on nmtst_is_debug(), because then different randomized paths
are taken based on whether debugging is enabled.
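As a hedged illustration, a correct pattern looks roughly like this (the
test body, do_operation() and check_invariant() are hypothetical; only the
nmtst_*() helpers are the real ones from "nm-test-utils.h"):

    static void
    test_foo (void)
    {
        /* fetch the random number unconditionally; never gate
         * nmtst_get_rand*() on nmtst_is_debug(): */
        guint32 v = nmtst_get_rand_uint32 ();

        if (nmtst_is_debug ())
            g_print (">>> using value %u\n", v); /* 1) more debug logging */

        do_operation (v);

        if (nmtst_is_debug ())
            g_assert (check_invariant ()); /* 2) more assertion checks */
    }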
(cherry picked from commit 3f2ad76363)
The code was just wrong. Usually in gitlab-ci, NMTST_SEED_RAND is
unset, so the previous code would not have set it, which means that our
tests effectively ran with NMTST_SEED_RAND="0".
Fuzzing (or randomizing tests) is very useful; we should do that for the
unit tests that run in gitlab-ci. Fix this.
But don't let the test choose a random number. Instead, let the calling
script choose it. That is because we might run the tests more than once
(without debugging and without valgrind; in case of failure, rerun with
debugging; then with valgrind). Those runs should all use the same seed.
This fixes commit 70487d9ff8 ('ci: randomize tests during our CI'),
but as fixing the randomization can break previously passing tests, we
may only want to backport this commit after careful evaluation.
(cherry picked from commit 3bad3f8b24)
It seems this test fails easily under gitlab-ci if we set NMTST_SEED_RAND
to something other than "0". There is nothing particularly special about
"0", except that different random code paths are chosen.
A randomized test that doesn't pass on all systems with all random
paths is broken. Disable it for now; it needs to be fixed.
See-also: https://bugzilla.redhat.com/show_bug.cgi?id=2165141
(cherry picked from commit 14b1a7ba30)
Obviously, it would be nice if our unit tests were fast. However, with
valgrind and a busy machine, some of the tests can take a relatively
long time, in particular those that are marked as "slow" (if you want
to skip them during development, do so via the "NMTST_DEBUG=quick"
environment variable, or "CFLAGS=-DNMTST_TEST_QUICK=TRUE"; see
"nm-test-utils.h").
Anyway. Our tests almost never hit the timeout, and if they do, the most
likely reason is that something was just slower than expected, and the
timeout is a bogus error.
Timeouts only act as a last-resort fail-safe. It is more important to
avoid a false (premature) timeout failure than to minimize the wait time
when a test really hangs, because a real hang is a bug anyway, one that
we will discover and need to fix.
Increase the default test timeout for meson tests to 3 minutes.
Also, "test-route-linux" is known to take a long time. Increase that
timeout even further.
(cherry picked from commit 9ee42c0979)
Currently, the use of the [global-dns] section for setting DNS options
is conditioned on the presence of a nameserver in a [global-dns-domain-*]
section. Attempting to use the section for options alone results in an
error:
[global-dns]
options=timeout:1
Or via D-Bus API:
# busctl set-property org.freedesktop.NetworkManager \
/org/freedesktop/NetworkManager org.freedesktop.NetworkManager \
GlobalDnsConfiguration 'a{sv}' 2 \
"options" as 1 "timeout:1" \
"domains" a{sv} 0
...
Nov 24 13:15:21 zmok.local NetworkManager[501184]: <debug> [1669292121.3904]
manager: set global DNS failed with error: Global
DNS configuration is missing the default domain
The insistence on the existence of a default [global-dns-domain-*]
section would make sense if other [global-dns-domain-...] sections were
present.
However, the user might only want to set the options in resolv.conf and
still use connection-provided nameservers for the actual resolving.
Lift the limitation by allowing [global-dns] to be used alone, while
still insisting on [global-dns-domain-*] being there in the presence of
other domain-specific options.
https://bugzilla.redhat.com/show_bug.cgi?id=2019306
(cherry picked from commit 1f0d1d78d2)
This effectively causes [*global-dns-domain-*] sections in configuration
to be ignored unless [*global-dns] is also present. This happens because
nm_config_keyfile_has_global_dns_config() mixes the group names up and
attempts to look up [.intern.global-dns-domain-*] in the user
configuration and [global-dns-domain-*] in the internal one.
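A simplified sketch of the intended pairing (the group names are spelled
out as literal strings here; the real code uses NetworkManager's
group-name defines):

    /* user configuration uses the plain group names, the internal
     * configuration the ".intern."-prefixed ones; the bug paired
     * them up the wrong way around: */
    const char *group_dns    = is_intern ? ".intern.global-dns"
                                         : "global-dns";
    const char *group_prefix = is_intern ? ".intern.global-dns-domain-"
                                         : "global-dns-domain-";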
Fixes: da0ded4927 ('config: drop global-dns.enable option in favor of .config.enable')
(cherry picked from commit de1c06daab)
It is possible that an RA leads to two routes having the same prefix
and the same prefix length. One of them, however, refers to the on-link
prefix, and the other to a route from the Route Information option.
(Moreover, they might have different route preferences.)
Hence, if the two routes differ in the on-link property, both are added,
and the route from the Route Information option receives a metric
penalty.
Fixes #1163.
(cherry picked from commit 11832e2ba3)
It's not clear why this happens, but recently in our gitlab-ci, all the
Fedora machines started to fail. It happens in the step
check_run_clean 6 && test $IS_FEDORA = 1 -o $IS_CENTOS = 1 && ./contrib/fedora/rpm/build_clean.sh -g -w crypto_gnutls -w debug -w iwd -w test -W meson
which explains why it only affects Fedora configurations.
It does not always fail, but the probability of failure is high.
The failure is:
...
rm -f et.gmo && /usr/bin/msgmerge --for-msgfmt -o et.1po et.po NetworkManager.pot && /usr/bin/msgfmt -c --statistics --verbose -o et.gmo et.1po && rm -f et.1po
libgomp: Thread creation failed: Resource temporarily unavailable
make[3]: *** [Makefile:383: et.gmo] Error 1
Maybe some new resource restriction in gitlab. Let's add this workaround.
I don't really understand the cause, but this seems to avoid it, which is
good enough for me.
When we run `NM_TEST_SELECT_RUN=x ./.gitlab-ci/run-test.sh` to run one
step only, we should not do the final clean, so that the build artifacts
are preserved.
When we register/unregister a commit-type or when we add/remove
config-data to/from NML3Cfg, that act only does the
registration/addition. Only on the next commit are the changes actually
applied. The purpose of that is to add/register multiple configurations
and commit them later, when ready.
However, it would be wrong not to do the commit a short time after. The
configuration state is dirty and needs to be committed, and that should
happen soon.
Worse, when an interface disappears, NMDevice will clear the ifindex and
the NML3Cfg instance, thereby unregistering all config data and commit
types. If we previously committed something, we need to do another
follow-up commit to clean up that state.
That is for example important with ECMP routes, which are registered in
NMNetns. When NML3Cfg goes down, it must always unregister them to
properly clean up. Failure to do so causes an assertion failure and
crash. This change fixes that.
Fix that by automatically scheduling an idle commit on
register/unregister/add/remove of commit-type/config-data.
It should *always* be permissible to call an AUTO commit from an idle
handler, because the various parties cannot use NML3Cfg independently
of each other, and they cannot know when somebody else does a commit.
Note that NML3Cfg remembers whether it previously did a commit
("commit_type_update_sticky"), so even if the last commit-type gets
unregistered, the next commit will still do a sticky update (one more
time).
The only remaining question is what happens during quitting. When
quitting, NetworkManager may want to leave some interfaces up and
configured. If we were to properly clean up the NML3Cfg, we might need a
mechanism to handle that. However, currently we just leak everything
during quit, so that is not a concern now. It is something that needs
to be addressed in the future.
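Sketched, the idle-commit pattern added here amounts to something like
the following (simplified; the field and function names are illustrative,
not the exact ones from the NML3Cfg code):

    static gboolean
    _commit_on_idle_cb (gpointer user_data)
    {
        NML3Cfg *self = user_data;

        self->commit_on_idle_source = 0;
        nm_l3cfg_commit (self, NM_L3_CFG_COMMIT_TYPE_AUTO);
        return G_SOURCE_REMOVE;
    }

    static void
    _commit_on_idle_schedule (NML3Cfg *self)
    {
        /* called from every register/unregister/add/remove of
         * commit-type/config-data; coalesces into one idle commit: */
        if (self->commit_on_idle_source == 0)
            self->commit_on_idle_source = g_idle_add (_commit_on_idle_cb, self);
    }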
https://bugzilla.redhat.com/show_bug.cgi?id=2158394
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1505
Routes can be added or modified with `ip route
add|change|replace|append|prepend`. Add a test that randomly performs
such operations and checks that the cache stays consistent.
https://bugzilla.redhat.com/show_bug.cgi?id=2060684
nmtstp_env1_add_test_func() prepares a certain environment (with a dummy
interface) that is used by some tests. Extend it to allow creating more
than one interface (currently up to two).
The major point of NMDedupMultiIndex is that it can de-duplicate the
objects. It thus makes sense that everybody uses the same instance. Make
the multi-idx instance of NMPlatform configurable.
This is not used outside of unit tests, because the daemon currently
always creates one platform instance, which everybody then re-uses.
While this is (currently) only used by tests, and the performance
optimization of de-duplication is irrelevant for tests, it is still
useful: a test can then check whether two separate NMPlatform objects
share the same instance and whether objects get de-duplicated.
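In a test this allows, roughly (create_test_platform() is a hypothetical
helper standing in for the test's platform setup; the multi-idx
constructor is the one from the shared dedup-multi code):

    NMDedupMultiIndex *multi_idx = nm_dedup_multi_index_new ();

    /* two independent platform instances sharing one multi-idx: */
    NMPlatform *pl1 = create_test_platform (multi_idx);
    NMPlatform *pl2 = create_test_platform (multi_idx);

    /* objects cached by pl1 and pl2 now get de-duplicated against
     * the same index, which the test can assert on. */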
There really is no way around this. As we don't cache all routes
(e.g. those ignored based on rtm_protocol or rtm_type), we cannot know
which route was replaced when we get an NLM_F_REPLACE message.
We need to request a new dump in that case, which can be expensive, if
there are a lot of routes or if replace happens frequently.
The only possible solutions would be:
1) NetworkManager caches all routes, but then it also needs to make sure
to get *everything* right. In particular, to understand every relevant
route attribute (including those added in the future, which is
impossible).
2) kernel provides a reasonable API (rhbz#1337855, rhbz#1337860) that
allows us to sufficiently understand what is going on based on the
netlink notifications.
When you issue
ip route replace broadcast 1.2.3.4/32 dev eth0
then this route may well replace a (unicast) route that we have in
the cache.
Previously, we would right away ignore such messages in
_new_from_nl_route(), which means we would miss the fact that a route
got replaced.
Instead, we need to parse the message at least far enough that we can
detect and handle the replace.
We don't cache certain routes, for example based on the protocol. This is
a performance optimization to ignore routes that we usually don't care
about.
Still, if the user does `ip route replace` with such a route, then we
need to pass it to nmp_cache_update_netlink_route(), so that we can
properly remove the replaced route.
Knowing which route was replaced might be impossible, as our cache does
not contain all routes. Likely all that nmp_cache_update_netlink_route()
can do is to set "resync_required" for NLM_F_REPLACE. But for that, it
should see the object first.
This also means that, if we ever write a BPF filter, it must not filter
out messages that contain NLM_F_REPLACE, because that would lead to
cache inconsistencies.
The route table is part of the weak-id. You can see that with:
ip route replace unicast 1.2.3.4/32 dev eth0 table 57
ip route replace unicast 1.2.3.4/32 dev eth0 table 58
afterwards, `ip route show table all` will list both routes. The replace
operation is only per-table. Note that NMP_CACHE_ID_TYPE_ROUTES_BY_WEAK_ID
already got this right.
Fixes: 10ac675299 ('platform: add support for routing tables to platform cache')
By setting "NMTST_DEBUG" to any non-empty string, "is_debug" is enabled.
So "NMTST_DEBUG='debug,...'" is mostly redundant. Note however you can
still disable debug mode explicitly, like "NMTST_DEBUG=no-debug,...",
which can make sense if you want to set other flags but not enabling
debug mode (like "NMTST_DEBUG=no-debug,quick").
You can also explicitly set the log level ("NMTST_DEBUG='log-level=TRACE,...'"
or enable trace debugging with "NMTST_DEBUG='d,...'", where "d" (or "D")
is shorthand for "NMTST_DEBUG=log-level=TRACE,no-expect-message,...".
Anyway. You can explicitly set the log level with "log-level=" or "d".
But if debug logging was enabled merely via the "debug" flag, it only
logged at level "<debug>". That seems not best. Instead, enable the
"<trace>" level by default in debug mode.
That's useful, because there is no clear distinction between the
"<debug>" and "<trace>" levels. When debugging, you really want all the
information you can get; you can always filter it later (`grep` is a
thing).
There is g_ptr_array_copy() in GLib, but only since 2.68, so we cannot
use it. We had a compat implementation nm_g_ptr_array_copy(); however,
that one always requires an additional parameter: the free function of
the new array.
g_ptr_array_copy() always does a deep clone and uses the source array's
free function. We don't have access to that free function (which seems
quite a limitation of the GPtrArray API), so our nm_g_ptr_array_copy()
cannot be exactly the same.
Previously, nm_g_ptr_array_copy() aimed to be as similar as possible to
g_ptr_array_copy(), and it required the caller to pass a free function
identical to the source array's. That seems an unnecessary limitation,
and our compat implementation still looks different and has a different
name. If we were able to fully re-implement it, we would instead add it
to "nm-glib.h".
Anyway. As our implementation already differs, there is no need for the
arbitrary limitation to only perform deep copies. Instead, also allow
shallow copies. Rename the function to nm_g_ptr_array_new_clone() to
make it clearly distinct from g_ptr_array_copy().
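For illustration, the two modes look roughly like this (the argument
list is sketched from the commit description and may not match the
header verbatim):

    /* shallow clone: the new array borrows the same pointers and
     * gets no free function: */
    GPtrArray *shallow = nm_g_ptr_array_new_clone (src, NULL, NULL, NULL);

    /* deep clone: duplicate every element and own the copies: */
    GPtrArray *deep = nm_g_ptr_array_new_clone (src,
                                                (GCopyFunc) g_strdup,
                                                NULL,
                                                g_free);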
CURLOPT_PROTOCOLS [0] was deprecated in libcurl 7.85.0 with
CURLOPT_PROTOCOLS_STR [1] as a replacement.
Well, technically it was only deprecated in 7.87.0, and retroactively
marked as deprecated since 7.85.0 [2]. But CURLOPT_PROTOCOLS_STR exists
since 7.85.0, so that's what we want to use.
This causes compiler warnings and build errors:
../src/core/nm-connectivity.c: In function 'do_curl_request':
../src/core/nm-connectivity.c:770:5: error: 'CURLOPT_PROTOCOLS' is deprecated: since 7.85.0. Use CURLOPT_PROTOCOLS_STR [-Werror=deprecated-declarations]
770 | curl_easy_setopt(ehandle, CURLOPT_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS);
| ^~~~~~~~~~~~~~~~
In file included from ../src/core/nm-connectivity.c:13:
/usr/include/curl/curl.h:1749:3: note: declared here
1749 | CURLOPTDEPRECATED(CURLOPT_PROTOCOLS, CURLOPTTYPE_LONG, 181,
| ^~~~~~~~~~~~~~~~~
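The fix boils down to a version-guarded call along these lines (a
sketch; CURL_AT_LEAST_VERSION is curl's own version-check macro from
<curl/curl.h>, and CURLOPT_PROTOCOLS_STR parses the protocol names
case-insensitively):

    #if CURL_AT_LEAST_VERSION(7, 85, 0)
        curl_easy_setopt (ehandle, CURLOPT_PROTOCOLS_STR, "HTTP,HTTPS");
    #else
        curl_easy_setopt (ehandle, CURLOPT_PROTOCOLS,
                          (long) (CURLPROTO_HTTP | CURLPROTO_HTTPS));
    #endif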
This patch is largely taken from the systemd patch [3].
Based-on-patch-by: Frantisek Sumsal <frantisek@sumsal.cz>
[0] https://curl.se/libcurl/c/CURLOPT_PROTOCOLS.html
[1] https://curl.se/libcurl/c/CURLOPT_PROTOCOLS_STR.html
[2] 6967571bf2
[3] e61a4c0b7c
Fixes: 7a1734926a ('connectivity,cloud-setup: restrict curl protocols to HTTP and HTTPS')