LLMNR and mDNS settings can have their global default value configured
in "NetworkManager.conf".
Global default values should work such that all regular values of the property
can be configured explicitly in the connection profile. The special "default" value
only indicates that the global default should be looked up; it should not have a
meaning of its own.
Note that if the mDNS/LLMNR settings are left unspecified, we will set the
argument of the SetLinkMulticastDNS() and SetLinkLLMNR() calls to "",
which means that systemd-resolved decides on a default. Also, the default
value differs depending on the DNS plugin. This is all fine, however:
in this case, the ultimate default value depends on other things (like
the DNS plugin), but each possible value is in fact explicitly
configurable. We do the same for "ipv6.ip6-privacy".
Anyway, clean up the documentation a bit and try to better explain what
the default is.
This avoids unnecessarily fetching permissions, which are not needed
most of the time.
For `nmcli general permissions` we need to fetch the permissions. This is
now solved better: previously the code waited for any permission to become
not UNKNOWN. That was a hack, because there are cases where all permissions
stay UNKNOWN (hidepid mount option) and nmcli would hang.
There is a downside too: for `nmcli general permissions` we now first
need to wait for NMClient to initialize, before starting to fetch
permissions. Previously, we would call GetPermissions() in parallel
with initializing NMClient. It now takes longer.
That should be fixed by refactoring the code in nmcli to not wait for
NMClient to be fully initialized before requesting the permissions.
Using sync init (nm_client_new()) has an overhead, as it requires an internal
GMainContext to preserve the order of D-Bus messages. Let's avoid
that by using the async init. Note that the difference here is that we will
iterate the caller's GMainContext while creating the instance. But that
is no problem for nmtui at that point.
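A minimal sketch of the async variant (the callback and the trivial main
loop below are made up for illustration):

  #include <NetworkManager.h>

  static void
  got_client_cb (GObject *source, GAsyncResult *result, gpointer user_data)
  {
      GMainLoop *loop  = user_data;
      GError    *error = NULL;
      NMClient  *client;

      client = nm_client_new_finish (result, &error);
      if (!client) {
          g_printerr ("failed to create NMClient: %s\n", error->message);
          g_clear_error (&error);
      } else
          g_object_unref (client);
      g_main_loop_quit (loop);
  }

  int
  main (void)
  {
      GMainLoop *loop = g_main_loop_new (NULL, FALSE);

      /* the async init iterates the caller's GMainContext instead of
       * spinning up an internal one like nm_client_new() does. */
      nm_client_new_async (NULL, got_client_cb, loop);
      g_main_loop_run (loop);
      g_main_loop_unref (loop);
      return 0;
  }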
For example with
mount -o remount,rw,hidepid=1 /proc/
all permission checks will fail with an error. Internally, we map the
failure to NM_AUTH_CALL_RESULT_UNKNOWN.
<trace> [1575645672.5958] auth: call[1069]: CheckAuthorization(org.freedesktop.NetworkManager.enable-disable-connectivity-check), subject=unix-process[pid=468316, uid=1000, start=1912881]
<trace> [1575645672.6295] auth: call[1069]: completed: failed: GDBus.Error:org.gtk.GDBus.UnmappedGError.Quark._g_2dfile_2derror_2dquark.Code4: Failed to open file “/proc/468316/status”: No such file or directory
<debug> [1575645672.6296] manager: unknown auth chain result 0
First of all, we should not log a debug message about that (we already log the
result of permission checks separately).
Also, we should include the unknown result in the response. The permission was
checked, and omitting it from the GetPermissions() result seems wrong (even if
we failed to get the result).
Note that "unknown" is now a new possible return value on D-Bus. But
see how nm_permission_result_to_client() would map such a value to
"unknown" as well. So, it's probably a fine extension of the D-Bus API.
Note that the NMClient API is currently quite limited. The user cannot know
whether permissions were received (and if they were received, they
cannot distinguish between UNKNOWN and absent). Hence, returning
all permissions as unknown (or not at all) causes `nmcli general permissions`
to hang. The solution would be to improve the NMClient API to let the user
know when the permissions are received. But this patch fixes neither
the hanging of nmcli nor the limitation of NMClient's API.
"nm-cloud-setup" can by configured via environment variables. Mark all the
names of such variables with NMCS_ENV_VARIABLE() macro. This allows to grep
for them.
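For illustration, the macro can be a plain pass-through around the string
literal so that all uses remain greppable (the variable name below is just
an example):

  #include <glib.h>

  #define NMCS_ENV_VARIABLE(var) ""var""

  static const char *
  _get_log_level (void)
  {
      return g_getenv (NMCS_ENV_VARIABLE ("NM_CLOUD_SETUP_LOG"));
  }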
"nm-cloud-setup" is supposed to work without configuration.
However, it (obviously) fetches data from the network you are connected to (which
might be untrusted or controlled by somebody malicious). The tool cannot
protect you against that, also because the metadata services use HTTP and not
HTTPS. This means you should run the tool only when it's suitable for your
environment, that is, in the right cloud.
Usually, the user/admin/distributor would know for which cloud to enable the tool.
It's also wasteful to repeatedly probe for a cloud that isn't there.
So, instead disable all providers by default and require opting in by setting an
environment variable.
This can be conveniently done via `systemctl edit nm-cloud-setup.service` to
set Environment=. Of course, an image can also pre-deploy such an override file.
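For example, a drop-in like the following opts in to one provider (the
exact variable name depends on the provider; NM_CLOUD_SETUP_EC2 is shown
here only as an example):

  # /etc/systemd/system/nm-cloud-setup.service.d/override.conf
  [Service]
  Environment=NM_CLOUD_SETUP_EC2=yes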
We don't want the dispatcher script to automatically execute the tool
merely because the user installed the package. Instead, the user
should use `systemctl enable/disable` to control whether the service
is active (or via the timer).
Hence, let the dispatcher script check whether the service is enabled.
That leads to a different problem: we need to make it possible for
"nm-cloud-setup.service" to be enabled in the first place. As such, add
an [Install] section and let it be wanted by NetworkManager.service. The
problem with this is that now the tool will run very early, just after
NetworkManager started. At that point, it might not yet have set up
networking. But that should be acceptable: after all, the tool either
fails to fetch metadata that early, or it succeeds. Very likely, it
will be aborted by the dispatcher's restart command.
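The added section boils down to roughly:

  [Install]
  WantedBy=NetworkManager.service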
Add a new 'carrier' flag to the InterfaceFlags property of devices to
indicate the current carrier state.
The new flag is equivalent to the 'lower-up' flag for all devices
except the ones that use a non-standard carrier detection mechanism
like NMDeviceAdsl.
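For example, a libnm client could query the new flag roughly like this
(the helper function name is made up):

  #include <NetworkManager.h>

  /* whether the device currently has carrier, according to the new flag */
  static gboolean
  device_has_carrier (NMDevice *device)
  {
      return (nm_device_get_interface_flags (device)
              & NM_DEVICE_INTERFACE_FLAG_CARRIER) != 0;
  }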
We need to actually read the stdout/stderr of the nmcli processes.
Otherwise, the pipe might fill up and block the process (eventually
leading to a timeout).
Debugging tests that are called by test-client.py is cumbersome.
One way would be to set NM_TEST_CLIENT_NMCLI_PATH to a wrapper script.
However, then we want to know from the wrapper script which test
we are currently calling. Add that to the environment.
During the libnm rework, we might still emit the permissions-changed
signal while destructing the instance. That triggers an assertion.
Backtrace, with a different libnm:
#0 _g_log_abort (breakpoint=1) at ../glib/gmessages.c:554
#1 0x00007ffff77d09b6 in g_logv (log_domain=0x7ffff7f511cd "libnm", log_level=G_LOG_LEVEL_CRITICAL, format=<optimized out>, args=args@entry=0x7fffffffcb80) at ../glib/gmessages.c:1373
#2 0x00007ffff77d0b83 in g_log
(log_domain=log_domain@entry=0x7ffff7f511cd "libnm", log_level=log_level@entry=G_LOG_LEVEL_CRITICAL, format=format@entry=0x7ffff78215df "%s: assertion '%s' failed")
at ../glib/gmessages.c:1415
#3 0x00007ffff77d137d in g_return_if_fail_warning
(log_domain=log_domain@entry=0x7ffff7f511cd "libnm", pretty_function=pretty_function@entry=0x7ffff7f58aa0 <__func__.40223> "nm_client_get_permission_result", expression=expression@entry=0x7ffff7f54830 "NM_IS_CLIENT (client)") at ../glib/gmessages.c:2771
#4 0x00007ffff7e9de9a in nm_client_get_permission_result (client=0x0, permission=permission@entry=NM_CLIENT_PERMISSION_ENABLE_DISABLE_NETWORK) at libnm/nm-client.c:3816
#5 0x0000555555593ba3 in got_permissions (nmc=nmc@entry=0x55555562ec20 <nm_cli>) at clients/cli/general.c:587
#6 0x0000555555593bcb in permission_changed (client=<optimized out>, permission=<optimized out>, result=<optimized out>, nmc=0x55555562ec20 <nm_cli>) at clients/cli/general.c:600
#7 0x00007ffff73b1aa8 in ffi_call_unix64 () at ../src/x86/unix64.S:76
#8 0x00007ffff73b12a4 in ffi_call (cif=cif@entry=0x7fffffffced0, fn=fn@entry=0x555555593bbf <permission_changed>, rvalue=<optimized out>, avalue=avalue@entry=0x7fffffffcde0)
at ../src/x86/ffi64.c:525
#9 0x00007ffff78b4746 in g_cclosure_marshal_generic_va
(closure=<optimized out>, return_value=<optimized out>, instance=<optimized out>, args_list=<optimized out>, marshal_data=<optimized out>, n_params=<optimized out>, param_types=<optimized out>) at ../gobject/gclosure.c:1614
#10 0x00007ffff78b3996 in _g_closure_invoke_va (closure=0x5555556f4330, return_value=0x0, instance=0x55555565a020, args=0x7fffffffd180, n_params=2, param_types=0x555555656f00)
at ../gobject/gclosure.c:873
#11 0x00007ffff78d0228 in g_signal_emit_valist (instance=0x55555565a020, signal_id=<optimized out>, detail=0, var_args=var_args@entry=0x7fffffffd180) at ../gobject/gsignal.c:3306
#12 0x00007ffff78d09d3 in g_signal_emit (instance=instance@entry=0x55555565a020, signal_id=<optimized out>, detail=detail@entry=0) at ../gobject/gsignal.c:3453
#13 0x00007ffff7e8989a in _emit_permissions_changed (self=self@entry=0x55555565a020, permissions=permissions@entry=0x555555690e40 = {...}, force_unknown=force_unknown@entry=1)
at libnm/nm-client.c:2874
#14 0x00007ffff7e9a0c9 in _init_release_all (self=self@entry=0x55555565a020) at libnm/nm-client.c:6092
#15 0x00007ffff7e9bcde in dispose (object=0x55555565a020 [NMClient]) at libnm/nm-client.c:6838
#16 0x00007ffff78b8c28 in g_object_unref (_object=<optimized out>) at ../gobject/gobject.c:3344
#17 g_object_unref (_object=0x55555565a020) at ../gobject/gobject.c:3274
#18 0x00005555555badcf in nmc_cleanup (nmc=0x55555562ec20 <nm_cli>) at clients/cli/nmcli.c:924
#19 0x00005555555bbea7 in main (argc=<optimized out>, argv=0x7fffffffd498) at clients/cli/nmcli.c:987
Otherwise repeated "nmcli d wifi hotspot" commands create multiple
Hotspot connections, which is just sad. We already reuse existing
connections with "nmcli d wifi connect" -- let's just do a similar thing
here.
A quick overview of the currently connected Wi-Fi network, including
credentials. Comes in handy if someone wants to connect more devices to
their Hotspot or to the same network they are connected to.
In a future commit it will be useful to know the activation details
when the activation succeeds.
This also makes the state tracking of the ongoing activation more
elegant, since we have our device and AC neatly packed together and
can treat their respective state changes consistently.
Public API should validate input arguments with g_return_*().
Tag the task with the source function (using nm_g_task_new())
and check it in the corresponding _finish() function.
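With plain GLib (nm_g_task_new() is an internal helper that wraps this),
the pattern looks roughly as follows; the frobnicate names are purely
illustrative:

  #include <gio/gio.h>

  static void
  frobnicate_async (GObject *self,
                    GCancellable *cancellable,
                    GAsyncReadyCallback callback,
                    gpointer user_data)
  {
      GTask *task;

      g_return_if_fail (G_IS_OBJECT (self));

      /* tag the task with the source function... */
      task = g_task_new (self, cancellable, callback, user_data);
      g_task_set_source_tag (task, frobnicate_async);

      /* ... start the operation; here we complete right away */
      g_task_return_boolean (task, TRUE);
      g_object_unref (task);
  }

  static gboolean
  frobnicate_finish (GObject *self, GAsyncResult *result, GError **error)
  {
      /* ... and verify the tag in the corresponding _finish() */
      g_return_val_if_fail (G_IS_OBJECT (self), FALSE);
      g_return_val_if_fail (g_task_is_valid (result, self), FALSE);
      g_return_val_if_fail (g_async_result_is_tagged (result, frobnicate_async),
                            FALSE);

      return g_task_propagate_boolean (G_TASK (result), error);
  }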
The default timeout is really only here to abort the test when it has
definitely failed, to avoid keeping it hanging forever. It's not part
of any strict assertions and should be large.
Fixes: bb4b749595 ('clients/tests: don't wait for first job before scheduling parallel jobs')
Previously, the test would kick off 15 processes in parallel, but
the first job in the queue would block more processes from being
started.
That is, async_start() would only start 15 processes, but since none of
them were reaped before async_wait() was called, no more than 15 jobs
were running during the start phase. That is not a real issue, because
the start phase is non-blocking and queues all the jobs quickly. It's
not really expected that many processes have already completed during that time.
Anyway, this was a bit ugly.
The bigger problem is that async_wait() would always block for the
first job to complete, before starting more processes. That means,
if the first job in the queue takes unusually long, then this blocks
other processes from getting reaped and new processes from being
started.
Instead, don't block on just one job, but poll each of them in turn for a
short amount of time. Whichever process exits first will be completed
and more jobs will be started.
In fact, in the current setup it's hard to notice any difference,
because all nmcli invocations take about the same time and are
relatively fast. That this approach parallelizes better can be seen
when the runtime of jobs varies more strongly (and some invocations take
notably longer). As we later want to run nmcli under valgrind,
this will probably make a difference.
An alternative would be not to poll()/wait() for child processes,
but somehow get notified. For example, we could use a GMainContext
and watch child processes. But that's probably more complicated
to do, so let's keep the naive approach with polling.