"nm-cloud-setup" can by configured via environment variables. Mark all the
names of such variables with NMCS_ENV_VARIABLE() macro. This allows to grep
for them.
(cherry picked from commit 7b24d6e2dc)
"nm-cloud-setup" is supposed to work without configuration.
However, it (obviously) fetches data from the network you are connected to (which
might be untrusted or controlled by somebody malicious). The tool cannot
protect you against that, also because the meta data services uses HTTP and not
HTTPS. It means, you should run the tool only when it's suitable for your
environment, that is: in the right cloud.
Usually, the user/admin/distributor would know for which cloud the enable the tool.
It's also wasteful to repeatedly probe for the unavailable cloud.
So, instead disable all providers by default and require to opt-in by setting an
environment variable.
This can be conveniently done via `systemctl edit nm-cloud-provider.service` to
set Environment=. Of course, a image can also pre-deploy such am override file.
(cherry picked from commit ff816dec17)
We don't want the dispatcher script to automatically execute the tool
merely because the user installed the package. Instead, the user
should use `systemctl enable/disable` to control whether the service
is active (or via the timer).
Hence, let the dispatcher script check whether the service is enabled.
That leads to a different problem: we need to make it possible for
"nm-cloud-setup.service" to be enabled in the first place. As such, add
an [Install] section and let it be wanted by NetworkManager.service. The
problem with this is that now the tool will run very early, just after
NetworkManager started. At that point, it might not yet have set up
networking. But that should be acceptable; after all, the tool either
fails to fetch the meta data that early, or it succeeds. Very likely, it
will be aborted by the dispatcher's restart command.
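Conceptually, the dispatcher-side check amounts to something like the
following sketch (the real check lives in the shell dispatcher script;
Python is used here only for illustration):

```python
# Sketch only: the real check is done by the shell dispatcher script.
# Run the tool only if the systemd service has been enabled by the admin.
import subprocess

def cloud_setup_enabled():
    # "systemctl is-enabled" exits with 0 only when the unit is enabled
    ret = subprocess.run(["systemctl", "--quiet", "is-enabled",
                          "nm-cloud-setup.service"])
    return ret.returncode == 0

if cloud_setup_enabled():
    subprocess.run(["systemctl", "restart", "nm-cloud-setup.service"])
```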
(cherry picked from commit 953e01336a)
Add a new 'carrier' flag to the InterfaceFlags property of devices to
indicate the current carrier state.
The new flag is equivalent to the 'lower-up' flag for all devices
except the ones that use a non-standard carrier detection mechanism
like NMDeviceAdsl.
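For illustration only (assuming a libnm build that already contains this
change), the flag can be read via the GObject introspection bindings; the
names below follow the usual gi mapping of the C API:

```python
# Illustration: print each device's interface flags, including the new
# 'carrier' bit. Assumes a libnm that already exposes the flag.
import gi

gi.require_version("NM", "1.0")
from gi.repository import NM

client = NM.Client.new(None)
for device in client.get_devices():
    flags = device.get_interface_flags()
    print("%-16s up=%-5s lower-up=%-5s carrier=%s" % (
        device.get_iface(),
        bool(flags & NM.DeviceInterfaceFlags.UP),
        bool(flags & NM.DeviceInterfaceFlags.LOWER_UP),
        bool(flags & NM.DeviceInterfaceFlags.CARRIER)))
```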
We need to actually read the stdout/stderr of the nmcli processes.
Otherwise, the pipe might fill up and block the process (eventually
leading to a timeout).
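As a generic sketch (not the actual test code), draining both pipes, for
example with communicate(), is the usual way to avoid this with Python's
subprocess module:

```python
# Generic sketch: drain stdout/stderr so the child can never block on a
# full pipe buffer, then collect the exit code.
import subprocess

proc = subprocess.Popen(["nmcli", "general", "status"],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
# communicate() keeps reading both pipes until EOF and then waits for the
# process to exit.
out, err = proc.communicate(timeout=60)
print(proc.returncode, out.decode(), err.decode())
```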
Debugging tests that are called by test-client.py is cumbersome.
One way would be to set NM_TEST_CLIENT_NMCLI_PATH to a wrapper script.
However, the wrapper script then needs to know which test is
currently being run. Add that information to the environment.
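For example, the nmcli invocation could then look roughly like this;
NM_TEST_CLIENT_CHECK_TEST_NAME below is only a placeholder, not
necessarily the variable name that the test actually exports:

```python
# Sketch: export the name of the currently running test so that a wrapper
# script set via NM_TEST_CLIENT_NMCLI_PATH can see it.
import os
import subprocess

def call_nmcli(args, test_name):
    env = dict(os.environ)
    # placeholder name for the exported variable
    env["NM_TEST_CLIENT_CHECK_TEST_NAME"] = test_name
    nmcli = os.environ.get("NM_TEST_CLIENT_NMCLI_PATH", "nmcli")
    return subprocess.run([nmcli] + list(args), env=env,
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE)
```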
During the libnm rework, we might still emit the permissions-changed
signal while disposing the instance. That triggers an assertion.
Backtrace, with a different libnm:
#0 _g_log_abort (breakpoint=1) at ../glib/gmessages.c:554
#1 0x00007ffff77d09b6 in g_logv (log_domain=0x7ffff7f511cd "libnm", log_level=G_LOG_LEVEL_CRITICAL, format=<optimized out>, args=args@entry=0x7fffffffcb80) at ../glib/gmessages.c:1373
#2 0x00007ffff77d0b83 in g_log
(log_domain=log_domain@entry=0x7ffff7f511cd "libnm", log_level=log_level@entry=G_LOG_LEVEL_CRITICAL, format=format@entry=0x7ffff78215df "%s: assertion '%s' failed")
at ../glib/gmessages.c:1415
#3 0x00007ffff77d137d in g_return_if_fail_warning
(log_domain=log_domain@entry=0x7ffff7f511cd "libnm", pretty_function=pretty_function@entry=0x7ffff7f58aa0 <__func__.40223> "nm_client_get_permission_result", expression=expression@entry=0x7ffff7f54830 "NM_IS_CLIENT (client)") at ../glib/gmessages.c:2771
#4 0x00007ffff7e9de9a in nm_client_get_permission_result (client=0x0, permission=permission@entry=NM_CLIENT_PERMISSION_ENABLE_DISABLE_NETWORK) at libnm/nm-client.c:3816
#5 0x0000555555593ba3 in got_permissions (nmc=nmc@entry=0x55555562ec20 <nm_cli>) at clients/cli/general.c:587
#6 0x0000555555593bcb in permission_changed (client=<optimized out>, permission=<optimized out>, result=<optimized out>, nmc=0x55555562ec20 <nm_cli>) at clients/cli/general.c:600
#7 0x00007ffff73b1aa8 in ffi_call_unix64 () at ../src/x86/unix64.S:76
#8 0x00007ffff73b12a4 in ffi_call (cif=cif@entry=0x7fffffffced0, fn=fn@entry=0x555555593bbf <permission_changed>, rvalue=<optimized out>, avalue=avalue@entry=0x7fffffffcde0)
at ../src/x86/ffi64.c:525
#9 0x00007ffff78b4746 in g_cclosure_marshal_generic_va
(closure=<optimized out>, return_value=<optimized out>, instance=<optimized out>, args_list=<optimized out>, marshal_data=<optimized out>, n_params=<optimized out>, param_types=<optimized out>) at ../gobject/gclosure.c:1614
#10 0x00007ffff78b3996 in _g_closure_invoke_va (closure=0x5555556f4330, return_value=0x0, instance=0x55555565a020, args=0x7fffffffd180, n_params=2, param_types=0x555555656f00)
at ../gobject/gclosure.c:873
#11 0x00007ffff78d0228 in g_signal_emit_valist (instance=0x55555565a020, signal_id=<optimized out>, detail=0, var_args=var_args@entry=0x7fffffffd180) at ../gobject/gsignal.c:3306
#12 0x00007ffff78d09d3 in g_signal_emit (instance=instance@entry=0x55555565a020, signal_id=<optimized out>, detail=detail@entry=0) at ../gobject/gsignal.c:3453
#13 0x00007ffff7e8989a in _emit_permissions_changed (self=self@entry=0x55555565a020, permissions=permissions@entry=0x555555690e40 = {...}, force_unknown=force_unknown@entry=1)
at libnm/nm-client.c:2874
#14 0x00007ffff7e9a0c9 in _init_release_all (self=self@entry=0x55555565a020) at libnm/nm-client.c:6092
#15 0x00007ffff7e9bcde in dispose (object=0x55555565a020 [NMClient]) at libnm/nm-client.c:6838
#16 0x00007ffff78b8c28 in g_object_unref (_object=<optimized out>) at ../gobject/gobject.c:3344
#17 g_object_unref (_object=0x55555565a020) at ../gobject/gobject.c:3274
#18 0x00005555555badcf in nmc_cleanup (nmc=0x55555562ec20 <nm_cli>) at clients/cli/nmcli.c:924
#19 0x00005555555bbea7 in main (argc=<optimized out>, argv=0x7fffffffd498) at clients/cli/nmcli.c:987
Otherwise repeated "nmcli d wifi hotspot" commands create multiple
Hostpot connections, which is just sad. We do already reuse existing
connections with "nmcli d wifi connect" -- let's just do a similar thing
here.
A quick overview of the currently connected Wi-Fi network, including
credentials. Comes in handy if someone wants to connect more devices to
their Hotspot or the same network as they are connected to.
In a future commit it will be useful to know the activation
details when the activation succeeds.
This also makes the state tracking of the ongoing activation more
elegant, since we have our device and AC neatly packed together and we
can treat their respective state changes consistently.
Public API should validate input arguments with g_return_*().
Tag the task with the source function (using nm_g_task_new())
and check it in the corresponding _finish() function.
The default timeout is really only there to abort the test when it has
definitely failed, to avoid it hanging forever. It's not part of any
strict assertions and should be large.
Fixes: bb4b749595 ('clients/tests: don't wait for first job before scheduling parallel jobs')
Previously, the test would kick off 15 processes in parallel, but
the first job in the queue would block more processes from being
started.
That is, async_start() would only start 15 processes, but since none of
them were reaped before async_wait() was called, no more than 15 jobs
were running during the start phase. That is not a real issue, because
the start phase is non-blocking and queues all the jobs quickly. It's
not really expected that many processes would have completed already
during that time. Anyway, this was a bit ugly.
The bigger problem is that async_wait() would always block for the
first job to complete, before starting more processes. That means,
if the first job in the queue takes unusually long, then this blocks
other processes from getting reaped and new processes from being
started.
Instead, don't block on just one job, but poll each of them in turn for a
short amount of time. Whichever process exits first will be completed
and more jobs will be started.
In fact, in the current setup it's hard to notice any difference,
because all nmcli invocations take about the same time and are
relatively fast. That this approach parallelizes better can be seen
when the runtime of the jobs varies more strongly (and some invocations take
a notably longer time). As we later want to run nmcli under valgrind,
this probably will make a difference.
An alternative would be not to poll()/wait() for child processes,
but somehow get notified. For example, we could use a GMainContext
and watch child processes. But that's probably more complicated
to do, so let's keep the naive approach with polling.
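In rough terms, the waiting loop then becomes something like the following
simplified sketch (not the actual async_wait() implementation; reading the
processes' output is omitted):

```python
# Simplified sketch of the polling approach.
import time

def wait_all(running, start_next, poll_interval=0.05):
    # 'running' holds subprocess.Popen objects that were already started;
    # start_next() starts the next queued job and returns it, or None when
    # the queue is empty.
    finished = []
    while running:
        progressed = False
        for proc in list(running):
            if proc.poll() is not None:   # reap whichever process exited first
                running.remove(proc)
                finished.append(proc)
                nxt = start_next()
                if nxt is not None:       # keep the pipeline full
                    running.append(nxt)
                progressed = True
        if not progressed:
            time.sleep(poll_interval)     # short poll instead of blocking on one job
    return finished
```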
If a new command was requested while a client was in the process of being
created, we were just requesting a new client.
This was causing a leak, so make sure this no longer happens.
If waiting for the process failed, we didn't close the pipes and no clear
output of what happened was shown.
So use a finally clause to close the pipes and print stdout and stderr
on failure.
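A generic sketch of the pattern (not the actual test code):

```python
# Always close the pipes, and show the child's output when waiting for it
# fails.
import subprocess

def wait_checked(proc, timeout):
    try:
        out, err = proc.communicate(timeout=timeout)
        return proc.returncode, out, err
    except subprocess.TimeoutExpired:
        proc.kill()
        out, err = proc.communicate()     # collect whatever was written so far
        print("child timed out; stdout=%r stderr=%r" % (out, err))
        raise
    finally:
        # close the pipes on every path (a no-op if communicate() already
        # closed them)
        for pipe in (proc.stdout, proc.stderr):
            if pipe is not None:
                pipe.close()
```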
The dependencies used in the build of the `nmtui` executable and the
`libnmt-newt` library have been reviewed.
The compiler flags they have in common have also been moved to a
`common_c_flags` variable to avoid any confusion.
The dependencies used in the build of `nmcli` have been reviewed and
the unnecessary ones removed. The compiler flags used have also been
moved to one line.
The build file in the `client` `common` directory has been improved
by grouping the objects used in properties and by reviewing the
dependencies used by the tests built there. Finally, the indentation
has also been fixed.
The build file in the `client` `common` directory has been improved
by grouping the objects used in properties and by reviewing the
dependencies used by the libraries built in the file.
The variable holding the compiler flags, `cflags`, has been renamed
to `c_flags` to be consistent with the rest of the build files.
Different objects used in the `test-dispatcher-envp` target
have been grouped together.
The dependency on the `libnm` library has been removed as it is
unnecessary.
The targets that involve the use of the `libnm` library have been
improved by applying a set of changes:
- Generated enum sources variable `libnm_enum` has been renamed to
`libnm_enum_sources` to clearly specify what it is holding.
- Indentation in the `libnm` build and test files has been fixed.
- Sets of objects used in targets have been grouped together.
The `libnm-core` build file has been improved by applying a set of
changes:
- Indentation has been fixed to be consistent.
- Library variable names have been changed to `lib{name}` pattern
following their filename pattern.
- `shared` prefix has been removed from all variables using it.
- Dependencies have been reviewed to store the necessary data.
- The use of the libraries and dependencies created in this file
has been reviewed through the entire source code. This has
required the addition or the removal of different libraries and
dependencies in different targets.
- Some files used directly with the `files` function have been moved
to the build file nearest to their path, because meson stores their full
path seamlessly and they can be used anywhere later.
The `nm-default.h` header is used widely in the code by many
targets. This header includes different headers and needs different
libraries depending on the compilation flags.
A new set of `*nm_default_dep` dependencies has been created to
ease the inclusion of the different directories and libraries.
This allows for cleaner build files and avoids linking unnecessary
libraries, so it has been applied throughout, which in turn allowed
the removal of some dependencies that linked unnecessary libraries.
The `shared` build file has been improved by applying a set of
changes:
- Indentation has been fixed to be consistent.
- Unused libraries and dependencies have been removed.
- Dependencies have been reviewed to store the necessary data.
- Set of objects used in targets have been grouped together.
- Header files have been removed from sources lists, as listing them
there is unnecessary.
- Library variable names have been changed to `lib{name}` pattern
following their filename pattern.
- `shared` prefix has been removed from all variables using it.
- The `version_header` and its related configuration `version_conf`
variables have been renamed to `nm_version_macro*`, following
the input and final file names.