In some cases we might want to load device plugins from multiple
directories. A special case that I have in mind is loading plugins
from subdirectories of the build directory, in order to run
NetworkManager from the build directory.
[thaller@redhat.com: modify original patch]
Many of the configured directories default to being defined in terms
of existing directory configuration. As a result, you usually don't
see the actual directories that will be used. With the added
directories you can at least assemble the information and thus see
which directories will be used.
Although it is possible to generate distributable files with meson
since version 0.41 by using the `ninja dist` command, autotools does
things differently and ends up creating a different distributable
file.
The meson build files have been added to the autotools build files as
distributable files, so the whole meson port is also distributed.
https://mail.gnome.org/archives/networkmanager-list/2018-January/msg00047.html
The connection.mdns setting is a per-connection setting,
so one might expect that one activated device can only have
one MDNS setting at a time.
However, with certain VPN plugins (those that don't have their
own IP interface, like libreswan), the VPN configuration is merged
into the configuration of the device. So, in this case, there
might be multiple settings for one device that must be merged.
We already have a mechanism for that. It's NMIP4Config. Let NMIP4Config
track this piece of information. Although, strictly speaking, this
is not tied to IPv4, the alternative would be to introduce a new
object to track such data, which would be a tremendous effort
and more complicated than this.
Luckily, NMDnsManager and NMDnsPlugin are already equipped to
handle multiple NMIPConfig instances per device (IPv4 vs. IPv6,
and Device vs. VPN).
Also make "connection.mdns" configurable via global defaults in
NetworkManager.conf.
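To illustrate the merging (a sketch only; nm_ip4_config_get_mdns()
and the pick-the-largest rule are assumptions, not necessarily what
the real code does):

    /* sketch: when multiple NMIP4Config instances apply to one device,
     * pick the most permissive mdns value. This relies on the ordering
     * DEFAULT < NO < RESOLVE < YES. */
    static NMSettingConnectionMdns
    merge_mdns (NMIP4Config *const *configs, guint n_configs)
    {
        NMSettingConnectionMdns mdns = NM_SETTING_CONNECTION_MDNS_DEFAULT;
        guint i;

        for (i = 0; i < n_configs; i++)
            mdns = MAX (mdns, nm_ip4_config_get_mdns (configs[i]));
        return mdns;
    }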
Don't track the per-device configuration in NMDnsManager by
the ifname, but by the ifindex. We should consistently treat
the ifindex as the ID of a link, as the kernel does.
At the few places where we actually need the ifname, resolve
it by looking into the platform cache. That is not necessarily
the same as the ifname that is currently tracked by NMDevice,
because netdev interfaces can be renamed, and NMDevice updates
its link properties with a delay. However, the platform cache has
the most recent notion of the correct interface name for an
ifindex, so if we ever hit a race here, we now handle it more
correctly.
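For example, the lookup could be done like this (a sketch;
nm_platform_link_get_name() is the platform cache accessor as
described above, update_one_entry() is a hypothetical caller):

    static void
    update_one_entry (NMPlatform *platform, int ifindex)
    {
        const char *ifname;

        /* get the most recent name for the ifindex from the platform
         * cache, instead of relying on NMDevice's possibly stale copy. */
        ifname = nm_platform_link_get_name (platform, ifindex);
        if (!ifname) {
            /* the link is no longer in the cache; nothing to update */
            return;
        }

        /* ... use @ifname ... */
    }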
This also temporarily drops support for mdns. Will be re-added next,
but differently.
We had two separate queues, one for "SetLinkDNS" and one for
"SetLinkDomains". Merge them into one, and track the operation
as part of the new RequestItem structure.
A visible change from before is that we now make all requests
per-interface first. Previously, we would first make all SetLinkDNS
requests (for all interfaces) and then all SetLinkDomains requests.
It feels more correct to order the requests this way, not by
type.
The reason to merge is that we will next get another operation,
and in the current scheme we would need three GQueue instances.
While at it, refactor the code to use CList. We would now need a new
struct to track the operation anyway, which requires allocating
and freeing it. Previously, we would only track the GVariant argument
as data of the GQueue.
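Roughly, the merged tracking looks like this (a sketch; the exact
field and enum names are assumptions):

    typedef enum {
        REQUEST_ITEM_TYPE_SET_LINK_DNS,
        REQUEST_ITEM_TYPE_SET_LINK_DOMAINS,
    } RequestItemType;

    typedef struct {
        CList request_queue_lst;  /* membership in the single queue */
        RequestItemType type;     /* which D-Bus call to make */
        GVariant *argument;       /* the argument, as tracked before */
    } RequestItem;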
Use a GHashTable instead of a GArray to construct the list of
@interfaces. Also, use NMCListElem instead of GList. With this,
the runtime is O(n*log(n)) instead of O(n^2).
I believe we should take care that all our code has a reasonable
runtime complexity, even if in common use-cases the number of elements
is small. This is not about performance, because we likely expect few
entries anyway, and the direct GArray implementation is likely faster
in those cases. It's about using the data structure that best suits the
access pattern.
The log(n) part comes from sorting the keys. I also believe we should
always aim for a stable behavior. When sending the D-Bus request to
resolved, the order of elements should be in ~some~ defined order.
A cmp() implementation for sorting an array of pointers, where each
pointer is an integer according to GPOINTER_TO_INT().
That comes in handy, for example, if you have a GHashTable with keys
created via GINT_TO_POINTER(). Then you get the list of keys via
g_hash_table_get_keys_as_array() and want to sort them.
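Such a cmp() function boils down to the following (a sketch, suitable
for qsort(); the names are illustrative):

    static int
    cmp_int2ptr_p (gconstpointer a, gconstpointer b)
    {
        /* each element is a pointer encoding an integer
         * via GINT_TO_POINTER(). */
        const int ia = GPOINTER_TO_INT (*((gconstpointer *) a));
        const int ib = GPOINTER_TO_INT (*((gconstpointer *) b));

        return (ia < ib) ? -1 : ((ia > ib) ? 1 : 0);
    }

    static gpointer *
    get_sorted_keys (GHashTable *hash, guint *out_len)
    {
        gpointer *keys;

        keys = g_hash_table_get_keys_as_array (hash, out_len);
        qsort (keys, *out_len, sizeof (gpointer), cmp_int2ptr_p);
        return keys;
    }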
Sometimes, we want to use CList to track a simple data item. But contrary
to GList/GSList, we need to define a structure to hold the data pointer
and the CList member.
Add a generic NMCListElem type that can be used for such simple uses.
Before you ask: why not use GList/GSList? Because even a simple
operation like g_list_append() is O(n), which kinda defeats the purpose
of having a doubly linked list.
This code is added to a new header file "nm-c-list.h"; the reason is
that there is no other good place:
- "nm-utils/c-list.h" is a clone of upstream, it should not deviate.
- "nm-utils/c-list-util.h" contains our utils functions for c-list.h
but should be plain C, independent of glib.
- "nm-utils/nm-shared-utils.h" contains our glib related utilities,
but it should not drag in "c-list.h".
So, "nm-c-list.h" is a utility libray that extends "c-list.h" and
requires glib.
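The type is essentially just the following (a sketch along the lines
of the description; the constructor helper is illustrative):

    typedef struct {
        CList lst;
        gpointer data;
    } NMCListElem;

    /* allocate an element that is not yet linked anywhere */
    static inline NMCListElem *
    nm_c_list_elem_new_stale (gpointer data)
    {
        NMCListElem *elem;

        elem = g_slice_new (NMCListElem);
        elem->data = data;
        return elem;
    }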
"UNKNOWN" is not a good name. If you don't set the property
in the connection explicitly, it should be "DEFAULT".
Also, make "DEFAULT" -1. For one, that ensures that the enum's
underlying integer type is signed. Otherwise, it's cumbersome
to test "if (mdns >= DEFAULT)" because in case of unsigned types,
the compiler will warn about the check always being true.
Also, it allows for "NO" to be zero. These are not strong reasons,
but I tend to think this is better.
Also, don't make the property of NMSettingConnection a CONSTRUCT property.
Initialize the default manually in the init function.
Also, order the numeric values so that DEFAULT < NO < RESOLVE < YES with
YES being largest because it enables *the most*.
Also, keep the internal variable of type int. The only way to set the
field is via the GObject property setter. At that point, don't yet
cast the integer type to enum.
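With these choices, the enum comes out as:

    typedef enum {
        NM_SETTING_CONNECTION_MDNS_DEFAULT = -1,
        NM_SETTING_CONNECTION_MDNS_NO      = 0,
        NM_SETTING_CONNECTION_MDNS_RESOLVE = 1,
        NM_SETTING_CONNECTION_MDNS_YES     = 2,
    } NMSettingConnectionMdns;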
Update nm-policy.c and nm-dns-manager.c so that the connection-specific
settings get propagated to the DNS manager. Currently the only such
value is the mDNS status.
Add an update_mdns() function to the DNS plugin interface. If a DNS
plugin supports mDNS, it can enable mDNS resolving for an interface
with a given index, or additionally register the current hostname.
The mDNS support is currently added only to systemd-resolved DNS plugin.
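A sketch of the new hook (the exact signature is an assumption based
on the description):

    typedef struct {
        GObjectClass parent;

        /* ... other vfuncs ... */

        /* configure mDNS for the interface with the given ifindex:
         * depending on the setting, enable resolving only, or also
         * register the current hostname. */
        void (*update_mdns) (NMDnsPlugin *self,
                             int ifindex,
                             NMSettingConnectionMdns mdns);
    } NMDnsPluginClass;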
When doing changes that affect multiple source files, it's more
convenient to build the parts that have fewer dependencies first.
So, fix the build failures from the core outward.
In a recent commit 1402fa7487 a new
way of testing for Red Hat compatible distributions was added.
However, this new approach does not use a set of files, it uses a
directory, so the check can be done with the `test` command, which
makes the `check_distro.py` script unnecessary.
https://mail.gnome.org/archives/networkmanager-list/2018-January/msg00031.html
Kernel (as of 4.14) merely ACKs our RTM_DELQDISC and RTM_DELTFILTER, not
bothering to signal the full RTM_DEL* message unless the removal is
external to NetworkManager.
https://bugzilla.redhat.com/show_bug.cgi?id=1527197
These files are a template for how to add a new nm-setting-* implementation.
We are not going to add new files to the deprecated libnm-util library,
hence a template for libnm-util is pointless.
libnm-core doesn't have a corresponding template file. Personally, I
don't think such a template is a great idea either, because
- People are not aware that it exists. Hence, it's unused, badly
maintained, and quite possibly does not follow current best practice.
- Just copy an actual settings implementation and start from there.
That seems better to me than having a template.
Tests are commonly created via copy&paste. Hence, it's
better to express a certain concept explicitly via a function
or macro. This way, the implementation of the concept can be
adjusted in one place, without having to change all the callers.
Also, the macro is shorter, and brevity is better for tests, making it
easier to understand what the test does without being bothered by
noise from redundant information.
Also, the macro knows better which message to expect. For example,
messages inside "src" are prepended by nm-logging.c with a level
and a timestamp. The expect macro is aware of that and tests for it:
#define NMTST_EXPECT_NM_ERROR(msg) NMTST_EXPECT_NM (G_LOG_LEVEL_MESSAGE, "*<error> [*] "msg)
This again allows the caller to ignore this prefix, but still assert
more strictly.
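A usage sketch (the message pattern here is made up):

    static void
    test_config_failure (void)
    {
        /* expect one <error> level message; the macro accounts for the
         * level/timestamp prefix that nm-logging.c prepends. */
        NMTST_EXPECT_NM_ERROR ("config: something failed*");

        /* ... run the code that is expected to log the message ... */

        g_test_assert_expected_messages ();
    }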
Note that:
- we compile some source files multiple times. Most notably those
under "shared/".
- we include a default header "shared/nm-default.h" in every source
file. This header is supposed to set up a common environment by
defining and including parts that are commonly used. As we always
include the same header, the header must behave differently depending
on whether the compilation is for libnm-core, NetworkManager or
libnm-glib. E.g. it must include <glib/gi18n.h> or <glib/gi18n-lib.h>
depending on whether we compile a library or an application.
For that, the source files need the NETWORKMANAGER_COMPILATION #define
to behave accordingly.
Extend the define to be composed of flags. These flags are all named
NM_NETWORKMANAGER_COMPILATION_WITH_*; they indicate which parts of the
build are available. E.g. when building libnm-core.la itself, then
WITH_LIBNM_CORE, WITH_LIBNM_CORE_INTERNAL, and WITH_LIBNM_CORE_PRIVATE
are available. When building NetworkManager, WITH_LIBNM_CORE_PRIVATE
is not available but the internal parts are still accessible. When
building nmcli, only WITH_LIBNM_CORE (the public part) is available.
This controls the build with fine granularity.
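Sketched out, the scheme could look like this (the concrete values
and the example condition are assumptions; only the flag names follow
the description above):

    #define NM_NETWORKMANAGER_COMPILATION_WITH_LIBNM_CORE           (1 << 0)
    #define NM_NETWORKMANAGER_COMPILATION_WITH_LIBNM_CORE_INTERNAL  (1 << 1)
    #define NM_NETWORKMANAGER_COMPILATION_WITH_LIBNM_CORE_PRIVATE   (1 << 2)

    /* "shared/nm-default.h" can then branch on the available parts: */
    #if NETWORKMANAGER_COMPILATION & NM_NETWORKMANAGER_COMPILATION_WITH_LIBNM_CORE_PRIVATE
        /* building libnm-core itself: private headers may be included */
    #endif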
The logging macros already prepend a "config: " prefix. Don't
repeat that in the message, otherwise we get
config: config: signal SIGHUP (no changes from disk)
Now:
config: signal: SIGHUP (no changes from disk)
The internal client asserts that the length of the client ID is not more
than MAX_CLIENT_ID_LEN. Avoid that assert by truncating the string.
Also add new nm_dhcp_client_set_client_id_*() setters that either
set the ID based on a string (in our common dhclient-specific
format), or based on binary data (as obtained from the systemd client).
Also, add checks and assertions that the client ID which is
set via nm_dhcp_client_set_client_id() is always of length
of at least 2 (as required by rfc2132, section-9.14).
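The assertion boils down to something like this (a sketch; only the
function name is taken from above):

    void
    nm_dhcp_client_set_client_id (NMDhcpClient *self, GBytes *client_id)
    {
        /* rfc2132, section 9.14: a client-identifier consists of a type
         * octet followed by at least one octet of identifier, hence the
         * minimum length of 2. */
        g_return_if_fail (!client_id || g_bytes_get_size (client_id) >= 2);

        /* ... store the client ID on the instance ... */
    }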
NMDhcpManager used a hash table to keep track of the DHCP client
instances. It never actually did a key-based lookup of a client; the
only place where we search for an existing NMDhcpClient instance is
get_client_for_ifindex(), which just iterated over all clients.
Use a CList instead.
The only thing that one might consider a downside is that now
NMDhcpClient is aware of whether it is part of a list. Previously,
one could theoretically track a NMDhcpClient instance in multiple
NMDhcpManager instances. But that doesn't make sense, because
NMDhcpManager is a singleton. Even if we had multiple NMDhcpManager
instances, one client would still only be tracked by one manager.
This tighter coupling of NMDhcpClient and NMDhcpManager isn't
a problem.
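With CList, the lookup stays the simple walk that it effectively
already was (a sketch; the field names are assumptions):

    static NMDhcpClient *
    get_client_for_ifindex (NMDhcpManager *self, int ifindex)
    {
        NMDhcpManagerPrivate *priv = NM_DHCP_MANAGER_GET_PRIVATE (self);
        NMDhcpClient *client;

        c_list_for_each_entry (client, &priv->dhcp_client_lst_head,
                               dhcp_client_lst) {
            if (nm_dhcp_client_get_ifindex (client) == ifindex)
                return client;
        }
        return NULL;
    }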
This is still the very same approach (in the way the array is split
and how elements are compared). The only difference is that the
recursive implementation is replaced by a non-recursive one.
It's (still) a stable, top-down merge sort.
The non-recursive implementation is better because it avoids the
overhead of the function calls needed to recurse.
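For illustration, the technique on a plain int array (a generic
sketch, not the actual NetworkManager code): the recursion is replaced
by an explicit stack of pending ranges, while the splitting and the
merging stay the same.

    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        size_t lo, hi;
        int merge_pending;  /* 0: still to split; 1: ready to merge */
    } Frame;

    /* stable merge of a[lo..mid) and a[mid..hi) via @buf */
    static void
    merge (int *a, int *buf, size_t lo, size_t mid, size_t hi)
    {
        size_t i = lo, j = mid, k = lo;

        while (i < mid && j < hi) {
            /* take the left element on ties to keep the sort stable */
            buf[k++] = (a[j] < a[i]) ? a[j++] : a[i++];
        }
        while (i < mid)
            buf[k++] = a[i++];
        while (j < hi)
            buf[k++] = a[j++];
        memcpy (&a[lo], &buf[lo], (hi - lo) * sizeof (int));
    }

    void
    merge_sort_iterative (int *a, size_t n)
    {
        int *buf = malloc (n * sizeof (int));
        Frame stack[3 * sizeof (size_t) * 8];  /* bounded by recursion depth */
        size_t top = 0;

        stack[top++] = (Frame) { 0, n, 0 };
        while (top > 0) {
            Frame f = stack[--top];
            size_t mid;

            if (f.hi - f.lo <= 1)
                continue;
            mid = f.lo + (f.hi - f.lo) / 2;
            if (!f.merge_pending) {
                /* revisit this range for merging after both halves
                 * are sorted; push right and left half to split. */
                stack[top++] = (Frame) { f.lo, f.hi, 1 };
                stack[top++] = (Frame) { mid, f.hi, 0 };
                stack[top++] = (Frame) { f.lo, mid, 0 };
            } else
                merge (a, buf, f.lo, mid, f.hi);
        }
        free (buf);
    }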
We don't have a complete list of distributions that use ifcfg for
network configuration, but we can easily check for
`/etc/sysconfig/network-scripts`, and then distributions can
explicitly disable the plugin if that is what they want.
[thaller@redhat.com: cherry-picked and adjusted for rebase from
https://github.com/NetworkManager/NetworkManager/pull/49]
In the future we should probably either switch to `/etc/os-release` or
just check for `/etc/sysconfig` or something like that.
[thaller@redhat.com: cherry-picked and adjusted for rebase from
https://github.com/NetworkManager/NetworkManager/pull/49]
It is slightly less obsolete. We don't really support precise (12.04)
anyway; the build only works because we steal libndp and perhaps more
from the trusty repository.
gtkdoc uses some custom generated targets as content files. However,
there are still two problems. The first is that gtkdoc does not
support targets which are not strings. This is being fixed in the
following pull request:
https://github.com/mesonbuild/meson/pull/2806
The second issue is that the gtkdoc function produces a target which
is triggered at install time. This prevents the dependency
generation from being triggered.
This patch uses a workaround for that second issue.
https://mail.gnome.org/archives/networkmanager-list/2017-December/msg00079.html