Similar to what systemd-resolved does, introduce the concept of a
"routing" domain: a domain in the search list that is used only to
decide the interface over which a query must be forwarded, but is
not used to complete unqualified host names. Routing domains are
those starting with a tilde ('~') before the actual domain name.
Domains without the initial tilde are used both for completing
unqualified names and for the routing decision.
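A minimal sketch of how a search-list entry could be classified (the
helper is hypothetical, not existing code):

#include <glib.h>

/* A leading '~' marks a routing-only domain. The returned pointer
 * skips the tilde, so "~example.com" and "example.com" are routed
 * the same; only name completion differs. */
static const char *
classify_domain (const char *entry, gboolean *out_routing_only)
{
    *out_routing_only = (entry[0] == '~');
    return *out_routing_only ? &entry[1] : entry;
}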
The "domain" key of the D-Bus configuration dictionary specifies the
domains a configuration applies to. In DNS code we consider domains
and searches as equivalent, so they should be exported via D-Bus using
the same logic used to populate resolv.conf and for plugins.
When NM knows of the ifindex/name of the new PPP interface (through
the SetIfindex() call), it renames it. This can race with the pppd
daemon, which issues ioctl() using the interface name cached in the
global 'ifname' variable:
...
NetworkManager[27213]: <debug> [1515427406.0036] ppp-manager: set-ifindex 71
pppd[27801]: sent [CCP ConfRej id=0x1 <deflate 15> <deflate(old#) 15> <bsd v1 15>]
NetworkManager[27213]: <debug> [1515427406.0036] platform: link: setting 'ppp5' (71) name dsl-ppp
pppd[27801]: sent [IPCP ConfAck id=0x2 <addr 3.1.1.1>]
pppd[27801]: ioctl(SIOCSIFADDR): No such device (line 2473)
pppd[27801]: Interface configuration failed
pppd[27801]: Couldn't get PPP statistics: No such device
...
Fortunately, the variable is exposed to plugins, so we can turn the
SetIfindex() D-Bus call into a synchronous one and then update the
value of the 'ifname' global variable with the new interface name
assigned by NM.
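A sketch of the plugin-side flow, assuming a GDBusProxy for NM's PPP
interface is already set up (helper name and error handling are
illustrative; 'ifname' is pppd's global name cache):

#include <net/if.h>
#include <gio/gio.h>

extern char ifname[IF_NAMESIZE]; /* pppd's global; the real
                                  * declaration comes from pppd headers */

static void
set_ifindex_and_refresh_name (GDBusProxy *proxy, int ifindex)
{
    char new_name[IF_NAMESIZE];
    GVariant *ret;

    /* Synchronous call: NM has renamed the link by the time this
     * returns, so pppd's subsequent ioctl() calls see the final name. */
    ret = g_dbus_proxy_call_sync (proxy, "SetIfindex",
                                  g_variant_new ("(i)", ifindex),
                                  G_DBUS_CALL_FLAGS_NONE, -1, NULL, NULL);
    if (ret)
        g_variant_unref (ret);

    /* Refresh pppd's cached name with the name NM assigned. */
    if (if_indextoname (ifindex, new_name))
        g_strlcpy (ifname, new_name, IF_NAMESIZE);
}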
If IPV6CP terminates before IPCP, pppd enters the RUNNING phase and we
start IP configuration without having an IP interface set, which
triggers assertions.
Instead, add a SetIfindex() D-Bus method that gets called by the
plugin when pppd becomes RUNNING. The method sets the IP ifindex of
the device and starts IP configuration.
https://bugzilla.redhat.com/show_bug.cgi?id=1515829
We also unconditionally use them with autotools.
Also, the detection for have_version_script does
not seem correct to me. At least, it didn't work
with clang.
The strings holding the names used for libraries have also been
moved to dedicated variables. This way they are less error-prone:
the variables can be reused easily and any typing error is quickly
detected.
Some targets are missing dependencies on some generated sources in
the meson port. This makes the build fail due to missing source
files in a highly parallelized build.
These dependencies have been resolved by taking advantage of meson's
internal dependencies which can be used to pass source files,
include directories, libraries and compiler flags.
One such internal dependency, called `core_dep`, was already in use.
However, to avoid confusion with another new internal dependency
called `nm_core_dep`, which is used to include directories and
source files from the `libnm-core` directory, the `core_dep`
dependency has been renamed to `nm_dep`.
These changes minimize the build details that are inherited through
those dependencies, and they also improve the parallelized build.
In some cases we might want to load device plugins from multiple
directories. A special case that I have in mind is to load plugins from
build directory subdirectories in order to run NetworkManager from the
build directory.
[thaller@redhat.com: modify original patch]
The connection.mdns setting is a per-connection setting,
so one might expect that one activated device can only have
one MDNS setting at a time.
However, with certain VPN plugins (those that don't have their
own IP interface, like libreswan), the VPN configuration is merged
into the configuration of the device. So, in this case, there
might be multiple settings for one device that must be merged.
We already have a mechanism for that: NMIP4Config. Let NMIP4Config
track this piece of information. Although, strictly speaking, this
is not tied to IPv4, the alternative would be to introduce a new
object to track such data, which would be a tremendous effort and
more complicated than this.
Luckily, NMDnsManager and NMDnsPlugin are already equipped to
handle multiple NMIPConfig instances per device (IPv4 vs. IPv6,
and Device vs. VPN).
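Given the value ordering described further below (DEFAULT < NO <
RESOLVE < YES, with YES enabling the most), one plausible merge rule
when device and VPN configurations combine is to take the maximum. A
sketch with hypothetical NMIP4Config accessor names:

static void
merge_mdns (NMIP4Config *dst, const NMIP4Config *src)
{
    /* the most-enabling value wins */
    if (nm_ip4_config_mdns_get (src) > nm_ip4_config_mdns_get (dst))
        nm_ip4_config_mdns_set (dst, nm_ip4_config_mdns_get (src));
}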
Also make "connection.mdns" configurable via global defaults in
NetworkManager.conf.
Don't track the per-device configuration in NMDnsManager by the
ifname, but by the ifindex. We should consistently treat the
ifindex as the ID of a link, like the kernel does.
At the few places where we actually need the ifname, resolve it by
looking into the platform cache. That is not necessarily the same
as the ifname that is currently tracked by NMDevice, because netdev
interfaces can be renamed and NMDevice updates its link properties
with a delay. However, the platform cache has the most recent
notion of the correct interface name for an ifindex, so if we ever
hit a race here, we now handle it more correctly.
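For illustration, the lookup could look like this
(nm_platform_link_get_name() is the platform accessor; the wrapper
around it is hypothetical):

static const char *
ifindex_to_ifname (int ifindex)
{
    /* The platform cache's current notion of the link name;
     * NULL if the link no longer exists. */
    return nm_platform_link_get_name (NM_PLATFORM_GET, ifindex);
}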
This also temporarily drops support for mdns. Will be re-added next,
but differently.
We had two separate queues, one for "SetLinkDNS" and one for
"SetLinkDomains". Merge them into one, and track the operation
as part of the new RequestItem structure.
A visible change compared to before is that we now make all
requests per-interface first. Previously, we would first make all
SetLinkDNS requests (for all interfaces) and then all
SetLinkDomains requests. It feels more correct to order the
requests this way, not by type.
The reason to merge is that we will next get another operation, and
in the current scheme we would then need three GQueue instances.
While at it, refactor the code to use CList. We now need a new
struct to track the operation anyway, requiring us to allocate and
free it. Previously, we would only track the GVariant argument as
data of the GQueue.
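A sketch of the merged tracking structure (field names illustrative):

typedef struct {
    CList request_lst;     /* links the item into the single queue */
    const char *operation; /* "SetLinkDNS" or "SetLinkDomains" */
    GVariant *argument;    /* the request body, as before */
} RequestItem;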
Use a GHashTable instead of a GArray to construct the list of
@interfaces. Also, use NMCListElem instead of GList. With this,
the runtime is O(n*log(n)) instead of O(n^2).
I believe we should take care that all our code has a reasonable
runtime complexity, even if in common use-cases the number of
elements is small. This is not about performance, because we likely
expect few entries anyway, and the direct GArray implementation is
likely faster in those cases. It's about using the data structure
that best suits the access pattern.
The log(n) part comes from sorting the keys. I also believe we should
always aim for a stable behavior. When sending the D-Bus request to
resolved, the order of elements should be in ~some~ defined order.
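A sketch of the pattern with plain GLib (the real code uses
NMCListElem, but GList keeps the sketch short; helper names are
illustrative):

#include <glib.h>

static gint
cmp_ifindex (gconstpointer a, gconstpointer b)
{
    return GPOINTER_TO_INT (a) - GPOINTER_TO_INT (b);
}

static GList *
collect_sorted_ifindexes (const int *ifindexes, guint n)
{
    GHashTable *seen = g_hash_table_new (g_direct_hash, g_direct_equal);
    GList *keys;
    guint i;

    /* O(1) insert with implicit de-duplication, instead of a
     * linear scan per element. */
    for (i = 0; i < n; i++)
        g_hash_table_add (seen, GINT_TO_POINTER (ifindexes[i]));

    /* Sorting the keys once gives the O(n*log(n)) bound and a
     * stable, defined order for the D-Bus request. */
    keys = g_list_sort (g_hash_table_get_keys (seen), cmp_ifindex);
    g_hash_table_destroy (seen);
    return keys;
}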
"UNKNOWN" is not a good name. If you don't set the property
in the connection explicitly, it should be "DEFAULT".
Also, make "DEFAULT" -1. For one, that ensures that the enum's
underlying integer type is signed. Otherwise, it's cumbersome
to test "if (mdns >= DEFAULT)" because in case of unsigned types,
the compiler will warn about the check always being true.
Also, it allows for "NO" to be zero. These are no strong reasons,
but I tend to think this is better.
Also, don't make the property of NMSettingConnection a CONSTRUCT property.
Initialize the default manually in the init function.
Also, order the numeric values so that DEFAULT < NO < RESOLVE < YES with
YES being largest because it enables *the most*.
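The resulting layout, roughly (the exact spelling of the names is
illustrative):

typedef enum {
    NM_SETTING_CONNECTION_MDNS_DEFAULT = -1, /* forces a signed type */
    NM_SETTING_CONNECTION_MDNS_NO      = 0,
    NM_SETTING_CONNECTION_MDNS_RESOLVE = 1,
    NM_SETTING_CONNECTION_MDNS_YES     = 2,  /* enables the most */
} NMSettingConnectionMdns;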
Update nm-policy.c and nm-dns-manager.c so that the
connection-specific settings get propagated to the DNS manager.
Currently the only such value is the mDNS status.
Add an update_mdns() function to the DNS plugin interface. If a DNS
plugin supports mDNS, it can mark an interface with a given index
as supporting mDNS resolving, or additionally register the current
hostname.
The mDNS support is currently added only to systemd-resolved DNS plugin.
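A sketch of the hook in the plugin class (exact signature
illustrative; plugins without mDNS support leave it unset):

struct _NMDnsPluginClass {
    GObjectClass parent;

    /* ... existing vfuncs ... */

    /* Configure mDNS for the link with @ifindex; @mdns tells
     * whether to resolve only or to also announce the hostname. */
    gboolean (*update_mdns) (NMDnsPlugin *self,
                             int ifindex,
                             NMSettingConnectionMdns mdns);
};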
Kernel (as of 4.14) merely ACKs our RTM_DELQDISC and RTM_DELTFILTER, not
bothering to signal the full RTM_DEL* message unless the removal is
external to NetworkManager.
https://bugzilla.redhat.com/show_bug.cgi?id=1527197
Tests are commonly created via copy&paste. Hence, it's better to
express a certain concept explicitly via a function or macro. This
way, the implementation of the concept can be adjusted in one
place, without requiring changes to all the callers.
Also, the macro is shorter, and brevity is better for tests: it's
easier to understand what the test does without being bothered by
noise from redundant information.
Also, the macro knows better which message to expect. For example,
messages inside "src" are prepended by nm-logging.c with a level
and a timestamp. The expect macro is aware of that and tests for it:
#define NMTST_EXPECT_NM_ERROR(msg) NMTST_EXPECT_NM (G_LOG_LEVEL_MESSAGE, "*<error> [*] "msg)
This again allows the caller to ignore this prefix, but still assert
more strictly.
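A typical use in a test then looks like this (the message text and
the triggering function are placeholders):

NMTST_EXPECT_NM_ERROR ("config: failed to read configuration*");
function_under_test_that_logs_the_error ();  /* placeholder */
g_test_assert_expected_messages ();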
Note that:
- we compile some source files multiple times. Most notably those
under "shared/".
- we include a default header "shared/nm-default.h" in every source
file. This header is supposed to set up a common environment by
defining and including parts that are commonly used. As we always
include the same header, the header must behave differently
depending on whether the compilation is for libnm-core,
NetworkManager or libnm-glib. E.g. it must include <glib/gi18n.h>
or <glib/gi18n-lib.h> depending on whether we compile an
application or a library.
For that, the source files need the NETWORKMANAGER_COMPILATION #define
to behave accordingly.
Extend the define to be composed of flags. These flags are all named
NM_NETWORKMANAGER_COMPILATION_WITH_*, they indicate which part of the
build are available. E.g. when building libnm-core.la itself, then
WITH_LIBNM_CORE, WITH_LIBNM_CORE_INTERNAL, and WITH_LIBNM_CORE_PRIVATE
are available. When building NetworkManager, WITH_LIBNM_CORE_PRIVATE
is not available but the internal parts are still accessible. When
building nmcli, only WITH_LIBNM_CORE (the public part) is available.
This gives granular control over the build.
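A sketch of the flag scheme (the flag names follow the description
above; the bit values are illustrative):

#define NM_NETWORKMANAGER_COMPILATION_WITH_LIBNM_CORE          (1 << 0)
#define NM_NETWORKMANAGER_COMPILATION_WITH_LIBNM_CORE_INTERNAL (1 << 1)
#define NM_NETWORKMANAGER_COMPILATION_WITH_LIBNM_CORE_PRIVATE  (1 << 2)

/* a header can then guard its parts accordingly: */
#if !(NETWORKMANAGER_COMPILATION & NM_NETWORKMANAGER_COMPILATION_WITH_LIBNM_CORE)
#error "this header requires the public libnm-core part"
#endif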
The logging macros already prepend a "config: " prefix. Don't
repeat that in the message, otherwise we get
config: config: signal SIGHUP (no changes from disk)
Now:
config: signal: SIGHUP (no changes from disk)
The internal client asserts that the length of the client ID is not more
than MAX_CLIENT_ID_LEN. Avoid that assert by truncating the string.
Also add new nm_dhcp_client_set_client_id_*() setters that either
set the ID based on a string (in our common dhclient-specific
format), or based on binary data (as obtained from the systemd
client).
Also, add checks and assertions so that the client ID which is set
via nm_dhcp_client_set_client_id() always has a length of at least
2 (as required by RFC 2132, section 9.14).
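A hypothetical helper illustrating both the truncation and the
minimum-length check (MAX_CLIENT_ID_LEN is the existing limit):

static GBytes *
clamp_client_id (const guint8 *data, gsize len)
{
    if (len > MAX_CLIENT_ID_LEN)
        len = MAX_CLIENT_ID_LEN;           /* truncate, don't assert */
    g_return_val_if_fail (len >= 2, NULL); /* RFC 2132, section 9.14:
                                            * type octet plus at least
                                            * one data octet */
    return g_bytes_new (data, len);
}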
NMDhcpManager used a hash table to keep track of the dhcp client
instances. It never actually did a lookup of the client, the only
place where we search for an existing NMDhcpClient instance is
get_client_for_ifindex(), which just iterated over all clients.
Use a CList instead.
The only thing that one might consider a downside is that now
NMDhcpClient is aware of whether it is part of a list. Previously,
one could theoretically track a NMDhcpClient instance in multiple
NMDhcpManager instances. But that doesn't make sense, because
NMDhcpManager is a singleton. Even if we had multiple NMDhcpManager
instances, one client would still only be tracked by one manager.
This tighter coupling of NMDhcpClient and NMDhcpManager isn't
a problem.
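The lookup in get_client_for_ifindex() then becomes a plain walk of
the embedded list (the private field names are illustrative):

static NMDhcpClient *
get_client_for_ifindex (NMDhcpManager *self, int ifindex)
{
    NMDhcpManagerPrivate *priv = NM_DHCP_MANAGER_GET_PRIVATE (self);
    NMDhcpClient *client;

    c_list_for_each_entry (client, &priv->dhcp_client_lst_head,
                           dhcp_client_lst) {
        if (nm_dhcp_client_get_ifindex (client) == ifindex)
            return client;
    }
    return NULL;
}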
Instead, intern the string and cache it in the NMDeviceClass
instance. It depends entirely on the GObject type (name) anyway,
hence it should also be cached at the type.
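A sketch (the class member and the computing helper are
hypothetical):

const char *
nm_device_get_type_description (NMDevice *self)
{
    NMDeviceClass *klass = NM_DEVICE_GET_CLASS (self);

    /* computed at most once per class, shared by all instances */
    if (G_UNLIKELY (!klass->default_type_description))
        klass->default_type_description =
            g_intern_string (compute_type_description (self));
    return klass->default_type_description;
}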
nm_match_spec_device_by_pllink() does not support matching on all parameters,
unlike nm_match_spec_device(). The reason is that certain parameters are
only available when having a NMDevice instance.
Add an argument "match_device_type", so that the caller can inject the
device type to be used. Note that for NMDevice, the device-type is
nm_device_get_type_description(), which usually depends on the device
class only. The only caller of nm_match_spec_device_by_pllink() is the
wifi factory, and it already knows that it wants to create a device of
type NMDeviceWifi. Hence, it knows and can specify "wifi" as
match_device_type.
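The call site in the wifi factory then looks roughly like this
(argument order and the surrounding parameters are abridged):

match = nm_match_spec_device_by_pllink (pllink,
                                        "wifi", /* match_device_type */
                                        specs,
                                        no_match_default);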