"ipv6.method=ignore" really exists for historic reasons, from a time when
NetworkManager didn't support IPv6 autoconf and let kernel handle it.
Nowadays, we should choose an explicit mode, like "link-local" or
"disabled".
Let nm_connection_normalize() treat WireGuard and dummy profiles
differently and set the IPv6 method to "disabled".
On a dummy device we cannot do DHCP, so the default makes no sense.
This also affects `nmcli device connect dummy0`. We want the
generated profile to be normalized to no IP configuration, because
DHCP/autoconf does not work on a dummy device.
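
A minimal sketch of the new normalization with libnm (error handling
and cleanup omitted):

    NMConnection      *con   = nm_simple_connection_new();
    NMSetting         *s_con = nm_setting_connection_new();
    NMSettingIPConfig *s_ip6;

    g_object_set(s_con,
                 NM_SETTING_CONNECTION_ID, "dummy0",
                 NM_SETTING_CONNECTION_UUID, nm_utils_uuid_generate(),
                 NM_SETTING_CONNECTION_TYPE, NM_SETTING_DUMMY_SETTING_NAME,
                 NM_SETTING_CONNECTION_INTERFACE_NAME, "dummy0",
                 NULL);
    nm_connection_add_setting(con, s_con);

    /* normalization fills in the missing IP settings; for a dummy
     * profile the IPv6 method now defaults to "disabled": */
    nm_connection_normalize(con, NULL, NULL, NULL);

    s_ip6 = nm_connection_get_setting_ip6_config(con);
    g_assert_cmpstr(nm_setting_ip_config_get_method(s_ip6), ==,
                    NM_SETTING_IP6_CONFIG_METHOD_DISABLED);
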
Currently there is another problem and that command is not working. But
if that other problem were fixed, the generated profile would try
to do DHCP, fail, and retry endlessly (with backoff pauses).
That endless loop is a third problem: if `nmcli device connect` creates
a new profile, then upon failure the profile should be deleted again.
Neither of those two other problems is solved here.
I guess that, up to a point, these normalization options are hardly
used. Still, it feels right to also support them for IPv4: these
options are a sensible way to control normalization.
Add a new property to specify the minimum time interval in
milliseconds for which dynamic IP configuration should be tried before
the connection succeeds.
This property is useful, for example, if both IPv4 and IPv6 are enabled
and allowed to fail. Normally the connection succeeds as soon as
one of the two address families completes; by setting a required
timeout for e.g. IPv4, one can ensure that even if IPv6 succeeds
earlier than IPv4, NetworkManager waits some time for IPv4 before the
connection becomes active.
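
For illustration, a sketch of setting such a timeout via libnm, given a
connection "con" (a sketch only; it assumes the property is exposed on
NMSettingIPConfig as "required-timeout", in milliseconds):

    /* require that IPv4 is tried for at least 20 seconds, even if
     * IPv6 completes earlier: */
    NMSettingIPConfig *s_ip4 = nm_connection_get_setting_ip4_config(con);

    g_object_set(s_ip4,
                 NM_SETTING_IP_CONFIG_REQUIRED_TIMEOUT, 20000,
                 NULL);
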
"wifi.seen-bssid" is an unusual property, therefore very ugly due to the
inconsistency.
It is not a regular user configuration that makes sense to store to
disk or modify by the user. It gets populated by the daemon, and
stored in "/var/lib/NetworkManager/seen-bssids" file.
As such, how to convert this to/from D-Bus needs special handling.
This means that the to/from D-Bus functions will only serialize the
property when the seen-bssids are specified via
NMConnectionSerializationOptions, which is what the daemon does.
Also, the daemon ignores seen-bssids when parsing the variant.
This has the odd effect that when the client converts a setting to
GVariant, the seen-bssids get lost. That means a conversion to GVariant
and back loses information. I think that is OK in this case, because the
main point of to/from D-Bus is not to have a lossless GVariant representation
of a setting, but to transfer the setting via D-Bus between client and
daemon. And transferring seen-bssids via D-Bus only makes sense from the daemon
to the client.
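
A minimal sketch of that lossy round trip from the client's side, given
a connection "con" whose wireless setting has seen-bssids (error
handling omitted):

    GVariant     *dict;
    NMConnection *copy;

    dict = nm_connection_to_dbus(con, NM_CONNECTION_SERIALIZE_ALL);
    copy = nm_simple_connection_new_from_dbus(dict, NULL);

    /* "copy" has no seen-bssids: the client-side serialization drops
     * them, because only the daemon provides them via
     * NMConnectionSerializationOptions. */
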
"seen-bssids" primarily gets stored to "/var/lib/NetworkManager/seen-bssids",
it's not a regular property.
We want this property to be serialized/deserialized to/from GVariant,
because we expose these settings on the API like a property of the
profile. But it cannot be modified via nmcli, it cannot be stored
to ifcfg files, and it makes not sense to store it to keyfile either.
Stop doing that.
clang 3.4.2-9.el7 dislikes expressions of the form

    int v;
    struct {
        typeof(({ v; })) _field;
    } x;

and fails with:

    error: statement expression not allowed at file scope
        typeof( ({ v; }) ) _field;
                ^
That is, clang rejects this when the argument to typeof() is a statement
expression. But this is what

    nm_hash_update_val(&h, ..., NM_HASH_COMBINE_BOOLS(...))

expands to. Rework NM_HASH_COMBINE_BOOLS() to avoid the statement
expression.
We still have the static assertion for the size of the return type.
We no longer have the _nm_hash_combine_bools_type typedef. It really
wasn't needed, and the current variant is always safe without it.
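
For illustration, a hypothetical sketch of the direction of the rework
(not the actual macro): the caller names the result type explicitly, so
no typeof() on a statement expression is ever needed:

    /* hypothetical simplification of the pattern; the real macro takes
     * a variable number of bools and additionally keeps the static
     * assertion on the size of the result type. */
    #define COMBINE_BOOLS(type, b1, b2) \
        ((type) ((((type) !!(b1)) << 1) | ((type) !!(b2))))

    guint8 v = COMBINE_BOOLS(guint8, TRUE, FALSE); /* == 0x2 */
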
Fixes: 23adeed244 ('glib-aux: use NM_VA_ARGS_FOREACH() to implement NM_HASH_COMBINE_BOOLS()')
Usually, properties that are set to their default are not serialized on
D-Bus. That is, to_dbus_fcn() returns NULL.
In some cases, we explicitly want to always serialize the property. For
example, if we change behavior and the libnm default value changes,
then we want the message on D-Bus to always be explicit about the value
used, instead of relying on the default value on the receiving side.
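
A sketch of the two behaviors, using a hypothetical boolean property
and simplified signatures:

    /* default behavior: omit the property when it has the default value */
    static GVariant *
    autoconnect_to_dbus(NMSetting *setting)
    {
        gboolean v = nm_setting_connection_get_autoconnect(NM_SETTING_CONNECTION(setting));

        if (v) /* TRUE is the default */
            return NULL; /* not serialized */
        return g_variant_new_boolean(v);
    }

    /* "always serialize": emit the value even when it is the default,
     * so the receiving side need not know our default */
    static GVariant *
    autoconnect_to_dbus_always(NMSetting *setting)
    {
        return g_variant_new_boolean(
            nm_setting_connection_get_autoconnect(NM_SETTING_CONNECTION(setting)));
    }
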
Most of our NMSetting properties are based on GObject properties,
and thus the tooling to convert an NMSetting to/from GVariant consists
of getting/setting a GValue.
We can do better.
For most such properties we also define a C getter function, which
we can call with less overhead. All we need is to hook the C getter
into the property meta data.
As an example, implement it for "connection.autoconnect".
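
A sketch, with hypothetical field names, of how the C getter can be
carried in the meta data and used by a shared to_dbus implementation:

    typedef gboolean (*NMSettingGetBooleanFcn)(NMSetting *setting);

    static GVariant *
    _to_dbus_fcn_get_boolean(const NMSettInfoProperty *property_info,
                             NMSetting                *setting)
    {
        /* call the plain C getter directly, no GValue boxing */
        NMSettingGetBooleanFcn get_fcn = property_info->to_dbus_data.get_boolean;

        return g_variant_new_boolean(get_fcn(setting));
    }
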
The immediate goal of this is to reduce the overhead of to_dbus. But
note that for comparing two properties there is also a default
implementation, used by the majority of properties, which first
converts the properties to GVariant (via to_dbus_fcn) and then compares
the variants. What this commit also does is hook up the property meta
data with the C getters. This is one step towards also comparing
properties more efficiently, using the plain C getters. Likewise, the
keyfile writer uses g_object_get_property(). It too could do better.
For each property we have meta data in the form of an NMSettInfoProperty.
Each meta data also has an NMSettInfoProperty.property_type
(NMSettInfoPropertType).
The property type is supposed to define behavior common to several
properties, while the property meta data is individual to each property.
The idea is that several properties can share the same property type,
and that per-property meta data is part of NMSettInfoProperty.
The distinction is not very strong, but note that all remaining uses
of NMSettInfoPropertType.gprop_to_dbus_fcn were part of a property
type that was used for only one single property. That lack of
reusability hints at a wrong use.
Move gprop_to_dbus_fcn to the property meta data as a new field
NMSettInfoProperty.to_dbus_data.
Note that NMSettInfoPropertType.gprop_from_dbus_fcn still suffers from
the same problem. But the from-dbus side is not yet addressed.
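
Schematically (a simplified sketch: only the fields relevant here, and
the type of the new field is hypothetical):

    typedef struct {
        GVariant *(*to_dbus_fcn)(); /* shared conversion hook; args elided */
        /* gprop_to_dbus_fcn used to live here */
    } NMSettInfoPropertType;

    typedef struct {
        const char                  *name;
        GParamSpec                  *param_spec;
        const NMSettInfoPropertType *property_type; /* shareable */
        NMSettInfoGPropToDBusFcn     to_dbus_data;  /* new: per-property */
    } NMSettInfoProperty;
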
For GBytes, GEnum, GFlags and others, we need special converters from
the default GObject property to GVariant.
Previously, those were implemented by providing a special
gprop_to_dbus_fcn hook. But gprop_to_dbus_fcn should move
from NMSettInfoPropertType to NMSettInfoProperty, because it is
usually per-property meta data, not per-property-type meta data.
The difference is whether the meta data can be shared between different
properties (of the same "type").
In these cases, this extra information is indeed part of the type.
We want to have a generic NM_SETT_INFO_PROPERT_TYPE_GPROP() property
type (using _nm_setting_property_to_dbus_fcn_gprop()), but then we would
like to distinguish between special cases. So this was fine.
However, I find the approach of providing a gprop_to_dbus_fcn in this
case cumbersome. It makes it harder to understand what happens. Instead,
introduce a new "gprop_type" for the different types that
_nm_setting_property_to_dbus_fcn_gprop() can handle.
This new "gprop_type" is extra data of the property type, so
introduce a new field "typdata_to_dbus".
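
Schematically (the enum values here are illustrative, not the exact
names):

    typedef enum {
        GPROP_TYPE_DEFAULT, /* plain GValue -> GVariant conversion */
        GPROP_TYPE_BYTES,   /* GBytes -> "ay" */
        GPROP_TYPE_ENUM,    /* GEnum  -> "i"  */
        GPROP_TYPE_FLAGS,   /* GFlags -> "u"  */
    } GPropType;

    /* _nm_setting_property_to_dbus_fcn_gprop() switches on
     * property_type->typdata_to_dbus.gprop_type, instead of each
     * special case supplying its own gprop_to_dbus_fcn hook. */
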
The advantage is that we use similar macros for initializing the
static structs, like

    const NMSettInfoPropertType nm_sett_info_propert_type_cloned_mac_address;

and the ad-hoc locations that use NM_SETT_INFO_PROPERT_TYPE().
The former exist for property types that are used more than once;
the latter exist for convenience, where a property type is implemented
in only one place.
Also, there are now few direct references to
_nm_setting_property_to_dbus_fcn_gprop(): all users go through
NM_SETT_INFO_PROPERT_TYPE_GPROP() or NM_SETT_INFO_PROPERT_TYPE_GPROP_INIT().
If a property can be converted to D-Bus, then always set the
to_dbus_fcn() handler. The only caller of to_dbus_fcn() is
property_to_dbus(), so this means that property_to_dbus()
no longer has a default implementation and always delegates to
to_dbus_fcn(), as sketched below.
The code is easier to understand if all properties implement
to_dbus_fcn() the same way.
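
Schematically (simplified; arguments elided):

    static GVariant *
    property_to_dbus(NMSetting *setting, const NMSettInfoProperty *property_info)
    {
        /* no fallback path anymore: every serializable property
         * provides a to_dbus_fcn() */
        nm_assert(property_info->property_type->to_dbus_fcn);

        return property_info->property_type->to_dbus_fcn(setting, property_info);
    }
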
Also, there is supposed to be a split between NMSettInfoProperty (info about
the property) and NMSettInfoPropertType (the type). The idea is that
each property (obviously) requires its own distinct NMSettInfoProperty,
but several properties can share a common type implementation.
NMSettInfoPropertType.gprop_to_dbus_fcn often violates that, because
many properties that implement it require a special type implementation.
As such, gprop_to_dbus_fcn should be part of the property info and not
the property type. The first step towards that is unifying all
properties to use to_dbus_fcn().
The buffer created here is only temporary, used to construct the property
info via _nm_setting_class_commit_full(). We can afford to allocate more
than necessary if we thereby avoid several reallocations.
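
For example (hypothetical names; the reserved size is only an
upper-bound guess):

    /* scratch array, consumed by _nm_setting_class_commit_full();
     * over-sizing once is cheaper than growing it repeatedly */
    properties_override = g_array_sized_new(FALSE, FALSE,
                                            sizeof(NMSettInfoProperty),
                                            reserved_upper_bound);
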
Not very useful, but it seems nicer to read. They can anyway be
inlined. After all, naming and structure are important, and the places
where we emit signals are important. With well-named helper functions,
these places are easier to find and reason about.
NMConnection is a GLib interface, implemented only by NMSimpleConnection
and NMRemoteConnection.
Inside the daemon, every NMConnection instance is always an NMSimpleConnection.
Using GLib interfaces has an overhead: for example, NM_IS_CONNECTION() needs
to search the implemented types for the pointer, and NM_CONNECTION_GET_PRIVATE()
is implemented by attaching user data to the GObject instance. Both have
measurable overhead.
Special-case them for NMSimpleConnection, as sketched below.
This primarily optimizes the call to nm_connection_get_setting_connection(),
which easily gets called millions of times. This is easily measurable.
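
A sketch of the fast path (helper names are hypothetical):

    static inline NMConnectionPrivate *
    _connection_get_private(NMConnection *connection)
    {
        /* in the daemon this branch is always taken, skipping the
         * user-data based private lookup of the interface: */
        if (G_LIKELY(NM_IS_SIMPLE_CONNECTION(connection)))
            return _nm_simple_connection_get_private(connection); /* hypothetical */

        return NM_CONNECTION_GET_PRIVATE(connection); /* generic, slower path */
    }
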
The NM_TYPE_SETTING_* macros are really function calls (returning a GType
from a static gsize that is guarded by an atomic operation for thread-safe
initialization). Also, finding the setting_info based on the GType requires
additional lookups.
That is no longer necessary: we can directly find the setting using the
well-known index, as shown below.
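
That is (the replacement helper name is hypothetical):

    /* before: GType-based; NM_TYPE_SETTING_WIRED expands to a call to
     * nm_setting_wired_get_type() */
    s_wired = nm_connection_get_setting(connection, NM_TYPE_SETTING_WIRED);

    /* after: direct lookup via the well-known NMMetaSettingType index */
    s_wired = _connection_get_setting_by_meta_type(connection,
                                                   NM_META_SETTING_TYPE_WIRED);
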
An NMConnection tracks a list of NMSetting instances. For
each setting type, it can only track one instance, as is
clear from the API nm_connection_get_setting().
The number of different setting types is known at compile time;
currently it is 52. Also, we have an NMMetaSettingType enum,
which assigns each type a number.
Previously, we were tracking the settings in a GHashTable.
Rework that to instead use a fixed-size array, as sketched below.
Now every NMConnection instance consumes 52 * sizeof(pointer)
for the settings array. Previously, the GHashTable required mallocing
the "struct _GHashTable" (on 64 bit that is about the size of 12
pointers), and for N settings it allocated two buffers (for
the keys and the values) plus one buffer for the hash values. So
it may or may not consume a bit more memory now, but it can also look up
settings directly without hashing.
When looking at all settings, we iterate the entire array. Most
entries will be NULL, so it's a question whether this could be done
better. But as the array is of a fixed, small size, naive iteration
is probably still faster and simpler than anything else.
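
Schematically (simplified; the count macro name may differ):

    typedef struct {
        /* one slot per known setting type, indexed by NMMetaSettingType;
         * most slots are NULL for a typical profile */
        NMSetting *settings[_NM_META_SETTING_TYPE_NUM];
    } NMConnectionPrivate;

    /* lookup becomes a direct array access, without hashing: */
    s_con = (NMSettingConnection *) priv->settings[NM_META_SETTING_TYPE_CONNECTION];
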
---
Test: compiled with -O2, x86_64:
    $ T=src/core/settings/plugins/ifcfg-rh/tests/test-ifcfg-rh; \
        make -j 8 "$T" && \
        "$T" 1>/dev/null && \
        perf stat -r 200 -B "$T" 1>/dev/null
Before:

    Performance counter stats for 'src/core/settings/plugins/ifcfg-rh/tests/test-ifcfg-rh' (200 runs):

           338.39 msec task-clock:u        #    0.962 CPUs utilized       ( +- 0.68% )
                0      context-switches:u  #    0.000 K/sec
                0      cpu-migrations:u    #    0.000 K/sec
            1,121      page-faults:u       #    0.003 M/sec               ( +- 0.03% )
    1,060,001,815      cycles:u            #    3.132 GHz                 ( +- 0.50% )
    1,877,905,122      instructions:u      #    1.77  insn per cycle      ( +- 0.01% )
      374,065,113      branches:u          # 1105.429 M/sec               ( +- 0.01% )
        6,862,991      branch-misses:u     #    1.83% of all branches     ( +- 0.36% )

          0.35185 +- 0.00247 seconds time elapsed  ( +- 0.70% )
After:

    Performance counter stats for 'src/core/settings/plugins/ifcfg-rh/tests/test-ifcfg-rh' (200 runs):

           328.07 msec task-clock:u        #    0.959 CPUs utilized       ( +- 0.39% )
                0      context-switches:u  #    0.000 K/sec
                0      cpu-migrations:u    #    0.000 K/sec
            1,130      page-faults:u       #    0.003 M/sec               ( +- 0.03% )
    1,034,858,368      cycles:u            #    3.154 GHz                 ( +- 0.33% )
    1,846,714,951      instructions:u      #    1.78  insn per cycle      ( +- 0.00% )
      369,754,267      branches:u          # 1127.052 M/sec               ( +- 0.01% )
        6,594,396      branch-misses:u     #    1.78% of all branches     ( +- 0.23% )

          0.34193 +- 0.00145 seconds time elapsed  ( +- 0.42% )