Not very useful, but it reads nicer. They can be inlined anyway.
After all, naming and structure matter, and the places
where we emit signals are important. Having well-named helper
functions makes these places easier to find and reason about.
(cherry picked from commit 60957a4c8a)
NMConnection is a GLib interface, implemented only by NMSimpleConnection
and NMRemoteConnection.
Inside the daemon, every NMConnection instance is always an NMSimpleConnection.
Using GLib interfaces has an overhead: for example, NM_IS_CONNECTION() needs
to search the implemented types for the pointer, and NM_CONNECTION_GET_PRIVATE()
is implemented by attaching user data to the GObject instance. Both have
measurable overhead.
Special-case them for NMSimpleConnection.
This primarily optimizes nm_connection_get_setting_connection(),
which easily gets called millions of times. The difference is measurable.
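As a rough illustration (a minimal sketch with a made-up helper name,
not the actual patch), the idea is to check for the concrete
NMSimpleConnection type first and only fall back to the generic
interface check:

    #include <NetworkManager.h>

    /* Hypothetical helper: fast-path the type check for the concrete
     * NMSimpleConnection class before doing the slower interface check
     * that walks the implemented types. */
    static inline gboolean
    _is_connection_fast (gpointer obj)
    {
        if (NM_IS_SIMPLE_CONNECTION (obj))
            return TRUE;
        return NM_IS_CONNECTION (obj);
    }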
(cherry picked from commit 7a71aedf46)
The NM_TYPE_SETTING_* macros are really function calls (returning a GType
stored in a gsize that is guarded by an atomic operation for thread-safe
initialization). Also, finding the setting_info based on the GType requires
additional lookups.
That is no longer necessary. We can find the setting directly using the
well-known index.
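To sketch the idea (the names below are illustrative placeholders, not
the actual NMMetaSettingInfo internals): with a well-known index per
setting type, the lookup becomes a plain array access, with no GType
resolution and no searching:

    #include <glib.h>

    /* Hypothetical mirror of an NMMetaSettingType-style enum: each
     * setting type gets a fixed, well-known index. */
    typedef enum {
        EX_META_SETTING_TYPE_CONNECTION,
        EX_META_SETTING_TYPE_WIRED,
        /* ... one entry per setting type ... */
        _EX_META_SETTING_TYPE_NUM,
    } ExMetaSettingType;

    typedef struct {
        const char *setting_name;
    } ExMetaSettingInfo;

    static const ExMetaSettingInfo ex_meta_setting_infos[_EX_META_SETTING_TYPE_NUM] = {
        [EX_META_SETTING_TYPE_CONNECTION] = { .setting_name = "connection" },
        [EX_META_SETTING_TYPE_WIRED]      = { .setting_name = "802-3-ethernet" },
    };

    /* O(1): no GType function call, no atomic once-init, no extra lookup. */
    static const ExMetaSettingInfo *
    ex_meta_setting_info_get (ExMetaSettingType meta_type)
    {
        g_return_val_if_fail (meta_type < _EX_META_SETTING_TYPE_NUM, NULL);
        return &ex_meta_setting_infos[meta_type];
    }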
(cherry picked from commit 97eef2bf6d)
An NMConnection tracks a list of NMSetting instances. For
each setting type, it can track only one instance, as is
clear from the API nm_connection_get_setting().
The number of different setting types is known at compile time,
currently it is 52. Also, we have an NMMetaSettingType enum,
which assigns each type a number.
Previously, we were tracking the settings in a GHashTable.
Rework that to instead use a fixed-size array.
Now every NMConnection instance consumes 52 * sizeof(pointer)
for the settings array. Previously, the GHashTable required mallocing
the "struct _GHashTable" (on 64 bit that is about the size of 12
pointers), and for N settings it allocated two buffers (for
the keys and the values) plus one buffer for the hash values. So,
it may or may not consume a bit more memory now, but it can also
look up settings directly without hashing.
When looking at all settings, we iterate the entire array. Most
entries will be NULL, so it's a question whether this could be done
better. But as the array is of a fixed, small size, naive iteration
is probably still faster and simpler than anything else.
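A minimal sketch of the new layout (illustrative struct and helper
names, not the actual NMConnectionPrivate):

    #include <glib.h>

    #define EX_SETTINGS_NUM 52  /* number of known setting types, a compile-time constant */

    typedef struct {
        /* settings[i] holds the setting with well-known index i, or NULL */
        gpointer settings[EX_SETTINGS_NUM];
    } ExConnectionPrivate;

    /* Direct lookup: no hashing, no per-connection hash table allocation. */
    static gpointer
    ex_connection_get_setting_by_index (ExConnectionPrivate *priv, guint idx)
    {
        g_return_val_if_fail (idx < EX_SETTINGS_NUM, NULL);
        return priv->settings[idx];
    }

    /* Visiting all settings walks the whole (small, fixed-size) array;
     * most entries are NULL and get skipped. */
    static guint
    ex_connection_count_settings (ExConnectionPrivate *priv)
    {
        guint i, n = 0;

        for (i = 0; i < EX_SETTINGS_NUM; i++) {
            if (priv->settings[i])
                n++;
        }
        return n;
    }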
---
Test: compiled with -O2, x86_64:
$ T=src/core/settings/plugins/ifcfg-rh/tests/test-ifcfg-rh; \
make -j 8 "$T" && \
"$T" 1>/dev/null && \
perf stat -r 200 -B "$T" 1>/dev/null
Before:
Performance counter stats for 'src/core/settings/plugins/ifcfg-rh/tests/test-ifcfg-rh' (200 runs):
338.39 msec task-clock:u # 0.962 CPUs utilized ( +- 0.68% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
1,121 page-faults:u # 0.003 M/sec ( +- 0.03% )
1,060,001,815 cycles:u # 3.132 GHz ( +- 0.50% )
1,877,905,122 instructions:u # 1.77 insn per cycle ( +- 0.01% )
374,065,113 branches:u # 1105.429 M/sec ( +- 0.01% )
6,862,991 branch-misses:u # 1.83% of all branches ( +- 0.36% )
0.35185 +- 0.00247 seconds time elapsed ( +- 0.70% )
After:
Performance counter stats for 'src/core/settings/plugins/ifcfg-rh/tests/test-ifcfg-rh' (200 runs):
328.07 msec task-clock:u # 0.959 CPUs utilized ( +- 0.39% )
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
1,130 page-faults:u # 0.003 M/sec ( +- 0.03% )
1,034,858,368 cycles:u # 3.154 GHz ( +- 0.33% )
1,846,714,951 instructions:u # 1.78 insn per cycle ( +- 0.00% )
369,754,267 branches:u # 1127.052 M/sec ( +- 0.01% )
6,594,396 branch-misses:u # 1.78% of all branches ( +- 0.23% )
0.34193 +- 0.00145 seconds time elapsed ( +- 0.42% )
(cherry picked from commit 91aacbef41)
So far we did not verify the secondary connections at all,
but they really are supposed to be UUIDs.
As we now also normalize "connection.uuid" to be in a strict
format, the user might have profiles with non-normalized UUIDs.
In that case, the "connection.uuid" would be normalized, but
"connection.secondaries" no longer matches. We can fix that by
also normalizing "connection.secondaries". OK, this is not a very good
reason, because it's unlikely to affect any users in practice (though
it's easy to reproduce).
A better reason is that the secondary setting really should be well
defined and verified. As we didn't do that so far, we cannot simply
outright reject invalid settings. What this patch does instead is
silently change the profile to only contain valid settings.
That has its own problems, for example a user who sets an invalid
value gets neither an error nor the desired(?) outcome.
But of all the bad choices, normalizing seems the most sensible
one.
Note that in practice, most client applications don't rely on setting
arbitrary (invalid) "UUIDs". They simply expect to be able to set valid
UUIDs, and they still can. For example, nm-connection-editor presents
a drop-down list of VPN profiles, and nmcli also resolves connection IDs
to the UUID. That is, clients already have an intimate understanding of
this setting, and don't blindly set arbitrary values. Hence, this
normalization is unlikely to hit users in practice. But what it gives
is the guarantee that a verified connection only contains valid UUIDs.
Now all UUIDs will be normalized, invalid entries removed, and the list
made unique.
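A rough sketch of such a normalization step (a hypothetical helper
built on plain GLib, not the actual libnm code; g_uuid_string_is_valid()
only accepts the canonical 36-character format):

    #include <glib.h>

    /* Lower-case every entry, drop entries that are not valid UUIDs and
     * drop duplicates, keeping the order of first occurrence. */
    static GPtrArray *
    normalize_secondaries (const char *const *secondaries)
    {
        GPtrArray  *result = g_ptr_array_new_with_free_func (g_free);
        GHashTable *seen   = g_hash_table_new (g_str_hash, g_str_equal);
        gsize       i;

        for (i = 0; secondaries && secondaries[i]; i++) {
            char *lower = g_ascii_strdown (secondaries[i], -1);

            if (g_uuid_string_is_valid (lower)
                && !g_hash_table_contains (seen, lower)) {
                g_hash_table_add (seen, lower);   /* membership check only, not owned */
                g_ptr_array_add (result, lower);  /* result owns the string */
            } else {
                g_free (lower);
            }
        }
        g_hash_table_unref (seen);
        return result;
    }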
For NetworkManager profiles, "connection.uuid" is the identifier of the
profile. It is supposed to be a UUID, however:
- the UUID was not ensured to be all lower-case. We should make sure
that our UUIDs are formatted consistently, so that users can rely
on the format of the string.
- the UUID was never actually interpreted as a UUID. It was only an
opaque string that we used as an identifier. We had nm_utils_is_uuid(),
which checks that the format is valid; however, it did not fully
validate the format, for example it would accept "----7daf444dd78741a59e1ef1b3c8b1c0e8"
and "549fac10a25f4bcc912d1ae688c2b4987daf444d" (40 hex characters).
Both invalid UUIDs and non-normalized UUIDs should be normalized. We
don't want to break existing profiles that use such UUIDs, thus we don't
outright reject them. Let's instead mangle them during
nm_connection_normalize().
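To make the examples above concrete (a self-contained GLib snippet, not
the libnm implementation; the valid UUID below is just a sample in the
canonical 8-4-4-4-12 form):

    #include <glib.h>

    int
    main (void)
    {
        /* canonical, normalized form: 32 lower-case hex digits with
         * dashes at fixed positions */
        g_assert (g_uuid_string_is_valid ("549fac10-a25f-4bcc-912d-1ae688c2b498"));

        /* the two strings quoted above contain hex digits and dashes,
         * but are not valid UUIDs */
        g_assert (!g_uuid_string_is_valid ("----7daf444dd78741a59e1ef1b3c8b1c0e8"));
        g_assert (!g_uuid_string_is_valid ("549fac10a25f4bcc912d1ae688c2b4987daf444d"));

        return 0;
    }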
"libnm-core/" is rather complicated. It provides a static library that
is linked into libnm.so and NetworkManager. It also contains public
headers (like "nm-setting.h") which are part of the public libnm API.
Then we have helper libraries ("libnm-core/nm-libnm-core-*/") which
only rely on the public API of libnm-core, but are themselves static
libraries that can be used by anybody who uses libnm-core. And
"libnm-core/nm-libnm-core-intern" is used by libnm-core itself.
Move "libnm-core/" to "src/", but also split it into different
directories so that they have a clearer purpose.
The goal is to have a flat directory hierarchy. The "src/libnm-core*/"
directories correspond to the different modules (static libraries and sets
of headers that we have). We have different kinds of such modules because
of how we combine various code together. The directory layout now reflects
this.
Renamed from libnm-core/nm-connection.c