30. NetworkManager-1.9.2/src/settings/plugins/keyfile/nms-keyfile-writer.c:218:
check_return: Calling "g_mkdir_with_parents" without checking return
value (as is done elsewhere 4 out of 5
times).
25. NetworkManager-1.9.2/src/platform/nm-linux-platform.c:3969:
check_return: Calling "_nl_send_nlmsg" without checking return value (as
is done elsewhere 4 out of 5 times).
34. NetworkManager-1.9.2/src/nm-core-utils.c:2843:
negative_returns: "fd2" is passed to a parameter that cannot be negative.
26. NetworkManager-1.9.2/src/devices/wwan/nm-modem-broadband.c:897:
check_return: Calling "nm_utils_parse_inaddr_bin" without checking
return value (as is done elsewhere 4 out of 5 times).
3. NetworkManager-1.9.2/src/devices/bluetooth/nm-bluez5-manager.c:386:
check_return: Calling "g_variant_lookup" without checking return value
(as is done elsewhere 79 out of 83 times).
16. NetworkManager-1.9.2/libnm-util/nm-setting.c:405:
check_return: Calling "nm_g_object_set_property" without checking return
value (as is done elsewhere 4 out of 5 times).
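Most of the check_return findings above are addressed by handling (or explicitly deciding to ignore) the return value. A minimal sketch for the g_mkdir_with_parents() case; the helper name, directory argument and error domain are illustrative, not the actual NetworkManager fix:

    #include <errno.h>
    #include <glib.h>

    /* hypothetical helper; the error domain and message are illustrative */
    static gboolean
    ensure_dir_exists (const char *dir, GError **error)
    {
        if (g_mkdir_with_parents (dir, 0755) != 0) {
            g_set_error (error, G_FILE_ERROR, g_file_error_from_errno (errno),
                         "cannot create directory %s: %s", dir, g_strerror (errno));
            return FALSE;
        }
        return TRUE;
    }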
and nm_utils_ip6_property_path(). The API with static buffers
looks a bit nicer. But I think such static buffers are dangerous, because
we tend to pass the buffer down several layers of the stack, and
it's not immediately clear that we don't overwrite the static
buffer again in the meantime (we probably don't, but it's hard to verify
that there is no bug there).
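To illustrate the hazard (the signature of nm_utils_ip6_property_path() is assumed here; only the static-buffer behavior matters):

    /* hypothetical: the function returns a pointer into a static buffer */
    const char *path1 = nm_utils_ip6_property_path ("eth0", "hop_limit");
    const char *path2 = nm_utils_ip6_property_path ("eth1", "hop_limit");

    /* path1 and path2 now point at the same static buffer, so path1 silently
     * refers to the eth1 value.  Once such a pointer is passed down several
     * layers of the stack, this is easy to get wrong and hard to verify. */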
In many scenarios, we have no use for the file descriptor
after nm_utils_fd_get_contents(). We just want to read it
and close it.
API-wise, it would be nicest if the get_contents() function never
closed the passed-in fd, and closing it were always the caller's
responsibility. However, that costs an additional dup() syscall, which
can be avoided if we allow the function to (optionally) close
the file descriptor.
The function should not close the input file descriptor; however,
fdopen() associates the fd with the new stream, so that when the stream
is closed, the fd is closed too. The result is a double close(), and the
second close() can in certain cases hit an unrelated fd.
Use a duplicate fd for the stream.
Fixes: 1d9bdad1df
https://bugzilla.redhat.com/show_bug.cgi?id=1451236
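A sketch of that pattern (simplified; error handling reduced to the essentials):

    #include <stdio.h>
    #include <unistd.h>

    /* Give the stream its own duplicate of @fd, so that fclose() only closes
     * the duplicate and the caller's fd stays valid (and is not closed twice). */
    static FILE *
    open_stream_keep_fd (int fd)
    {
        int fd2 = dup (fd);
        FILE *f;

        if (fd2 < 0)
            return NULL;
        f = fdopen (fd2, "r");
        if (!f)
            close (fd2);   /* don't leak the duplicate on failure */
        return f;
    }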
By using a macro, we don't cast all the types to guint. Instead,
we use their native types directly. Hence, we don't need
nm_hash_update_uint64() nor nm_hash_update_ptr().
Also, for types smaller than guint, like char, we avoid hashing
the extra all-zero bytes.
The previous NM_HASH_* macros directly operated on a guint value
and were thus close to the actual implementation.
Replace them by adding a NMHashState struct and accessors to
update the hash state. This hides the implementation better
and would allow us to carry more state. For example, we could
switch to siphash24() transparently.
For now, we still do basically a form of djb2 hashing, albeit with
a differing start seed.
Also add nm_hash_str() and nm_str_hash():
- nm_hash_str() is our own string hashing implementation
- nm_str_hash() is our own string hashing implementation, but with a
GHashFunc signature, suitable for passing to g_hash_table_new().
Also, it has this name in order to remind you of g_str_hash(),
which it replaces.
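A rough sketch of the shape of this API. The struct layout, seed and mixing below are illustrative only; the point is that the state is opaque to callers, so the implementation could later switch to siphash24() without touching them:

    #include <glib.h>

    typedef struct {
        guint hash;
    } NMHashState;

    static inline void
    nm_hash_init (NMHashState *state, guint seed)
    {
        state->hash = seed;
    }

    /* our own string hashing implementation (djb2-style, with a seed) */
    static inline guint
    nm_hash_str (const char *str)
    {
        NMHashState h;

        nm_hash_init (&h, 1867854211u);   /* arbitrary illustrative seed */
        if (str) {
            for (; *str; str++)
                h.hash = ((h.hash << 5) + h.hash) + (guint8) *str;
        }
        return h.hash;
    }

    /* same, but with a GHashFunc signature, so it can replace g_str_hash()
     * in g_hash_table_new (nm_str_hash, g_str_equal). */
    static guint
    nm_str_hash (gconstpointer str)
    {
        return nm_hash_str (str);
    }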
"nm-utils/nm-shared-utils.h" shall contain utility function without other
dependencies. It is intended to be used by other projects as-is.
nm_utils_random_bytes() requires getrandom() and a HAVE_GETRANDOM configure
check. That makes it more cumbersome to re-use "nm-shared-utils.h", in
cases where you don't care about nm_utils_random_bytes().
Split nm_utils_random_bytes() out to a separate file.
Same for hash utils, which depend on nm_utils_random_bytes(). Also, hash
utils will eventually be extended to use siphash24.
Introduce a NM_HASH_INIT() function. It makes the places
where we initialize a hash with a certain seed visually clear.
Also, move them from "shared/nm-utils/nm-shared-utils.h" to
"shared/nm-utils/nm-macros-internal.h". We might want to
have NM_HASH_INIT() non-inline (hence, define it in the
source file).
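A hypothetical before/after of a call site, just to show what "visually clear" means here (the seed is arbitrary):

    guint hash_old = 5381;                  /* before: a bare magic number       */
    guint hash_new = NM_HASH_INIT (5381);   /* after: clearly marked as the seed */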
Add a new function nm_utils_random_bytes().
This function preferably uses the getrandom() syscall, if it is
available.
As a fallback, it tries to fill the buffer from /dev/urandom.
If that fails too, as a last resort it uses GRand, which cannot fail.
Hence, the function always sets some (pseudo) random bytes.
It also returns FALSE if the obtained bytes are possibly not good
randomness.
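A condensed sketch of that fallback chain. The getrandom() handling and error paths are simplified, and HAVE_GETRANDOM stands in for the configure check:

    #include <fcntl.h>
    #include <unistd.h>
    #include <glib.h>
    #if HAVE_GETRANDOM
    #include <sys/random.h>
    #endif

    /* Sketch only: always fills @p with @n bytes; returns whether the bytes
     * are believed to be good randomness. */
    gboolean
    nm_utils_random_bytes (void *p, size_t n)
    {
        guint8 *buf = p;
        size_t done = 0;
        int fd;

    #if HAVE_GETRANDOM
        if (getrandom (buf, n, GRND_NONBLOCK) == (ssize_t) n)
            return TRUE;
    #endif

        /* fallback: /dev/urandom */
        fd = open ("/dev/urandom", O_RDONLY | O_CLOEXEC);
        if (fd >= 0) {
            while (done < n) {
                ssize_t r = read (fd, &buf[done], n - done);

                if (r <= 0)
                    break;
                done += (size_t) r;
            }
            close (fd);
            if (done == n)
                return TRUE;
        }

        /* last resort: GRand cannot fail, but is only pseudo random.
         * Fill the (remaining) buffer and report weak randomness. */
        {
            GRand *grand = g_rand_new ();

            for (; done < n; done++)
                buf[done] = (guint8) g_rand_int_range (grand, 0, 256);
            g_rand_free (grand);
        }
        return FALSE;
    }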
We encounter the same enum in 3 forms:
- NMNDiscPreference in NetworkManager
- "enum ndp_route_preference" in <ndp.h>
- ICMPV6_ROUTER_PREF_* in <linux/icmpv6.h>
Move our enum to nm-core-utils.h, so that it can be used
by platform code as well (platform code should not include
ndisc/nm-ndisc.h).
Also, NMNDiscPreference was not numerically identical to the
native values (meaning: it shuffled the names and numbers).
Make them all numerically equal, so that they can be used in
the same context.
This means that, while previously we could compare NMNDiscPreference
values directly according to their priority, we now need _preference_to_priority().
On the other hand, we could omit translate_preference() -- but actually,
we still need _route_preference_coerce(), because pref comes from libndp
and is thus untrusted. We still have to range-check it.
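Sketch of the idea. The numeric values are the ones from <linux/icmpv6.h> (per RFC 4191); the NM-side names and helper bodies are illustrative:

    /* values from <linux/icmpv6.h> (RFC 4191):
     *   ICMPV6_ROUTER_PREF_MEDIUM  0x0
     *   ICMPV6_ROUTER_PREF_HIGH    0x1
     *   ICMPV6_ROUTER_PREF_INVALID 0x2
     *   ICMPV6_ROUTER_PREF_LOW     0x3
     */
    typedef enum {
        NM_ICMPV6_ROUTER_PREF_MEDIUM  = 0x0,   /* numerically equal to kernel/libndp */
        NM_ICMPV6_ROUTER_PREF_HIGH    = 0x1,
        NM_ICMPV6_ROUTER_PREF_INVALID = 0x2,
        NM_ICMPV6_ROUTER_PREF_LOW     = 0x3,
    } NMIcmpv6RouterPref;

    /* pref comes from libndp and is untrusted: range-check it */
    static NMIcmpv6RouterPref
    _route_preference_coerce (int pref)
    {
        switch (pref) {
        case NM_ICMPV6_ROUTER_PREF_LOW:
        case NM_ICMPV6_ROUTER_PREF_MEDIUM:
        case NM_ICMPV6_ROUTER_PREF_HIGH:
            return (NMIcmpv6RouterPref) pref;
        }
        return NM_ICMPV6_ROUTER_PREF_INVALID;
    }

    /* the enum values no longer encode priority order, so map explicitly */
    static int
    _preference_to_priority (NMIcmpv6RouterPref pref)
    {
        switch (pref) {
        case NM_ICMPV6_ROUTER_PREF_LOW:    return 1;
        case NM_ICMPV6_ROUTER_PREF_MEDIUM: return 2;
        case NM_ICMPV6_ROUTER_PREF_HIGH:   return 3;
        default:                           return 0;
        }
    }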
- merge the IPv4 and IPv6 implementations. They are for the most
part identical. Also, they are independent of NMIP4Config/NMIP6Config.
- parse the entire file at once. Don't parse it twice, once for the
name servers and once for the options. This also avoids loading
/etc/resolv.conf twice, as was done before (see the sketch below).
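A minimal sketch of the single-pass approach; file handling and tokenizing are simplified and the helper is hypothetical, not the actual parser:

    #include <string.h>
    #include <glib.h>

    static void
    parse_resolv_conf (const char *path, GPtrArray *nameservers, GPtrArray *options)
    {
        char *contents = NULL;
        char **lines, **iter;

        if (!g_file_get_contents (path, &contents, NULL, NULL))
            return;

        lines = g_strsplit_set (contents, "\r\n", -1);
        for (iter = lines; *iter; iter++) {
            char *line = g_strstrip (*iter);

            /* collect name servers and options in the same pass, instead of
             * reading and parsing the file twice */
            if (g_str_has_prefix (line, "nameserver"))
                g_ptr_array_add (nameservers, g_strdup (g_strstrip (line + strlen ("nameserver"))));
            else if (g_str_has_prefix (line, "options"))
                g_ptr_array_add (options, g_strdup (g_strstrip (line + strlen ("options"))));
        }
        g_strfreev (lines);
        g_free (contents);
    }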
Allow passing a pretty name for the zero flag value 0, like "none".
Also, don't require flags to be powers of two. Instead, allow names for
combinations of multiple flags, for example an "all" name. By specifying multi-value
flags first, their nick will supersede the more specific flags.
Probably that doesn't make sense in usual cases, but nm_utils_flags2str()
should not prevent such use.
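A sketch of how such a flags-to-string lookup can behave. The descriptor type and helper below are illustrative, not the exact nm_utils_flags2str() API:

    #include <glib.h>

    typedef struct {
        unsigned    flag;
        const char *name;
    } FlagDesc;

    static void
    flags_to_str (GString *out, const FlagDesc *descs, gsize n_descs, unsigned flags)
    {
        gsize i;

        for (i = 0; i < n_descs; i++) {
            if (descs[i].flag == 0) {
                /* the zero value may have a pretty name too, like "none" */
                if (flags == 0)
                    g_string_append (out, descs[i].name);
                continue;
            }
            /* multi-bit descriptors (like an "all" name) listed first consume
             * all of their bits, superseding the more specific names */
            if (flags != 0 && (flags & descs[i].flag) == descs[i].flag) {
                if (out->len > 0)
                    g_string_append_c (out, ',');
                g_string_append (out, descs[i].name);
                flags &= ~descs[i].flag;
            }
        }
    }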
This will let us stop worrying about how relevant chunks of compat code
for older kernels are when deciding whether they are worth supporting/testing.
As if we actually were testing old kernels.
For convenience, to clear the address in place, allow leaving @src NULL
instead of requiring the caller to set @src to @dst.
The only problem is: if you make use of this extended behavior and later backport
that use to an older branch, ensure that you cherry-pick this commit too.
That is easy to miss, but you are testing the backport, right?
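Roughly what the extended convention looks like; the helper below is a hypothetical stand-in for the real address-clearing function:

    #include <netinet/in.h>
    #include <glib.h>

    /* keep the first @plen bits of @src in @dst and zero the host part.
     * @src == NULL now means "operate on @dst in place" instead of
     * requiring src == dst. */
    static void
    clear_host_address (struct in6_addr *dst, const struct in6_addr *src, guint8 plen)
    {
        guint i;

        g_return_if_fail (dst);
        g_return_if_fail (plen <= 128);

        if (!src)
            src = dst;

        for (i = 0; i < 16; i++) {
            guint8 b = src->s6_addr[i];

            if (plen >= 8)
                plen -= 8;
            else if (plen > 0) {
                b &= (guint8) (0xFFu << (8 - plen));
                plen = 0;
            } else
                b = 0;
            dst->s6_addr[i] = b;
        }
    }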
Using plain numbers makes it cumbersome to grep for
setting types by priority.
The only downside is that with the enum values it
is no longer obvious which value has higher or lower
priority.
Also, introduce NM_SETTING_PRIORITY_INVALID. This is what
_nm_setting_type_get_base_type_priority() returns. For the moment
it still has the same numerical value 0 as before. Later, that
shall be distinct from NM_SETTING_PRIORITY_CONNECTION.
The dad_counter is hashed into the resulting address. Since we
want the hashing to be independent of the architecture, we always
hash 32 bits of dad_counter. Make the dad_counter argument of
type guint32 for consistency.
In practice this has no effect because:
- for all our (current!) architectures, guint is the same as
guint32.
- all callers of nm_utils_ipv6_addr_set_stable_privacy() keep
their dad-counter argument as guint8, so they never even pass
numbers larger than 255.
- nm_utils_ipv6_addr_set_stable_privacy() limits dad_counter
further against RFC7217_IDGEN_RETRIES.
(cherry picked from commit 951e5f5bf8)
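For illustration, the architecture-independent part of the hashing described above looks roughly like this (always feeding exactly 32 bits, in a fixed byte order, into the checksum):

    #include <arpa/inet.h>
    #include <glib.h>

    /* sketch: feed exactly 4 bytes of the DAD counter into the hash, in a
     * fixed byte order, so the result is the same on 32- and 64-bit
     * architectures. */
    static void
    hash_dad_counter (GChecksum *sum, guint32 dad_counter)
    {
        guint32 counter_be = htonl (dad_counter);

        g_checksum_update (sum, (const guchar *) &counter_be, sizeof (counter_be));
    }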
https://tools.ietf.org/html/rfc7217 says:
The resulting Interface Identifier SHOULD be compared against the
reserved IPv6 Interface Identifiers [RFC5453] [IANA-RESERVED-IID]
and against those Interface Identifiers already employed in an
address of the same network interface and the same network
prefix. In the event that an unacceptable identifier has been
generated, this situation SHOULD be handled in the same way as
the case of duplicate addresses (see Section 6).
In case of conflict, this suggests creating a new address by incrementing
the DAD counter, etc. Don't do that. If we generate an address in the
reserved region, just rehash it right away. Note that the actual address
appears random anyway, so this re-hashing is just as good as incrementing
the DAD counter and going through the entire process again.
Note that we now no longer generate certain addresses that we did
previously. But realize that we now merely reject (1 + 16777216 + 128)
addresses out of 2^64. So, the likelihood of a user accidentally
generating an address that is suddenly rejected is about
9e-13 (1 / 1,099,503,173,697). Which is not astronomically small, but still
extremely unlikely.
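For reference, the arithmetic behind that estimate:

    (1 + 16777216 + 128) / 2^64  =  16777345 / 18446744073709551616  ~=  9.1e-13  ~=  1 / 1,099,503,173,697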
Also, the whole process is anyway built on the idea that somebody else
might generate conflicting addresses (DAD). That means there was always
an extremely tiny chance that the address you generated last time is
suddenly taken by somebody else. So, to a user this change looks as if
these reserved addresses were now claimed by another (non-existing)
host and a different address gets generated -- business as usual, as
far as SLAAC is concerned.
(cherry picked from commit f15c4961ad)
nm_utils_exp10() is a better name, because it is reminiscent of the function
exp10() from <math.h>, which has a similar purpose (but whose argument
is a double, not a gint16).
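Its behavior, sketched (the real helper may differ in details such as overflow handling):

    #include <math.h>
    #include <glib.h>

    /* analogous to exp10(3), but taking a small integer exponent */
    static inline double
    nm_utils_exp10 (gint16 ex)
    {
        return pow (10.0, (double) ex);
    }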
We need a distinction between external activations and assumed
connections. The former shall have the meaning of devices that are
*not* managed by NetworkManager, the latter are configurations that
are gracefully taken over after a restart (but fully managed).
Express that in the activation-type of the active connection.
Also, no longer use the settings NM_SETTINGS_CONNECTION_FLAGS_VOLATILE
flag to determine whether an assumed connection is "external". These
concepts are entirely orthogonal (although in practice, external
activations are in-memory and flagged as volatile, but the inverse
is not necessarily true).
Also change match_connection_filter() to consider all connections.
Later, we only call nm_utils_match_connection() for the connection
we want to assume -- which will be a regular settings connection,
not a generated one.
NMPolicy's auto_activate_device() wants to sort by autoconnect-priority
(nm_utils_cmp_connection_by_autoconnect_priority()), but fall back to the default
nm_settings_connection_cmp_default(), which includes the timestamp.
Extend nm_settings_connection_cmp_default() to consider the
autoconnect-priority as well. That changes behavior so that
nm_settings_connection_cmp_default() is the sort order that
auto_activate_device() wants. That makes sense, as
nm_settings_connection_cmp_default() already considered the
ability to autoconnect first. Hence, it should also honor
the autoconnect priority.
When doing that, rename nm_settings_connection_cmp_default()
to nm_settings_connection_cmp_autoconnect_priority().
Have a proper cmp() function and a wrapper *_p_with_data() that can be
used for g_qsort_with_data().
Thus, establish a naming scheme (*_p_with_data()) for these compare
wrappers that we need all over the place. Note that we also have
nm_strcmp_p_with_data() for the same reason, and more such
functions will follow later.
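The wrapper pattern, sketched with types reduced to the essentials; "connections" is assumed to be a GPtrArray of NMSettingsConnection pointers:

    /* plain compare function operating on the objects themselves */
    int nm_settings_connection_cmp_autoconnect_priority (NMSettingsConnection *a,
                                                         NMSettingsConnection *b);

    /* "_p_with_data" wrapper: the sorted array holds pointers, and
     * g_qsort_with_data() hands the compare function pointers to the
     * elements, hence the extra dereference.  @user_data is unused. */
    static int
    nm_settings_connection_cmp_autoconnect_priority_p_with_data (gconstpointer pa,
                                                                 gconstpointer pb,
                                                                 gpointer user_data)
    {
        return nm_settings_connection_cmp_autoconnect_priority (*((NMSettingsConnection **) pa),
                                                                *((NMSettingsConnection **) pb));
    }

    /* usage: */
    g_qsort_with_data (connections->pdata,
                       connections->len,
                       sizeof (gpointer),
                       nm_settings_connection_cmp_autoconnect_priority_p_with_data,
                       NULL);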
It's not used anymore. Which is a good thing, because if it were used
we'd have to get rid of the uses.
It accepted a whitespace-separated string for an argument, which is
never useful for us; and it indicated an error either on g_spawn_sync()
failure or on an error status code of the spawned program, but only set the
error in the former case, which had led to errors.
The world would be a bit nicer place without it.
(But not much.)
Changes:
- match_device_s390_subchannels_parse() should accept uninitialized
arguments a, b, c, as they are strictly output arguments (without
transferring ownership).
- the output arguments should be set if (and only if) the function
succeeds. That is, move assigning the output arguments to the end.
- increase the BUFSIZE. It's unclear why 10 was chosen. Probably it
was already sufficient, as a subchannel looks like
"0.0.f5f0,0.0.f5f1,0.0.f5f2". Still, increase it to be ample.
If we want to restrict the parsing based on the length of the input,
that should be done explicitly (but that seems not desirable).
- use _nm_utils_ascii_str_to_int64(), which checks that the range
of the values fits in guint32 (a parsing sketch follows below).
It seems wrong that match_device_s390_subchannels_eval() only compares
the first of up to three subchannels. But leave it as is for now.
(cherry picked from commit 419151a19e)
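A sketch of parsing one subchannel triple like "0.0.f5f0" along those lines. Buffer handling is simplified, the helper is hypothetical, and _nm_utils_ascii_str_to_int64() is used as described (base 16, range-checked to guint32, with a fallback that signals failure):

    #include <stdio.h>
    #include <glib.h>

    /* hypothetical helper: parse "a.b.c" (hex components) and set the
     * output arguments only on success. */
    static gboolean
    parse_subchannel (const char *str, guint32 *a, guint32 *b, guint32 *c)
    {
        char buf1[64], buf2[64], buf3[64];   /* ample on purpose */
        gint64 va, vb, vc;

        if (sscanf (str, "%63[0-9a-fA-F].%63[0-9a-fA-F].%63[0-9a-fA-F]",
                    buf1, buf2, buf3) != 3)
            return FALSE;

        /* base 16, restricted to [0, G_MAXUINT32]; -1 signals failure */
        va = _nm_utils_ascii_str_to_int64 (buf1, 16, 0, G_MAXUINT32, -1);
        vb = _nm_utils_ascii_str_to_int64 (buf2, 16, 0, G_MAXUINT32, -1);
        vc = _nm_utils_ascii_str_to_int64 (buf3, 16, 0, G_MAXUINT32, -1);
        if (va == -1 || vb == -1 || vc == -1)
            return FALSE;

        *a = (guint32) va;
        *b = (guint32) vb;
        *c = (guint32) vc;
        return TRUE;
    }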
When searching for "*", we still need to check for higher priority
"except:" matches. But don't duplicate the search loop and just
proceed with the regular searched.
It already has the "if (!except && match == NM_MATCH_SPEC_MATCH)" which
short-cuts the search.
(cherry picked from commit 9fff9f501a)
Instead of passing individual arguments for the match, create
a MatchDeviceData structure and pass it on.
This reduces the number of arguments and extending it later should
be easier. Also, lazily parse the hardware address as needed.
(cherry picked from commit b0aaff86b6)
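Roughly the shape of such a structure; the field names are illustrative, and the interesting part is the lazily parsed hardware address:

    typedef struct {
        const char *interface_name;
        const char *device_type;
        const char *s390_subchannels;
        struct {
            const char *str;
            /* parsed lazily, only when a spec actually matches on hwaddr */
            gboolean    parsed;
            guint8      bin[20 /* assumed NM_UTILS_HWADDR_LEN_MAX */];
            gsize       len;
        } hwaddr;
    } MatchDeviceData;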
Previously, we would have different functions like
- nm_match_spec_device_type()
- nm_match_spec_hwaddr()
- nm_match_spec_s390_subchannels()
- nm_match_spec_interface_name()
which all would handle one type of match-spec.
So, to get the overall result whether the arguments
match or not, nm_device_spec_match_list() had to stitch
them together and iterate the list multiple times.
Refactor the code to have one nm_match_spec_device()
function that gets all relevant parameters.
The upside is:
- the logic for how to evaluate the match-spec is all in one place
(match_device_eval()) instead of being spread over multiple
functions.
- It requires iterating the list at most twice. Twice, because
we do a fast pre-search for "*".
One downside could be that we have to pass all 4 arguments
for the evaluation, even if they might not be needed. That is
because "nm-core-utils.c" shall be independent of NMDevice, so it
cannot receive a device instance to get the parameters as needed.
As we add new match-types, the argument list would grow.
However, all arguments are cached and fetching them from the
device's private data is very cheap.
(cherry picked from commit b957403efd)
Use case: when connecting to a public Wi-Fi with MAC address randomization
("wifi.cloned-mac-address=random"), you get a new IP address on every
re-connect due to the changing MAC address.
"wifi.cloned-mac-address=stable" is the solution for that. But that
means that every time you reconnect to this network, the same ID will
be reused. We want an ID that is stable for a while, but at a later
point a new ID should be generated when revisiting the Wi-Fi network.
Extend the stable-id to become dynamic and support templates/substitutions.
Currently supported are "${CONNECTION}", "${BOOT}" and "${RANDOM}".
Any unrecognized pattern is treated verbatim/untranslated.
"$$" is treated specially to allow escaping the '$' character. This allows
the user to still embed verbatim '$' characters with the guarantee that
future versions of NetworkManager will still generate the same ID.
Of course, a user could just avoid '$' in the stable-id unless using
it for dynamic substitutions.
Later we might want to add more recognized substitutions. For example, it
could be useful to generate new IDs based on the current time. The ${} syntax
is extendable to support arguments like "${PERIODIC:weekly}".
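A minimal sketch of how such a template could be expanded (not the actual implementation; "${RANDOM}" is omitted for brevity and the helper name is made up):

    #include <string.h>
    #include <glib.h>

    /* expand the recognized substitutions, keep anything unrecognized
     * verbatim, and let "$$" escape a literal '$' */
    static char *
    expand_stable_id (const char *stable_id, const char *connection_uuid, const char *boot_id)
    {
        GString *str = g_string_new (NULL);

        while (*stable_id) {
            if (g_str_has_prefix (stable_id, "$$")) {
                g_string_append_c (str, '$');
                stable_id += 2;
            } else if (g_str_has_prefix (stable_id, "${CONNECTION}")) {
                g_string_append (str, connection_uuid);
                stable_id += strlen ("${CONNECTION}");
            } else if (g_str_has_prefix (stable_id, "${BOOT}")) {
                g_string_append (str, boot_id);
                stable_id += strlen ("${BOOT}");
            } else {
                /* unrecognized patterns (including unknown "${...}") stay verbatim */
                g_string_append_c (str, *stable_id++);
            }
        }
        return g_string_free (str, FALSE);
    }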
Also allow "connection.stable-id" to be set as global default value.
Previously that made no sense because the stable-id was static
and is anyway strongly tied to the identity of the connection profile.
Now, with dynamic stable-ids it gets much more useful to specify
a global default.
Note that pre-existing stable-ids don't change and still generate
the same addresses -- unless they contain one of the new ${} patterns.