It's not the job of the keyfile writer to enforce certain settings. A
%NULL GBytes property shall be treated as distinct from a byte array
with zero length.
The NMSetting may or may not reject such settings as invalid during
verify() or mangle them during normalize(). But reader/writer should
just serialize every property as-is.
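
For illustration, a minimal GLib sketch of the distinction (not
NetworkManager code; the "omit the key vs. write an empty value"
behavior is just one plausible way a writer could serialize the two
cases):

    #include <glib.h>

    int
    main (void)
    {
        /* a zero-length byte array is not the same as an unset (%NULL) one */
        GBytes *empty = g_bytes_new (NULL, 0);
        GBytes *unset = NULL;

        /* the writer only serializes what it gets; it does not second-guess
         * whether an empty value is "valid" */
        g_print ("empty: size=%" G_GSIZE_FORMAT "\n", g_bytes_get_size (empty));
        g_print ("unset: %s\n", unset ? "serialize value" : "omit the key");

        g_bytes_unref (empty);
        return 0;
    }
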
We use write_array_of_uint() for G_TYPE_ARRAY. In practice, only
NMSettingDcb has any properties of this type.
Furthermore, all valid values are either gboolean or guints of
restricted range. Thus, no valid NMSettingDcb should violate the
range check.
Same for reader.
It's really ugly to blindly use the uint-list reader for G_TYPE_ARRAY,
especially because certain G_TYPE_ARRAY properties of NMSettingDcb
are actually arrays of gboolean, which only ~accidentally~ has the same
memory layout as guint.
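
A rough, self-contained sketch of what such a writer has to do
(illustrative only; write_uint_array(), the group/key names and the
exact range check are assumptions, not the real write_array_of_uint()):

    #include <glib.h>

    /* Serialize a GArray of guint as a keyfile integer list. NMSettingDcb's
     * G_TYPE_ARRAY properties hold gbooleans or small guints, so valid
     * settings never trip the range check; gboolean merely happens to have
     * the same size as guint, which is why treating both as guint works. */
    static gboolean
    write_uint_array (GKeyFile *keyfile,
                      const char *group,
                      const char *key,
                      GArray *array)
    {
        gint *values = g_new (gint, MAX (array->len, 1u));
        guint i;

        for (i = 0; i < array->len; i++) {
            guint v = g_array_index (array, guint, i);

            if (v > G_MAXINT) {
                g_free (values);
                return FALSE;
            }
            values[i] = (gint) v;
        }
        g_key_file_set_integer_list (keyfile, group, key, values, array->len);
        g_free (values);
        return TRUE;
    }

    int
    main (void)
    {
        GKeyFile *kf = g_key_file_new ();
        GArray *arr = g_array_new (FALSE, FALSE, sizeof (guint));
        guint vals[] = { 1, 0, 7 };
        gchar *data;

        g_array_append_vals (arr, vals, G_N_ELEMENTS (vals));
        write_uint_array (kf, "dcb", "priority-flow-control", arr);
        data = g_key_file_to_data (kf, NULL, NULL);
        g_print ("%s", data);

        g_free (data);
        g_array_unref (arr);
        g_key_file_free (kf);
        return 0;
    }
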
No longer use a regex to pre-evaluate whether @tmp_string looks
like an integer list. Instead, parse the integer list ourselves.
First, drop the nm_keyfile_plugin_kf_has_key() check.
Note that this merely verifies that such a key exists. It's rather
pointless, because get_bytes() is only called for existing keys.
Still, in case the check would actually yield differing results
from the following nm_keyfile_plugin_kf_get_string(), we want to
act depending on what nm_keyfile_plugin_kf_get_string() returns.
Note that nm_keyfile_plugin_kf_get_string() looks up the key, falling
back to the setting's alias. Then GKeyFile parses the raw keyfile
value and returns it as a string.
Previously, we would first decide whether @tmp_string looks like an
integer list, in order to decide whether to parse it via
nm_keyfile_plugin_kf_get_integer_list().
But note that it's not clear that nm_keyfile_plugin_kf_get_integer_list()
operates on the same string as nm_keyfile_plugin_kf_get_string().
Could it end up parsing a different value, for example depending on
whether the key or only its alias exists?
E.g. when setting "802-11-wireless.ssid=foo" and "wifi.ssid=60;" they
clearly would yield differing results: "foo" vs. [60].
Ok, probably it is not an issue because we first call
nm_keyfile_plugin_kf_get_string(), decide whether it looks like an
integer list, and return "foo" right away.
This is still confusing and relies on knowledge about how the value
is encoded as a string list.
Likewise, could our regex determine that the value looks like an
integer list while the integer-list parser is unable to parse it?
Certainly that can happen for values larger than 255.
Just make it consistent. Get *one* @tmp_string. Try (manually) to
interpret it as an integer list, or bail out and use it as plain text.
Also, allow returning empty GBytes arrays. If somebody specifies an
empty list, it's empty. Not NULL.
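
A self-contained sketch of that manual interpretation (not the actual
get_bytes() code; parse_as_byte_list() and the exact tolerance for
whitespace and a trailing ';' are assumptions, the 0-255 range follows
from the text above):

    #include <glib.h>

    /* Try to interpret @str as a ';'-separated list of byte values (0-255).
     * On success return a GBytes (possibly empty); otherwise return %NULL so
     * the caller falls back to using @str as plain text. */
    static GBytes *
    parse_as_byte_list (const char *str)
    {
        GByteArray *out = g_byte_array_new ();
        gchar **tokens = g_strsplit (str, ";", -1);
        guint i;

        for (i = 0; tokens[i]; i++) {
            const char *tok = g_strstrip (tokens[i]);
            gchar *end = NULL;
            guint64 v;
            guint8 b;

            if (!tok[0])
                continue;                   /* tolerate a trailing ';' */
            v = g_ascii_strtoull (tok, &end, 10);
            if (*end || v > 255) {
                g_strfreev (tokens);
                g_byte_array_unref (out);
                return NULL;                /* not an integer list after all */
            }
            b = (guint8) v;
            g_byte_array_append (out, &b, 1);
        }
        g_strfreev (tokens);

        /* an empty list gives an empty GBytes, not %NULL */
        return g_byte_array_free_to_bytes (out);
    }

    int
    main (void)
    {
        GBytes *bytes = parse_as_byte_list ("60;");

        if (bytes) {
            g_print ("parsed %" G_GSIZE_FORMAT " byte(s)\n", g_bytes_get_size (bytes));
            g_bytes_unref (bytes);
        }
        return 0;
    }
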
Previously, due to a bug in "tools/enums-to-docbook.pl", enum values
without an explicit numeric value were wrongly parsed. That is fixed,
but still explicitly set the value in the public header.
We used the MASTER, BRIDGE and TEAM_MASTER keys for a different purpose
than the network.service did, confusing the legacy tooling. Let's do our
best to write compatible configuration files (sketched below):
* Add *_UUID properties that won't clash with initscripts
* Ignore non-*_UUID keys on read if *_UUID is present
* If connection.master is a UUID of a connection with a
  connection.interface-name, write the UUID into the *_UUID key while
  setting the non-*_UUID key to the interface name for compatibility
https://bugzilla.redhat.com/show_bug.cgi?id=1369091
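
For illustration, this is how I read the intended on-disk result for
the ifcfg file of a bond slave (the values are made up and the comments
are of course not written by the plugin):

    TYPE=Ethernet
    DEVICE=ens3
    # NetworkManager's key: the UUID of the master connection. Ignored by
    # initscripts, preferred by NetworkManager on read when present.
    MASTER_UUID=5f0b4d3a-9b4e-4d1c-8f62-0a9f3c2d1e47
    # Legacy key, kept compatible with network.service: the master's
    # interface name, taken from its connection.interface-name.
    MASTER=bond0
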
Moving the PPP manager to a separate plugin that is loaded on demand
has the advantage of slightly reducing the memory footprint and making
it possible to install PPP support only where needed.
https://bugzilla.gnome.org/show_bug.cgi?id=773482
When disabling link autonegotiation and setting speed and duplex
manually, the user is on their own: no check is performed against the
supplied values. So the user is expected to verify that the device
supports those values. Make this explicit in the nm-settings man page.
[fgiudici@redhat.com: added the commit message]
Declassify bad combinations of the auto-negotiate, duplex and speed
property values from _VERIFY_NORMALIZABLE_ERRORS to
_VERIFY_NORMALIZABLE. This preserves compatibility with legacy
nm-connection-editor versions.
If auto-negotiate is switched off, enforce that speed and duplex are
either both set or both unset (which means "ignore"): if only one is
set, reset both silently and ignore link configuration.
NOTE: changed the default value for auto-negotiate from TRUE to FALSE.
Normalization enforces that speed and duplex carry no values when
autonegotiation is on. This is required because, to ethtool,
autonegotiation enabled with a specific speed and duplex set means to
use autonegotiation but advertise only that specific speed and duplex.
Autoneg off, speed 0 and duplex NULL means to ignore link negotiation.
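
A minimal sketch of these two rules together (plain C with made-up
names; the real logic lives in NMSettingWired's verify()/normalize()):

    #include <stdio.h>
    #include <stdbool.h>

    /* Illustrative only: names and types here are not NetworkManager's. */
    typedef struct {
        bool        autoneg;  /* default now FALSE */
        unsigned    speed;    /* 0 = unset */
        const char *duplex;   /* NULL = unset */
    } WiredLinkSettings;

    static void
    normalize_link_settings (WiredLinkSettings *s)
    {
        if (s->autoneg) {
            /* with autonegotiation on, a fixed speed/duplex would make
             * ethtool advertise only that one mode; forbid it. */
            s->speed = 0;
            s->duplex = NULL;
        } else if ((s->speed == 0) != (s->duplex == NULL)) {
            /* only one of the two is set: silently reset both, which
             * means "ignore link configuration". */
            s->speed = 0;
            s->duplex = NULL;
        }
    }

    int
    main (void)
    {
        WiredLinkSettings s = { .autoneg = false, .speed = 100, .duplex = NULL };

        normalize_link_settings (&s);
        printf ("speed=%u duplex=%s\n", s.speed, s.duplex ? s.duplex : "(unset)");
        return 0;
    }
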
../../libnm-core/nm-vpn-editor-plugin.h:108: warning: Field description for NMVpnEditorPluginInterface::notify_plugin_info_set is missing in source code comment block.
nm_utils_hexstr2bin() is quite similar to hwaddr_aton(), and
nm_utils_bin2hexstr() is similar to _bin2str_buf().
Move them close to each other. Maybe one day they should be
consolidated. But their API is slightly different, so the
consolidation would require some effort.
For now, just move them close to each other, so that it's
clearer that two similar (but distinct) functions exist.
The compiler warns us when we don't specify all enum values
in a switch(), provided that default: is missing.
Make use of that to get a warning when we add a new tunnel mode.
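
For illustration (the enum is made up, not the real NMIPTunnelMode):

    #include <stdio.h>

    /* With no "default:" label, GCC/Clang's -Wswitch (enabled by -Wall)
     * warns as soon as a new mode is added to the enum but not handled
     * in the switch. */
    typedef enum {
        TUNNEL_MODE_IPIP,
        TUNNEL_MODE_GRE,
        TUNNEL_MODE_SIT,
    } TunnelMode;

    static const char *
    tunnel_mode_to_string (TunnelMode mode)
    {
        switch (mode) {
        case TUNNEL_MODE_IPIP: return "ipip";
        case TUNNEL_MODE_GRE:  return "gre";
        case TUNNEL_MODE_SIT:  return "sit";
        /* intentionally no "default:"; adding e.g. TUNNEL_MODE_VTI to the
         * enum without a case here triggers the warning */
        }
        return NULL;
    }

    int
    main (void)
    {
        printf ("%s\n", tunnel_mode_to_string (TUNNEL_MODE_GRE));
        return 0;
    }
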
IPv4 already allows setting an address and reusing its prefix for the
network it shares the connection with. Additionally, for IPv6, NDP can
also share the DNS configuration.
_nm_utils_hwaddr_length() did a validation of the string
and returned the length of the address. In all cases where
we were interested in that, we also either want to validate
the address, get the address in binary form, or canonicalize
the address.
We can avoid these duplicate checks by using _nm_utils_hwaddr_aton(),
which both does the parsing and returns the length.
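
As a stand-alone illustration of the pattern (this is not the real
_nm_utils_hwaddr_aton(), which is internal and has a different
signature), one call can validate, convert and return the length at
once:

    #include <glib.h>

    /* Parse a colon-separated (or plain) hex address into @buffer.
     * Returns the binary length, or -1 if the string is invalid or does
     * not fit. No separate "length/validity" pass is needed. */
    static gssize
    hwaddr_parse (const char *asc, guint8 *buffer, gsize buffer_len)
    {
        gsize len = 0;

        while (*asc) {
            int hi = g_ascii_xdigit_value (asc[0]);
            int lo = (hi >= 0) ? g_ascii_xdigit_value (asc[1]) : -1;

            if (lo < 0 || len >= buffer_len)
                return -1;                  /* invalid or too long */
            buffer[len++] = (guint8) ((hi << 4) | lo);
            asc += 2;
            if (*asc == ':')
                asc++;
        }
        return len > 0 ? (gssize) len : -1;
    }

    int
    main (void)
    {
        guint8 buf[32];
        gssize len = hwaddr_parse ("00:11:22:33:44:55", buf, sizeof (buf));

        g_print ("parsed length: %" G_GSSIZE_FORMAT "\n", len);
        return 0;
    }
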
When a global checkpoint is created (one with an empty device list) we
save the status of all devices in order to restore it later. After the
checkpoint, new interfaces and connections may appear and they can
significantly influence the overall networking status, but we don't
consider them at the moment.
Introduce a new flag DELETE_NEW_CONNECTIONS to delete any connection
added after the checkpoint, and similarly a DISCONNECT_NEW_DEVICES
flag to ensure that connections activated on newly appeared devices
don't disrupt network connectivity.
https://bugzilla.redhat.com/show_bug.cgi?id=1378393