Note the special error codes NM_UTILS_ERROR_CONNECTION_AVAILABLE_*.
These will be used to determine whether the profile is fundamentally
incompatible with the device, or whether merely some other properties
mismatch. That information will be important during a plain `nmcli
connection up`, where NetworkManager searches all devices for a device
to activate. If no device is found (and multiple errors happened),
we want to show the error that is most likely relevant for the user.
Also note how NMDevice's check_connection_compatible() uses the new
class field "device_class->connection_type_check_compatible" to simplify
checks for compatible profiles.
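To illustrate, a rough sketch for a hypothetical NMDeviceFoo subclass
(the vfunc signature, the error domain, the specific code name beyond
the NM_UTILS_ERROR_CONNECTION_AVAILABLE_ prefix, and the helper
_profile_matches_current_config() are assumptions, not the real code):

static gboolean
check_connection_compatible (NMDevice *device, NMConnection *connection, GError **error)
{
    /* the base implementation compares the connection type against
     * device_class->connection_type_check_compatible and sets a
     * "fundamentally incompatible" error code on mismatch. */
    if (!NM_DEVICE_CLASS (nm_device_foo_parent_class)->check_connection_compatible (device, connection, error))
        return FALSE;

    /* subclass-specific checks only signal a softer, "temporary" mismatch. */
    if (!_profile_matches_current_config (device, connection)) {
        g_set_error_literal (error,
                             NM_UTILS_ERROR,
                             NM_UTILS_ERROR_CONNECTION_AVAILABLE_TEMPORARY,
                             "profile does not match the device's current configuration");
        return FALSE;
    }
    return TRUE;
}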
The error reason is still unused.
It seems to me the NM_DEVICE_CLASS_DECLARE_TYPES() macro confuses more
than it helps. Let's explicitly initialize the two fields, albeit with
another helper macro NM_DEVICE_DEFINE_LINK_TYPES() to get the list of
link-types right.
For consistency, also leave nop-lines like
device_class->connection_type_supported = NULL;
device_class->link_types = NM_DEVICE_DEFINE_LINK_TYPES ();
because all NMDevice class-init methods should have this same
boilerplate code, and to make it explicit that this is intended.
And there are only 3 occurrences where this actually comes into play.
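For illustration, the class-init of a hypothetical NMDeviceFoo then
looks like this (a sketch, not actual code from the tree):

static void
nm_device_foo_class_init (NMDeviceFooClass *klass)
{
    NMDeviceClass *device_class = NM_DEVICE_CLASS (klass);

    /* explicit nop-lines, kept identical across all device implementations
     * to make it obvious that nothing was forgotten. */
    device_class->connection_type_supported = NULL;
    device_class->link_types = NM_DEVICE_DEFINE_LINK_TYPES ();
}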
The majority of device implementations name their parent-class variable
"device_class". That also makes more sense as it is more consistant.
E.g. "parent" sounds like it's the direct parent, but that is not
the crucial point here. The crucial point at this place, is that we
access the NMDeviceClass typed pointer. Rename.
Previously, there were two functions nm_ppp_manager_stop_sync() and
nm_ppp_manager_stop_async().
However, stop-sync() would still kill the process asynchronously (with a
2 second timeout before sending SIGKILL).
On the other hand, stop-async() did pretty much the same thing as the
sync code, except that it also used the GAsyncResult pattern.
Merge the two functions. Stopping the instance can for the most part be
done entirely synchronously. The only asynchronous part is waiting
for the process to terminate. For that, add a new callback
argument to nm_ppp_manager_stop(). This replaces the GAsyncResult
pattern.
Also, always ensure that NetworkManager keeps iterating the mainloop
at least until the process has really terminated. Currently we don't get
that right: during shutdown we just stop iterating the mainloop. However,
fix this from the point of view of NMPPPManager and register a
wait-object that will later correctly delay shutdown.
Also, NMDeviceWwan took care to wait (asynchronously) until pppd really
terminated. Keep that functionality. nm_ppp_manager_stop() returns
a handle that can be used to cancel the asynchronous request and invoke
the callback right away. Note however that even when cancelling the
request, the wait-object that prevents shutdown of NetworkManager is
kept around, so that we can be sure to properly clean up.
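A rough sketch of how the reworked API could look (the exact names of
the handle type, the callback and the cancel function are assumptions):

typedef struct _NMPPPManagerStopHandle NMPPPManagerStopHandle;

typedef void (*NMPPPManagerStopCallback) (NMPPPManager *manager,
                                          NMPPPManagerStopHandle *handle,
                                          gboolean was_cancelled,
                                          gpointer user_data);

/* stops pppd; the synchronous part happens immediately, @callback fires
 * once the process has actually terminated. */
NMPPPManagerStopHandle *nm_ppp_manager_stop (NMPPPManager *self,
                                             GCancellable *cancellable,
                                             NMPPPManagerStopCallback callback,
                                             gpointer user_data);

/* invokes @callback right away; the internal wait-object that delays
 * NetworkManager's shutdown stays registered until pppd really exited. */
void nm_ppp_manager_stop_cancel (NMPPPManagerStopHandle *handle);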
Previously, we used the generated GDBusInterfaceSkeleton types and glued
them via the NMExportedObject base class to our NM types. We also used
GDBusObjectManagerServer.
Don't do that anymore. The resulting code was more complicated despite (or
because of?) using generated classes. It was hard to understand, complex, had
ordering issues, and had a runtime and memory overhead.
This patch refactors this entirely and uses the lower layer API GDBusConnection
directly. It replaces the generated code, GDBusInterfaceSkeleton, and
GDBusObjectManagerServer. All this is now done by NMDbusObject and NMDBusManager
and static descriptor instances of type GDBusInterfaceInfo.
This adds a net plus of more than 1300 lines of hand-written code. I claim
that this implementation is easier to understand. Note that previously we
also required extensive and complex glue code to bind our objects to the
generated skeleton objects. Instead, now glue our objects directly to
GDBusConnection. The result is more immediate and gets rid of layers of
code in between.
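As a minimal sketch of the pattern (interface, method and object-path
names here are placeholders, not NM's real ones):

static GDBusMethodInfo *_methods[] = {
    &(GDBusMethodInfo) { -1, "Frobnicate", NULL, NULL, NULL },
    NULL,
};

static GDBusInterfaceInfo _iface_info = {
    -1, "org.example.Foo", _methods, NULL, NULL, NULL,
};

static void
_method_call (GDBusConnection *connection, const char *sender, const char *object_path,
              const char *interface_name, const char *method_name, GVariant *parameters,
              GDBusMethodInvocation *invocation, gpointer user_data)
{
    g_dbus_method_invocation_return_value (invocation, NULL);
}

static const GDBusInterfaceVTable _vtable = { .method_call = _method_call };

static guint
_export_object (GDBusConnection *connection, const char *path, GError **error)
{
    /* no generated skeleton, no GDBusObjectManagerServer: the object is
     * registered directly against the connection. */
    return g_dbus_connection_register_object (connection, path, &_iface_info,
                                              &_vtable, NULL, NULL, error);
}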
Now that the D-Bus glue is more under our control, we can address issues and
bottlenecks better, instead of adding code to bend the generated skeletons
to our needs.
Note that the current implementation now only supports one D-Bus connection.
That was effectively the case already, although there were places (and still are)
where the code pretends it could also support connections from a private socket.
We dropped private socket support mainly because it was unused, untested and
buggy, but also because GDBusObjectManagerServer could not export the same
objects on multiple connections. Now, it would be rather straightforward to
fix that and re-introduce ObjectManager on each private connection. But this
commit doesn't do that yet, and the new code intentionally supports only one
D-Bus connection.
Also, the D-Bus startup was simplified. There is no retry: either nm_dbus_manager_start()
succeeds, or it detects the initrd case. In the initrd case, the bus manager never tries to
connect to D-Bus. Since the initrd scenario is not yet used/tested, this is good enough
for the moment. It could easily be extended later, for example by polling whether the
system bus appears (as was done previously). Also, restart of the D-Bus daemon isn't
supported either -- just like before.
Note how NMDBusManager now implements the ObjectManager D-Bus interface
directly.
Also, this fixes race issues in the server, by no longer delaying
PropertiesChanged signals. NMExportedObject would collect changed
properties and send the signal out in idle_emit_properties_changed()
on idle. This messes up the ordering of change events w.r.t. other
signals and events on the bus. Note that not only NMExportedObject
messed up the ordering. The generated code would also hook into
notify() and process change events in an idle handler, exhibiting the
same ordering issue.
No longer do that. PropertiesChanged signals will be sent right away
by hooking into dispatch_properties_changed(). This means that changing
a property in quick succession will no longer be combined and is
guaranteed to emit signals for each individual state. Quite possibly
we now emit more PropertiesChanged signals than before.
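As a sketch of that hook (the emit helper is a placeholder, not NM's
actual function name):

static void
dispatch_properties_changed (GObject *object, guint n_pspecs, GParamSpec **pspecs)
{
    guint i;

    /* emit the D-Bus PropertiesChanged signal right away, preserving the
     * ordering with respect to other signals and events on the bus. */
    for (i = 0; i < n_pspecs; i++)
        _emit_dbus_properties_changed (object, pspecs[i]);

    /* chain up to GObject's default dispatch */
    G_OBJECT_CLASS (nm_dbus_object_parent_class)->dispatch_properties_changed (object, n_pspecs, pspecs);
}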
However, we are now able to group a set of changes by using standard
g_object_freeze_notify()/g_object_thaw_notify(). We probably should
make more use of that.
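For example (property names are just for illustration):

g_object_freeze_notify (G_OBJECT (device));
g_object_set (G_OBJECT (device), "autoconnect", FALSE, NULL);
g_object_set (G_OBJECT (device), "managed", TRUE, NULL);
/* both notifications are dispatched together here, so they can be
 * grouped into one PropertiesChanged emission instead of two: */
g_object_thaw_notify (G_OBJECT (device));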
Also, now that our signals are all handled in the right order, we
might find places where we still emit them in the wrong order. But that
is then due to the order in which our GObjects emit signals, not due
to an ill behavior of the D-Bus glue. Possibly we need to identify
such ordering issues and fix them.
Numbers (for contrib/rpm --without debug on x86_64):
- the patch changes the code size of NetworkManager by
- 2809360 bytes
+ 2537528 bytes (-9.7%)
- Runtime measurements are harder because there is a large variance
during testing. In other words, the numbers are not reproducible.
Currently, the implementation performs no caching of GVariants at all,
but it would be rather simple to add it, if that turns out to be
useful.
Anyway, without making a strong claim, it seems that the new form tends
to perform slightly better. That would be no surprise.
$ time (for i in {1..1000}; do nmcli >/dev/null || break; echo -n .; done)
- real 1m39.355s
+ real 1m37.432s
$ time (for i in {1..2000}; do busctl call org.freedesktop.NetworkManager /org/freedesktop org.freedesktop.DBus.ObjectManager GetManagedObjects > /dev/null || break; echo -n .; done)
- real 0m26.843s
+ real 0m25.281s
- Regarding RSS size, just looking at the processes under similar
conditions doesn't show a large difference. On my system they
consume about 19MB RSS. It seems that the new version has a
slightly smaller RSS size.
- 19356 RSS
+ 18660 RSS
If IPV6CP terminates before IPCP, pppd enters the RUNNING phase and we
start IP configuration without having an IP interface set, which
triggers assertions.
Instead, add a SetIfindex() D-Bus method that gets called by the
plugin when pppd becomes RUNNING. The method sets the IP ifindex of
the device and starts IP configuration.
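A rough sketch of the plugin side (the proxy setup, the argument
signature and the exact call flags are assumptions):

/* called from the pppd plugin once pppd reaches the RUNNING phase */
static void
notify_ifindex (GDBusProxy *proxy, int ifindex)
{
    g_dbus_proxy_call (proxy,
                       "SetIfindex",
                       g_variant_new ("(i)", ifindex),
                       G_DBUS_CALL_FLAGS_NONE,
                       -1,          /* default timeout */
                       NULL,        /* no cancellable */
                       NULL, NULL); /* fire-and-forget */
}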
https://bugzilla.redhat.com/show_bug.cgi?id=1515829
Instead of having 3 properties @gateway, @never_default and @has_gateway
on NMIP4Config/NMIP6Config that determine the default-route, track the
default-route as a regular route.
The gateway setting is the configuration knob for the default-route.
Since an NMIP4Config/NMIP6Config instance only has one gateway property,
it cannot track more than one default-route (see related bug rh#1445417).
Especially with policy routing, it might be interesting to configure a
default-route in multiple tables.
Also, later it might be interesting to allow adding default-routes as
regular static routes in a connection, so that the user can configure additional
route parameters for the default-route or add default-routes in multiple tables.
With this patch, default-routes now have a rt_source property according to their
origin.
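Conceptually (a sketch only; apart from rt_source, the field names of
NMPlatformIP4Route and the source value are not guaranteed to match):

/* the default-route is now just a regular route entry with a /0 destination */
const NMPlatformIP4Route default_route = {
    .network   = 0,                        /* 0.0.0.0 */
    .plen      = 0,                        /* prefix length 0 => default-route */
    .gateway   = gateway_from_setting,     /* from the "gateway" configuration knob */
    .rt_source = NM_IP_CONFIG_SOURCE_USER, /* origin of this route */
};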
Also, the previous commits of this branch broke handling of the
default-route :). That should now be working again.
At startup the manager tries to create virtual devices in no specific
specific order and spits warnings when a device can't be realized
because the parent device is not yet created. These failures are not
something the user should worry about because the creation will be
retried when the parent appears.
A better approach is to return an error code from the device's
create_and_realize() indicating that it failed because the parent doesn't
exist. In this way, the manager knows that the device isn't ready and
can avoid printing warning messages.
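For instance, create_and_realize() could report this condition roughly
like the following (the vfunc signature and the specific error code are
assumptions):

static gboolean
create_and_realize (NMDevice *device,
                    NMConnection *connection,
                    NMDevice *parent,
                    const NMPlatformLink **out_plink,
                    GError **error)
{
    if (!parent) {
        /* not a real failure: the manager will retry once the parent appears
         * and can skip the warning for this error code. */
        g_set_error_literal (error, NM_DEVICE_ERROR, NM_DEVICE_ERROR_MISSING_DEPENDENCIES,
                             "parent device not available yet");
        return FALSE;
    }
    /* ... normal link creation continues here ... */
    return TRUE;
}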
Previously, when the interface created by pppd was already the one we
expected, we would rename it to itself and remove the device from the
manager. Don't do that.
Fixes: 6c3195931e
Add code to NMPppDevice to activate new-style PPPoE connections. This
is a bit tricky because we can't create the link as usual in
create_and_realize(). Instead, we create a device without ifindex and
start pppd in stage2; when pppd reports a new configuration, we rename
the platform link to the correct name and set the ifindex on the
device.
This mechanism is inherently racy, but there is no way to tell pppd to
create an interface with an arbitrary name.
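A rough sketch of that adoption step (both helpers are hypothetical
stand-ins for whatever internal calls are actually used):

static void
ppp_ifindex_set (NMPppDevice *self, int ifindex, const char *expected_iface)
{
    /* pppd created e.g. "ppp0": rename the platform link to the name the
     * profile expects, then attach the ifindex to the so-far ifindex-less
     * device. */
    _platform_link_rename (ifindex, expected_iface);
    _device_take_ip_ifindex (NM_DEVICE (self), ifindex);
}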
The new device type represents a PPP interface, and will implement the
activation of new-style PPPoE connections, i.e. the ones that don't
claim the parent device.