Add nm_modem_manager_has_modem_for_iface() to the modem-manager API
and ignore device additions in nm-manager if the iface is claimed by
modem-manager; also forget about already-managed devices once they
get claimed by modem-manager.
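
A lookup like that can be as simple as the following sketch; the
GHashTable storage and the NMModemManagerPrivate layout here are
assumptions for illustration, not the actual implementation:

    #include <glib.h>

    /* Hypothetical private data; how NMModemManager really tracks
     * its modems is an implementation detail. Here modems are keyed
     * by interface name. */
    typedef struct {
        GHashTable *modems;  /* iface (char *) -> modem object */
    } NMModemManagerPrivate;

    gboolean
    nm_modem_manager_has_modem_for_iface (NMModemManagerPrivate *priv,
                                          const char *iface)
    {
        g_return_val_if_fail (priv != NULL, FALSE);
        g_return_val_if_fail (iface != NULL, FALSE);

        return g_hash_table_lookup (priv->modems, iface) != NULL;
    }

nm-manager's device-added handler can then bail out early whenever
this returns TRUE for the new interface.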
Since NM adds the gateway host route in the manner that's correct
for the current routing situation, we don't really want random
gateway host routes from the VPN server getting added instead.
For IPv6 autoconf, the addresses returned from NMIP6Manager will
already have been added to the interface, and removing and re-adding
them would cause extra netlink notifications that could make us
think further changes have been made. So change the config-applying
code to only remove addresses that aren't part of the new config.
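
A sketch of the diffing idea, using plain strings in GSLists for
brevity; the real code compares the structured addresses in the IP6
config, and the helper names here are made up:

    #include <glib.h>

    /* Hypothetical stand-ins for the real netlink add/remove calls. */
    static void
    remove_address (const char *addr)
    {
        g_print ("removing %s\n", addr);
    }

    static void
    add_address (const char *addr)
    {
        g_print ("adding %s\n", addr);
    }

    static gboolean
    addr_in_list (GSList *list, const char *addr)
    {
        for (; list; list = list->next) {
            if (g_strcmp0 (list->data, addr) == 0)
                return TRUE;
        }
        return FALSE;
    }

    /* Remove only addresses missing from the new config, and add
     * only addresses not already present, so untouched addresses
     * generate no netlink notifications at all. */
    void
    apply_addresses (GSList *existing, GSList *new_config)
    {
        GSList *iter;

        for (iter = existing; iter; iter = iter->next) {
            if (!addr_in_list (new_config, iter->data))
                remove_address (iter->data);
        }
        for (iter = new_config; iter; iter = iter->next) {
            if (!addr_in_list (existing, iter->data))
                add_address (iter->data);
        }
    }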
Automatic IPv6 configuration is handled by the kernel, but to
integrate it properly with NetworkManager, we need to watch what the
kernel does to see whether or not it was successful (so that we can
let the user know if there is no IPv6 router present, for example).
NMIP6Manager takes care of this.
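
"Watching what the kernel does" boils down to listening on an
rtnetlink socket for IPv6 address and prefix events. A minimal
standalone sketch with libnl-3 (not NM's actual monitor code):

    #include <netlink/netlink.h>
    #include <netlink/socket.h>
    #include <netlink/msg.h>
    #include <linux/rtnetlink.h>
    #include <stdio.h>

    static int
    event_cb (struct nl_msg *msg, void *arg)
    {
        struct nlmsghdr *hdr = nlmsg_hdr (msg);

        /* RTM_NEWADDR for the interface means autoconf produced an
         * address; RTM_NEWPREFIX carries router prefix info. No
         * events before a timeout would mean no IPv6 router. */
        printf ("rtnetlink event: type %d\n", hdr->nlmsg_type);
        return NL_OK;
    }

    int
    main (void)
    {
        struct nl_sock *sk = nl_socket_alloc ();

        nl_socket_disable_seq_check (sk);
        nl_socket_modify_cb (sk, NL_CB_VALID, NL_CB_CUSTOM, event_cb, NULL);
        nl_connect (sk, NETLINK_ROUTE);
        nl_socket_add_memberships (sk, RTNLGRP_IPV6_IFADDR,
                                   RTNLGRP_IPV6_PREFIX, 0);

        while (1)
            nl_recvmsgs_default (sk);  /* dispatches to event_cb */

        return 0;
    }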
rtnl_addr requires that all addresses have the "peer" attribute set in
order to be compared for equality, but this attribute is not normally
set. As a result, most addresses will not compare as equal even to
themselves, busting caching. We work around this for now by poking
into the guts of libnl when a broken version is detected.
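
Conceptually the workaround just ensures the peer attribute is
present before comparisons happen; something like this sketch using
the public rtnl_addr accessors (the actual fix reaches into libnl's
internal structures):

    #include <netlink/route/addr.h>

    /* Give an address a peer attribute so rtnl_addr objects can
     * compare equal to themselves: use local as peer when no peer
     * is set. */
    void
    fixup_addr_peer (struct rtnl_addr *addr)
    {
        struct nl_addr *local;

        if (rtnl_addr_get_peer (addr))
            return;  /* peer already present */

        local = rtnl_addr_get_local (addr);
        if (local)
            rtnl_addr_set_peer (addr, local);
    }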
Since the new PolicyKit does away with easy checking of authorizations,
we get to implement it ourselves, but that's OK since we can actually
use it for a lot more stuff. So add the GetPermissions call which returns
the permissions the caller actually has, and a signal informing callers
that their permissions might have changed. Hook this all up to
PolicyKit so it's useful.
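
The important part is that the PolicyKit check is asynchronous. A
sketch with the polkit GObject API; the action name and the way
results are collected are illustrative, not the exact code:

    #include <polkit/polkit.h>

    static void
    check_cb (GObject *object, GAsyncResult *result, gpointer user_data)
    {
        PolkitAuthorizationResult *res;
        GError *error = NULL;

        res = polkit_authority_check_authorization_finish (POLKIT_AUTHORITY (object),
                                                           result, &error);
        if (res) {
            if (polkit_authorization_result_get_is_authorized (res))
                g_print ("caller is authorized\n");  /* add to the reply */
            g_object_unref (res);
        }
        g_clear_error (&error);
    }

    void
    get_permissions (PolkitAuthority *authority, const char *dbus_sender)
    {
        PolkitSubject *subject = polkit_system_bus_name_new (dbus_sender);

        /* Async on purpose: PolicyKit may block on user interaction,
         * so never check authorizations synchronously in the daemon. */
        polkit_authority_check_authorization (authority, subject,
                                              "org.example.permission",  /* illustrative */
                                              NULL,
                                              POLKIT_CHECK_AUTHORIZATION_FLAGS_NONE,
                                              NULL, check_cb, NULL);
        g_object_unref (subject);
    }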
With the addition of IPv6, both v4 and v6 configuration are run in
parallel, and when both have finished, then activation can proceed.
Unfortunately, two of the three users of PPP (PPPoE and 3G) ran PPP at
stage2, and when the PPP IPv4 config was received, jumped directly
to activation stage4. That caused the IPv6 code never to run, and
thus we hung at stage4 waiting for it to complete when nothing had
started it in the first place.
Instead, move PPP to stage3 so that
nm_device_activate_stage3_ip_config_start() can kick off both v4
and v6 IP code and we can successfully complete IP configuration
in all cases. PPP previously being in stage2 was an artifact of
the more simplistic pre-IPv6 configuration code where it didn't
matter if you skipped stage3.
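
The resulting flow, reduced to a sketch (the function and struct
names here are illustrative, not NM's actual ones):

    #include <stdbool.h>
    #include <stdio.h>

    /* Stage3 starts both address families; stage4 is entered only
     * when each one has reported completion. */
    typedef struct {
        bool ip4_done;
        bool ip6_done;
    } ActivationState;

    static void
    maybe_enter_stage4 (ActivationState *state)
    {
        if (state->ip4_done && state->ip6_done)
            printf ("both families configured; entering stage4\n");
    }

    static void
    stage3_ip_config_start (ActivationState *state)
    {
        /* PPP (PPPoE and 3G) now starts here rather than in stage2,
         * so its IPv4 config arrives through the same path and IPv6
         * still gets kicked off. */
        printf ("starting IPv4 configuration (DHCP/PPP/static)\n");
        printf ("starting IPv6 configuration (autoconf)\n");
    }

    int
    main (void)
    {
        ActivationState state = { false, false };

        stage3_ip_config_start (&state);

        /* Completion callbacks would normally fire asynchronously. */
        state.ip4_done = true;  maybe_enter_stage4 (&state);
        state.ip6_done = true;  maybe_enter_stage4 (&state);
        return 0;
    }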
instead of an NMSettingsConnectionInterface, because we won't always have an
object that implements NMSettingsConnectionInterface. Plus, since NMConnection
is a prerequisite of NMSettingsConnectionInterface, the NMConnection will
always be there anyway.
Make NMSettingsService implement most of the NMSettingsInterface
API to make subclasses simpler, and consolidate exporting of
NMExportedConnection subclasses in NMSettingsService instead of
in 3 places. Make NMSysconfigSettings a subclass of
NMSettingsService and save a ton of code.
Mark activation requests that contain connections to be assumed,
and use that to short-circuit parts of the activation process by
not touching device attributes that are already set up. Also ensure
the device is not deactivated when it initially becomes managed,
because that would kill the connection we are about to assume.
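
In sketch form (illustrative names; the real checks are spread
across the activation stages):

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative type only; the real activation request and
     * device structures are NM internals. */
    typedef struct {
        bool assumed;  /* connection is assumed, not newly set up */
    } ActRequest;

    void
    configure_device (ActRequest *req)
    {
        if (req->assumed) {
            /* The interface is already configured and carrying
             * traffic: leave addresses, routes, and link state
             * alone, and skip the deactivation normally done when
             * a device first becomes managed. */
            printf ("assuming existing connection; not touching device\n");
            return;
        }
        printf ("normal activation: bring up link, run IP config\n");
    }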
The old NMExportedConnection was used for both client and server-side classes,
which was a mistake and made the code very complicated to follow. Additionally,
all PolicyKit operations were synchronous, and PK operations can block for a
long time (e.g. for user input) before returning, so they need to be async. But
NMExportedConnection and NMSysconfigConnection didn't allow for async PK ops
at all.
Use this opportunity to clean up the mess and create GInterfaces that both
server and client objects implement, so that the connection editor and applet
can operate on generic objects like they did before (using the interfaces) but
can perform specific operations (like async PK verification of callers) depending
on whether they are local or remote or whatever.
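
The shape of that split is a plain GObject interface with async
methods. A minimal sketch of the pattern with hypothetical names
(the real declarations differ):

    #include <glib-object.h>

    #define EXAMPLE_TYPE_CONNECTION example_connection_get_type ()
    G_DECLARE_INTERFACE (ExampleConnection, example_connection,
                         EXAMPLE, CONNECTION, GObject)

    typedef void (*ExampleConnectionUpdateFn) (ExampleConnection *self,
                                               GError *error,
                                               gpointer user_data);

    struct _ExampleConnectionInterface {
        GTypeInterface parent_iface;

        /* Async on purpose: a server-side implementation may wait
         * on a PolicyKit authorization before calling back, while
         * a client-side proxy round-trips over D-Bus. */
        void (*update) (ExampleConnection *self,
                        ExampleConnectionUpdateFn callback,
                        gpointer user_data);
    };

    G_DEFINE_INTERFACE (ExampleConnection, example_connection, G_TYPE_OBJECT)

    static void
    example_connection_default_init (ExampleConnectionInterface *iface)
    {
        /* signals and properties shared by client and server */
    }

    void
    example_connection_update (ExampleConnection *self,
                               ExampleConnectionUpdateFn callback,
                               gpointer user_data)
    {
        g_return_if_fail (EXAMPLE_IS_CONNECTION (self));
        EXAMPLE_CONNECTION_GET_IFACE (self)->update (self, callback, user_data);
    }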
This allows a device (or a companion) to signal that it is not a good
time for a specific device to autoconnect to a network.
The OLPC mesh device will use this to prevent automatic connection
to WLAN networks while the mesh device is active.
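
One way to express "not a good time" is a boolean veto signal whose
accumulator ANDs the handler returns; this sketch uses hypothetical
names and shows the pattern rather than the exact code:

    #include <glib-object.h>

    #define EX_TYPE_DEVICE ex_device_get_type ()
    G_DECLARE_FINAL_TYPE (ExDevice, ex_device, EX, DEVICE, GObject)

    struct _ExDevice {
        GObject parent;
    };

    G_DEFINE_TYPE (ExDevice, ex_device, G_TYPE_OBJECT)

    static guint autoconnect_allowed_signal;

    /* One FALSE from any handler vetoes autoconnect; keep calling
     * the remaining handlers either way. */
    static gboolean
    autoconnect_allowed_accumulator (GSignalInvocationHint *ihint,
                                     GValue *return_accu,
                                     const GValue *handler_return,
                                     gpointer data)
    {
        if (!g_value_get_boolean (handler_return))
            g_value_set_boolean (return_accu, FALSE);
        return TRUE;
    }

    static void
    ex_device_class_init (ExDeviceClass *klass)
    {
        autoconnect_allowed_signal =
            g_signal_new ("autoconnect-allowed",
                          G_OBJECT_CLASS_TYPE (klass),
                          G_SIGNAL_RUN_LAST,
                          0, autoconnect_allowed_accumulator, NULL,
                          NULL /* generic marshaller */,
                          G_TYPE_BOOLEAN, 0);
    }

    static void
    ex_device_init (ExDevice *self)
    {
    }

    /* Ask before autoconnecting; e.g. a mesh device's handler would
     * return FALSE while the mesh is active. */
    gboolean
    ex_device_autoconnect_allowed (ExDevice *device)
    {
        GValue instance = G_VALUE_INIT;
        GValue retval = G_VALUE_INIT;

        g_value_init (&instance, EX_TYPE_DEVICE);
        g_value_set_object (&instance, device);

        /* Default TRUE: g_signal_emitv() leaves the return value
         * untouched when no handlers are connected. */
        g_value_init (&retval, G_TYPE_BOOLEAN);
        g_value_set_boolean (&retval, TRUE);

        g_signal_emitv (&instance, autoconnect_allowed_signal, 0, &retval);
        g_value_unset (&instance);

        return g_value_get_boolean (&retval);
    }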
Like the OLPC mesh interface, which uses the same actual MAC & radio
as the OLPC wifi device, and thus when mesh is active the wifi
shouldn't be scanning.