1fd11bd8d1 consolidated VPN connection
state handling, but that caused vpn_cleanup() to be called
after all other handlers had processed the VPN connection
state change. This meant that the code in vpn_cleanup() that
reapplies the parent device's IP configs ran last, and that code
flushes routes on the device before reapplying them. Since the
policy is a listener on the VPN state change signals, it was
running the default routing updates before vpn_cleanup() got run,
resulting in vpn_cleanup()'s calls to nm_system_apply_ip4_config()
and nm_system_apply_ip6_config() blowing away the default route
that the policy had just set.
Fix that by moving the VPN routing cleanups into the policy, where
most of the routing decisions currently live, so that they run
before the default route is fixed up.
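A minimal sketch of the new shape, assuming hypothetical helper names
(the actual functions in nm-policy.c differ):

    /* Sketch: the policy listens for VPN state changes and runs the
     * routing cleanup itself, before recomputing the default route,
     * so the cleanup can no longer clobber the route it just set. */
    static void
    vpn_connection_state_changed (NMVPNConnection *vpn,
                                  NMVPNConnectionState new_state,
                                  NMVPNConnectionState old_state,
                                  NMVPNConnectionStateReason reason,
                                  gpointer user_data)
    {
        NMPolicy *policy = NM_POLICY (user_data);

        if (new_state == NM_VPN_CONNECTION_STATE_DISCONNECTED ||
            new_state == NM_VPN_CONNECTION_STATE_FAILED) {
            reapply_parent_device_config (vpn);   /* hypothetical */
            update_default_route (policy);        /* hypothetical */
        }
    }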
The patches that reduced the frequency of changes to /etc/resolv.conf
failed to prefer the VPN DNS information. Even though a VPN may not
be allowed to receive the default route, its DNS information still
needs to be higher priority than interface DNS info, otherwise no
sites on the VPN will be accessible due to glibc's in-order querying
of entries in /etc/resolv.conf.
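For example, assuming a VPN nameserver of 10.8.0.1 and a local interface
nameserver of 192.168.1.1 (both addresses made up), the written
resolv.conf needs to look like:

    # VPN DNS first, or names only resolvable over the VPN never resolve
    nameserver 10.8.0.1
    # parent interface DNS second
    nameserver 192.168.1.1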
Consolidate all the DNS handling in the policy instead of sprinkling
it around in the device and vpn code. This allows us to batch the
updates and thus reduce the number of times resolv.conf needs to
be written. It's also easier to follow when and why the changes
occur.
Routing and DNS won't always be updated together; if the hostname
changes we don't need to update routing, and if new routes show up
we don't need to update DNS. This also makes it much clearer what's
going on in the routing and DNS update functions.
Bug #676317 describes the following error:
NetworkManager[30151]: <error> [1337348764.559121] [nm-system.c:1121]
nm_system_replace_default_ip6_route(): (eth1): failed to set IPv6 default
route: -7
The above error is caused by NetworkManager assuming that default
gateways belong to addresses, while failing to set up default gateways
for addresses learned through DHCPv6.
This commit doesn't fix the fundamental flaw of binding gateways to
IP addresses; it can be viewed as an ugly workaround that gets the
IPv6 connection up and running. Gateways and addresses are configured
separately in IPv6, and NM should use lifetimes and allow default
gateway reconfiguration.
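The shape of the workaround, as a sketch with hypothetical helper names:
if installing the default route fails because the gateway is not on-link
(DHCPv6 assigns addresses without prefix routes), add a direct host route
to the gateway and retry.

    /* Sketch only; the real code in nm-system.c differs. */
    if (replace_default_ip6_route (ifindex, gw) != 0) {   /* hypothetical */
        /* Gateway not reachable yet: add a /128 device route to it. */
        add_ip6_device_route (ifindex, gw, 128);          /* hypothetical */
        replace_default_ip6_route (ifindex, gw);          /* retry */
    }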
Add new API to allow passing both IPv4 and IPv6 configuration
information from VPN plugins to the backend.
Now instead of a single Ip4Config, a plugin has Config, Ip4Config, and
Ip6Config. "Config" contains information which is neither IPv4 nor
IPv6 specific, and also indicates which of Ip4Config and Ip6Config are
present. Ip4Config now only contains the IPv4-specific bits of
configuration.
There is backward compatibility in both directions: if the daemon is
new and the VPN plugin is old, then NM will notice that the plugin
emitted the Ip4Config signal without having emitted the Config signal
first, and so will assume that it is IPv4-only, and that the generic
bits of configuration have been included with the Ip4Config. If the
daemon is old and the plugin is new, then NMVPNPlugin will copy the
values from the generic config into the IPv4 config as well. (In fact,
NMVPNPlugin *always* does this, because it's harmless, and it's easier
than actually checking the daemon version.)
Currently the VPN is still configured all-at-once, after both IPv4 and
IPv6 information has been received, but the APIs allow for the
possibility of configuring them one at a time in the future.
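A sketch of the daemon-side legacy detection, with hypothetical field and
helper names:

    /* Sketch: an Ip4Config signal with no preceding Config signal means
     * an old IPv4-only plugin whose generic settings are embedded in
     * the Ip4Config dict. */
    static void
    plugin_ip4_config_received (NMVPNConnection *vpn, GHashTable *config)
    {
        NMVPNConnectionPrivate *priv = NM_VPN_CONNECTION_GET_PRIVATE (vpn);

        if (!priv->generic_config_received) {
            extract_generic_config (vpn, config);   /* hypothetical */
            priv->has_ip6 = FALSE;                  /* IPv4-only plugin */
        }
        apply_ip4_config (vpn, config);             /* hypothetical */
    }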
The new 'Autoconnect' property is bound to the private
autoconnect_inhibit variable (with the opposite meaning).
While 'Autoconnect' is TRUE (default value) the device can automatically
activate a connection. If it is changed to FALSE, the device will not
auto-activate until 'Autoconnect' is TRUE again.
The Disconnect() method sets 'Autoconnect' to FALSE. NMPolicy monitors
the property and schedules auto-activation when a FALSE->TRUE transition
occurs.
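A minimal sketch of the property glue, assuming the usual GObject
boilerplate around it; the inversion against autoconnect_inhibit lives
in get_property()/set_property():

    g_object_class_install_property
        (object_class, PROP_AUTOCONNECT,
         g_param_spec_boolean ("autoconnect", "Autoconnect",
                               "May auto-activate connections",
                               TRUE, G_PARAM_READWRITE));

    /* in get_property(): */
    case PROP_AUTOCONNECT:
        g_value_set_boolean (value, !priv->autoconnect_inhibit);
        break;

    /* in set_property(): */
    case PROP_AUTOCONNECT:
        priv->autoconnect_inhibit = !g_value_get_boolean (value);
        break;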
We need to set the interface's firewall zone before we kick off
any sort of IP configuration, so that rules for stuff like
DHCP are already handled by the time that these services are started.
When we want to change the zone an interface belongs to,
we can't use firewalld's addInterface(), because it doesn't
allow adding an interface to a zone when the interface is
already part of some other (or the same) zone.
We need to use the changeZone() method instead; hopefully
this is the final name of this method.
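A sketch of the call, written with GDBus for brevity and assuming a
(zone, interface) signature, which was still in flux at the time:

    /* Sketch: move an interface into a zone with changeZone(), since
     * addInterface() refuses interfaces that already belong to a zone. */
    g_dbus_proxy_call (firewalld_proxy,             /* hypothetical proxy */
                       "changeZone",
                       g_variant_new ("(ss)", zone, iface),
                       G_DBUS_CALL_FLAGS_NONE, -1, NULL,
                       change_zone_done, self);     /* hypothetical cb */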
We already have the master device kept in the active connection, so
we can just use that instead of having the Policy determine and set
it manually. This should also allow slaves to auto-activate their
master connections if the master is able to activate.
That was always the goal, but we never got there. This time we need
it for real, to abstract the handling of dependent connections, so
bite the bullet and make it happen.
Reset the auto-activation retries of all slave connections when their
master connection enters the PREPARE state, and schedule all of the
slaves for activation if they are not pending yet.
Slaves are initially scheduled for activation together with their
master but depending on how long it takes for the master
connection to appear the slave activation requests may already
have run out of attempts. Resetting the retries counter ensures
that all slaves are properly activated when a master is brought up.
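Roughly, with hypothetical helper names:

    static void
    master_entered_prepare (NMPolicy *policy, NMDevice *master)
    {
        GSList *iter;

        for (iter = get_slave_connections (master); iter; iter = iter->next) {
            NMSettingsConnection *slave = iter->data;

            reset_autoconnect_retries (slave);         /* hypothetical */
            if (!activation_pending (policy, slave))   /* hypothetical */
                schedule_activation (policy, slave);   /* hypothetical */
        }
    }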
Signed-off-by: Thomas Graf <tgraf@redhat.com>
Currently slaves only wait for the master device to be present. This is
insufficient; we want to wait for the master connection to be activated.
Signed-off-by: Thomas Graf <tgraf@redhat.com>
For a slave to be activatable, the master connection must be present.
Activation of the slave is postponed until this condition is met.
Once the slave is being activated, a reference to the master connection
is acquired and held for the lifetime of the bond.
Changes v2:
- Made check_master_dependency() return TRUE/FALSE
Signed-off-by: Thomas Graf <tgraf@redhat.com>
This commit fixes a regression introduced by commit
6272052f9d.
dhclient prefixes options with "new_"; however, we remove that prefix
before putting the options into NMDHCP4Config.
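The stripping itself is a few lines inside the option-parsing loop;
a sketch:

    /* dhclient exports e.g. "new_ip_address"; NMDHCP4Config stores it
     * under "ip_address". */
    const char *name = key;

    if (g_str_has_prefix (name, "new_"))
        name += strlen ("new_");
    nm_dhcp4_config_add_option (config, name, value);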
Normally, a device disabled via nm_device_interface_set_enabled() will
shift into the UNAVAILABLE state. Modems, however, don't do that.
Rather, they pretend that they are in the DISCONNECTED state, presumably
to make it easier to re-enable them. To avoid accidentally re-enabling
and autoconnecting a disabled modem, we need to explicitly make
nm_device_interface_get_enabled() == TRUE a prerequisite for
autoconnecting.
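The check amounts to something like:

    /* Never consider a disabled device for auto-activation, even if it
     * reports DISCONNECTED (as modems do). */
    if (!nm_device_interface_get_enabled (NM_DEVICE_INTERFACE (device)))
        return FALSE;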
If a shared wifi connection is restricted to a certain set of users
and none of those users have authorization to start shared wifi
connections, don't auto-start the connection.
Use case:
A user has an auto-activatable connection with secrets in a keyring. While
booting, NM starts and tries to activate the connection, but it fails because
of missing secrets. Then the user logs in, but the connection is marked as
invalid and is not tried again.
This commit solves the issue by removing the invalid flag and re-activating
the connection when a secret agent registers.
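A sketch of the policy-side handler, with hypothetical helper names:

    static void
    secret_agent_registered (NMAgentManager *agent_mgr,
                             NMSecretAgent *agent,
                             gpointer user_data)
    {
        NMPolicy *policy = NM_POLICY (user_data);

        /* The new agent may hold the secrets that were missing earlier,
         * so clear the invalid flags and retry auto-activation. */
        clear_invalid_flags (policy);     /* hypothetical */
        schedule_activate_all (policy);   /* hypothetical */
    }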
Signed-off-by: Jiří Klimeš <jklimes@redhat.com>
If there is a temporary connection failure (e.g. due to an unavailable DHCP
server), the connection is marked as invalid after several retries. Reset the
flag after 5 minutes to allow the next auto-reconnection attempt.
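A sketch of the timer, with hypothetical helper names; 300 seconds matches
the 5-minute window:

    #define RESET_RETRIES_TIMEOUT 300   /* seconds */

    static gboolean
    reset_connections_retries (gpointer user_data)
    {
        NMPolicy *policy = NM_POLICY (user_data);

        /* Un-mark connections that failed more than 5 minutes ago and
         * restore their retry budget. */
        clear_expired_invalid_flags (policy);   /* hypothetical */
        return TRUE;                            /* keep the timer running */
    }

    g_timeout_add_seconds (RESET_RETRIES_TIMEOUT,
                           reset_connections_retries, policy);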
'vperic' had an interesting problem on IRC where every 10 minutes
the ethernet would change state from ACTIVATED -> DISCONNECTED with
a reason code of 0; the only thing I can find is that something was
telling NM to activate a connection periodically, because that appears
to be the only place that changes state to DISCONNECTED with a
reason code of 0. No logging; no apparent carrier changes.
So log this condition just in case we run into it later.
The retries counter was not initialized when connections were loaded. That
caused the counter to start from -1 and keep decreasing on connection
failures, so connection attempts never stopped.
For VPN connections, the interface name would be that of the VPN's
IP interface, but the script environment would be that of the
VPN's parent device. Enhance the environment by adding any VPN-specific
details as additional environment variables prefixed with
"VPN_". Leave the existing environment setup intact for backwards
compatibility.
Additionally, the dispatcher never got updated for IPv6 support,
so push IPv6 configuration and DHCPv6 configuration into the
environment too.
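For example, where the parent device's configuration is exported
unprefixed, the VPN's own configuration gets the "VPN_" prefix,
alongside the new IPv6 variables (names and addresses illustrative):

    # parent device config, unprefixed (backwards compatible)
    IP4_ADDRESS_0=192.0.2.10/24 192.0.2.1
    IP6_ADDRESS_0=2001:db8::10/64 2001:db8::1
    # the VPN's own config, carried with the VPN_ prefix
    VPN_IP_IFACE=tun0
    VPN_IP4_ADDRESS_0=10.8.0.5/24 10.8.0.1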
Even better, push everything the dispatcher needs to it, instead
of making the dispatcher issue D-Bus requests back to NM, which
sometimes fail if NM has already torn down the device or the
connection the device was using.
Also add some testcases to ensure that we don't break backwards compat;
the testcases here were grabbed from a 0.8.4 machine with a hacked-up
dispatcher that dumped everything it was given from NM.