Add new API to allow passing both IPv4 and IPv6 configuration
information from VPN plugins to the backend.
Now instead of a single Ip4Config, a plugin has Config, Ip4Config, and
Ip6Config. "Config" contains information which is neither IPv4 nor
IPv6 specific, and also indicates which of Ip4Config and Ip6Config are
present. Ip4Config now only contains the IPv4-specific bits of
configuration.
There is backward compatibility in both directions: if the daemon is
new and the VPN plugin is old, then NM will notice that the plugin
emitted the Ip4Config signal without having emitted the Config signal
first, and so will assume that it is IPv4-only, and that the generic
bits of configuration have been included with the Ip4Config. If the
daemon is old and the plugin is new, then NMVPNPlugin will copy the
values from the generic config into the IPv4 config as well. (In fact,
NMVPNPlugin *always* does this, because it's harmless, and it's easier
than actually checking the daemon version.)
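In NMVPNPlugin the copy-down step amounts to something like the sketch below, assuming dbus-glib-style string-to-GValue hash tables; the helper name and details are illustrative, not the actual code:

  #include <glib.h>

  /* Copy any generic (non-IP-specific) items into the IPv4 config so
   * that an old daemon, which only looks at Ip4Config, still sees the
   * complete configuration.  Keys are D-Bus property names; the values
   * remain owned by the generic table. */
  static void
  copy_generic_into_ip4 (GHashTable *config, GHashTable *ip4_config)
  {
      GHashTableIter iter;
      gpointer key, value;

      g_hash_table_iter_init (&iter, config);
      while (g_hash_table_iter_next (&iter, &key, &value)) {
          if (!g_hash_table_lookup (ip4_config, key))
              g_hash_table_insert (ip4_config, g_strdup (key), value);
      }
  }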
Currently the VPN is still configured all-at-once, after both IPv4 and
IPv6 information has been received, but the APIs allow for the
possibility of configuring them one at a time in the future.
Even if a VPN is only tunneling IPv4, you might still be connected to
the tunnel endpoint via IPv6. Allow
NM_VPN_PLUGIN_IP4_CONFIG_EXT_GATEWAY to be either an IPv4 or an IPv6
address, and set up an appropriate static route either way.
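A hedged sketch of the family check (assuming, for illustration only, that the gateway arrives as a string; add_route4()/add_route6() are placeholders for NM's actual route setup):

  #include <arpa/inet.h>
  #include <netinet/in.h>

  extern void add_route4 (const struct in_addr  *gw);
  extern void add_route6 (const struct in6_addr *gw);

  /* Decide whether the external gateway is IPv4 or IPv6, then add a
   * host route to the tunnel endpoint via the parent device either way. */
  static void
  setup_endpoint_route (const char *gw_str)
  {
      struct in_addr  a4;
      struct in6_addr a6;

      if (inet_pton (AF_INET, gw_str, &a4) == 1)
          add_route4 (&a4);
      else if (inet_pton (AF_INET6, gw_str, &a6) == 1)
          add_route6 (&a6);
  }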
Commit 217c5bf6ac fixed the processing of Unix
signals: signals are blocked in all threads and a dedicated thread handles the
signals using sigwait().
However, that commit forgot that child processes inherit the signal mask as
well. That is why we have to unblock signals for the child processes we spawn
from NM, so that they can receive signals.
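A minimal sketch of the fix, assuming a GSpawnChildSetupFunc passed to g_spawn_async(); the function name is illustrative:

  #include <glib.h>
  #include <signal.h>

  /* Runs in the child between fork() and exec(): clear the inherited
   * all-signals-blocked mask so the spawned process can receive
   * signals normally. */
  static void
  unblock_signals_child_setup (gpointer user_data)
  {
      sigset_t mask;

      sigemptyset (&mask);
      sigprocmask (SIG_SETMASK, &mask, NULL);
  }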
When NM was registering all of its enum types by hand, it was using
NamesLikeThis rather than the default names-like-this for the "nick"
values. When we switched to using glib-mkenums, this resulted in
dbus-glib using different strings for the D-Bus error names, causing
compatibility problems.
Fix this by using glib-mkenums annotations to manually fix all the
enum values back to what they were before. (This can't be done in a
more automated way, because the old names aren't 100% consistent. E.g.,
"UNKNOWN" frequently becomes "UnknownError" rather than just
"Unknown".)
We already have the master device kept in the active connection, so
we can just use that instead of having the Policy determine and set
it manually. This should also allow slaves to auto-activate their
master connections if the master is able to activate.
NMActiveConnection is the base class that tracks active connections, and
we're going to use it for connection dependencies. So take advantage of
the fact that both NMVPNConnection and NMActRequest share that base class
instead of using object paths.
Rather than generating enum classes by hand (and complaining in each
file that "this should really be standard"), use glib-mkenums.
Unfortunately, we need a very new version of glib-mkenums in order to
deal with NM's naming conventions and to fix a few other bugs, so just
import that into the source tree temporarily.
Also, to simplify the use of glib-mkenums, import Makefile.glib from
https://bugzilla.gnome.org/654395.
To avoid having to run glib-mkenums for every subdirectory of src/,
add a new "generated" directory, and put the generated enums files
there.
Finally, use Makefile.glib for marshallers too, and generate separate
ones for libnm-glib and NetworkManager.
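For reference, each generated pair follows the standard glib-mkenums shape, roughly like this (illustrative of the output, not the exact NM files):

  /* generated/nm-enum-types.h (sketch) */
  #include <glib-object.h>

  GType nm_device_state_get_type (void) G_GNUC_CONST;
  #define NM_TYPE_DEVICE_STATE (nm_device_state_get_type ())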
That was always the goal, but we never got there. This time we need it
for real, to abstract the handling of dependent connections, so bite the
bullet and make it happen.
Adds a new "master" property to NMActiveConnection containing the path
of the master NMDevice if the connection has a master.
Signed-off-by: Thomas Graf <tgraf@redhat.com>
Active VPN connections exported their own active path instead of the active
path of the base connection in the 'SpecificObject' property. This is a
regression caused by commit bc6fc7b910, which split VPN connections into
NMVPNConnectionBase and NMVPNConnection.
Previously, the specific object was obtained from the NMActRequest of the
parent connection. The NMActRequest object also served for getting secrets.
Commits 0e6a5365d4 and 832e64f8bc
removed NMActRequest from the VPN connection because it's no longer necessary.
This commit fixes the issue by passing specific object path explicitly.
The core problem was the nm_connection_need_secrets() call in
nm-agent-manager.c's get_start() function; for VPN settings this
always returns TRUE. Thus if a VPN connection had only system
secrets, when the agent manager checked if additional secrets
were required, they would be, and agents would be asked for
secrets they didn't have and couldn't provide. Thus the
connection would fail. nm_connection_need_secrets() simply
can't know if VPN secrets are really required because it
doesn't know anything about the internal VPN private data;
only the plugin itself can tell us if secrets are required.
If the system secrets are sufficient we shouldn't be asking any
agents for secrets at all. So implement a three-step secrets
path for VPN connections. First we retrieve existing system
secrets, and ask the plugin if these are sufficient. Second we
request both existing system secrets and existing agent secrets
and again ask the plugin if these are sufficient. If both those
fail, we ask agents for new secrets.
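In rough C pseudocode, the three-step flow looks something like this; every name here is a stand-in, not the real NM function:

  #include <glib.h>

  typedef struct Secrets Secrets;

  extern Secrets  *get_system_secrets      (void);
  extern Secrets  *add_saved_agent_secrets (Secrets *secrets);
  extern gboolean  plugin_needs_more       (Secrets *secrets);
  extern Secrets  *ask_agents_for_new      (void);

  static Secrets *
  get_vpn_secrets (void)
  {
      Secrets *secrets = get_system_secrets ();

      /* Step 1: are system secrets alone sufficient? */
      if (!plugin_needs_more (secrets))
          return secrets;

      /* Step 2: system plus existing agent secrets? */
      secrets = add_saved_agent_secrets (secrets);
      if (!plugin_needs_more (secrets))
          return secrets;

      /* Step 3: fall back to asking agents for new secrets. */
      return ask_agents_for_new ();
  }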
For VPN connections, the interface name would be that of the VPN's
IP interface, but the script environment would be that of the
VPN's parent device. Enhance the environment by adding any VPN-specific
details as additional environment variables prefixed with
"VPN_". Leave the existing environment setup intact for backwards
compatibility.
Additionally, the dispatcher never got updated for IPv6 support,
so push IPv6 configuration and DHCPv6 configuration into the
environment too.
Even better, push everything the dispatcher needs to it, instead
of making the dispatcher issue D-Bus requests back to NM, which
sometimes fail if NM has already torn down the device or the
connection which the device was using.
And add some testcases to ensure that we don't break backwards compat;
the testcases here were grabbed from a 0.8.4 machine with a hacked-up
dispatcher that dumped everything it was given from NM.
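For instance, a helper run by a dispatcher script could read the tunnel's details straight from the environment; the exact variable names here are assumptions based on the "VPN_" prefix described above:

  #include <stdio.h>
  #include <stdlib.h>

  int
  main (void)
  {
      /* VPN-specific values carry the prefix so they can't clash with
       * the parent device's variables. */
      const char *vpn_iface = getenv ("VPN_IP_IFACE");
      const char *vpn_addr  = getenv ("VPN_IP4_ADDRESS_0");

      if (vpn_iface)
          printf ("VPN interface %s, first address %s\n",
                  vpn_iface, vpn_addr ? vpn_addr : "(none)");
      return 0;
  }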
A convenience so that clients which might key certain operations off
which connections are active (checking work mail only when on VPN, for
example) can more easily do so. This would allow those apps to store
the UUID (which they would already be doing) and not have to create a
Connection proxy and then fetch the connection's properties just to
retrieve its UUID. Instead they can now get it from GetAll of the
ActiveConnection object, which they would already be doing.
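A client-side sketch using dbus-glib; the "Uuid" property name and the helper's shape are assumptions for illustration:

  #include <dbus/dbus-glib.h>

  /* Fetch all ActiveConnection properties in one round trip and pull
   * out the connection's UUID (caller frees the returned string). */
  static char *
  get_active_uuid (DBusGConnection *bus, const char *ac_path)
  {
      DBusGProxy *proxy;
      GHashTable *props = NULL;
      GValue *val;
      char *uuid = NULL;

      proxy = dbus_g_proxy_new_for_name (bus,
                                         "org.freedesktop.NetworkManager",
                                         ac_path,
                                         "org.freedesktop.DBus.Properties");
      if (dbus_g_proxy_call (proxy, "GetAll", NULL,
                             G_TYPE_STRING,
                             "org.freedesktop.NetworkManager.Connection.Active",
                             G_TYPE_INVALID,
                             dbus_g_type_get_map ("GHashTable",
                                                  G_TYPE_STRING, G_TYPE_VALUE),
                             &props,
                             G_TYPE_INVALID)) {
          val = g_hash_table_lookup (props, "Uuid");
          if (val)
              uuid = g_value_dup_string (val);
          g_hash_table_destroy (props);
      }
      g_object_unref (proxy);
      return uuid;
  }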
Two problems here:
1) Code that called nm_vpn_service_get_active_connections() wasn't
freeing the returned list, leaking it.
2) There's no real reason to reference each item in the returned list in
nm_vpn_manager_get_active_connections(); it just makes it easier to
forget to unref things later.
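After the fix, the usage pattern is simply this (a sketch; do_something() and the NMVPNService type usage are stand-ins):

  static void
  handle_active_connections (NMVPNService *service)
  {
      GSList *list, *iter;

      list = nm_vpn_service_get_active_connections (service);
      for (iter = list; iter; iter = iter->next)
          do_something (iter->data);

      /* Free the list cells only; the items were never ref'd for us. */
      g_slist_free (list);
  }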
We're already connected; we shouldn't need secrets again, but if
we do, we'll ask for them again. Fixes an issue where reconnecting
would use a stale one-time password.
It's the thing that owns the secrets anyway, and it simplifies things to
have the secrets handling there instead of half in NMActRequest and
half in NMManager. It also means we can get rid of the ugly signals
that NMSettingsConnection had to emit to get agents' secrets, and
we can consolidate the requests for the persistent secrets that the
NMSettingsConnection owned into NMSettingsConnection itself instead
of also in NMAgentManager.
Since the NMActRequest and the NMVPNConnection classes already tracked
the underlying NMSettingsConnection representing the activation, it's
trivial to just have them ask the NMSettingsConnection for secrets
instead of talking to the NMAgentManager. Thus, only the
NMSettingsConnection now has to know about the agent manager, and it
presents a cleaner interface to other objects further up the chain,
instead of having bits of the secrets request splattered around the
activation request, the VPN connection, the NMManager, etc.
When a user makes an explicit request for secrets via GetSecrets
or activates a device, don't ask other users' agents for secrets.
Restrict secrets request to agents owned by the user that made the
initial activate or GetSecrets request.
Automatic activations still request secrets from any available agent.
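The check boils down to something like this (illustrative only, not the actual NM code):

  #include <glib.h>
  #include <sys/types.h>

  /* For an explicit user request, only agents registered by that same
   * user may be asked; automatic activations may still use any agent. */
  static gboolean
  agent_allowed (gboolean user_requested, uid_t request_uid, uid_t agent_uid)
  {
      if (!user_requested)
          return TRUE;
      return agent_uid == request_uid;
  }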
Simplifies code internally, and makes it easier for clients as well in
some cases where they want to control what ends up in the resulting
hash and what does not.
Due to limitations in dbus-glib, where one GObject cannot have more
than one introspection XML object attached to it, we used to include
more than one <interface> in the VPNConnection object introspection
XML. This was suboptimal for two reasons:
1) it duplicated the Connection.Active introspection XML, which
made it harder for clients to use the introspection data in a
dynamic fashion, besides looking ugly in the docs
2) not many other programs use this feature of dbus-glib, which
means it didn't get a lot of testing, and broke, which sucks
for NM.
To fix this issue, create a base class for NMVpnConnection that
handles the Connection.Active API, and make NMVpnConnection itself
handle just the VPN pieces that it layers on top. This makes
dbus-glib happy because we aren't using two <interface> blocks
in the same introspection XML, and it makes the NM code more
robust because we can re-use the existing Connection.Active
introspection XML in the NMVpnConnectionBase class.
Filter registered agents for each secrets request to ensure that the
connection for which secrets are requested is visible to that agent,
and add that agent to the queue. Ask each agent in the queue until
one returns usable secrets. Ensure that if new agents register
or existing agents quit during the secrets request, the queue
is updated accordingly, and ensure that an agent which has already
been asked for secrets, then unregisters and re-registers before the
secrets request is complete, isn't asked for secrets twice.
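One way to picture the don't-ask-twice guard (names purely illustrative), keyed on something stable across a re-register such as the agent's D-Bus sender:

  #include <glib.h>

  /* Returns TRUE if the agent should be asked now, and records it so
   * that a quick unregister/re-register during the same request can't
   * earn it a second ask. */
  static gboolean
  should_ask_agent (GHashTable *already_asked, const char *agent_sender)
  {
      if (g_hash_table_lookup (already_asked, agent_sender))
          return FALSE;
      g_hash_table_insert (already_asked, g_strdup (agent_sender),
                           GINT_TO_POINTER (TRUE));
      return TRUE;
  }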
Instead of a bizarre mechanism of signals back to the manager
object, which used to be required because of the user/system settings
split, let each place that needs secrets request those secrets
itself. This flattens the secrets request process and simplifies
the code flow significantly.
Previously the get secrets flow was something like this:

  nm_act_request_get_secrets ()
    nm_secrets_provider_interface_get_secrets ()
      emits manager-get-secrets signal
        provider_get_secrets ()
          system_get_secrets ()
            system_get_secrets_idle_cb ()
              nm_sysconfig_connection_get_secrets ()
                system_get_secrets_reply_cb ()
                  nm_secrets_provider_interface_get_secrets_result ()
                    signal failure or success

now instead we do something like this:

  nm_agent_manager_get_secrets ()
    request_start_secrets ()
      nm_sysconfig_connection_get_secrets ()
        return failure or success to callback