// SPDX-License-Identifier: GPL-2.0+
/* NetworkManager -- Network link manager
 *
 * Copyright (C) 2004 - 2017 Red Hat, Inc.
 * Copyright (C) 2005 - 2008 Novell, Inc.
 */

#include "nm-default.h"

#include <getopt.h>
#include <locale.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/resource.h>

#include "main-utils.h"
#include "nm-dbus-interface.h"
#include "NetworkManagerUtils.h"
#include "nm-manager.h"
#include "platform/nm-linux-platform.h"
#include "nm-dbus-manager.h"
#include "devices/nm-device.h"
#include "dhcp/nm-dhcp-manager.h"
#include "nm-config.h"
#include "nm-session-monitor.h"
#include "nm-dispatcher.h"
#include "settings/nm-settings.h"
#include "nm-auth-manager.h"
#include "nm-core-internal.h"
#include "nm-dbus-object.h"
#include "nm-connectivity.h"
#include "dns/nm-dns-manager.h"
#include "systemd/nm-sd.h"
#include "nm-netns.h"

#if !defined(NM_DIST_VERSION)
#define NM_DIST_VERSION VERSION
#endif

#define NM_DEFAULT_PID_FILE            NMRUNDIR "/NetworkManager.pid"

#define CONFIG_ATOMIC_SECTION_PREFIXES ((char **) NULL)

static GMainLoop *main_loop = NULL;
static gboolean configure_and_quit = FALSE;

static struct {
    gboolean show_version;
    gboolean print_config;
    gboolean become_daemon;
    gboolean g_fatal_warnings;
    gboolean run_from_build_dir;
    char *opt_log_level;
    char *opt_log_domains;
    char *pidfile;
} global_opt = {
    .become_daemon = TRUE,
};

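/* Make all GLib warnings and criticals fatal. This backs both the
 * --g-fatal-warnings command line option and the "fatal-warnings"
 * NM_DEBUG key handled in _init_nm_debug() below. */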
static void
_set_g_fatal_warnings (void)
{
    GLogLevelFlags fatal_mask;

    fatal_mask = g_log_set_always_fatal (G_LOG_FATAL_MASK);
    fatal_mask |= G_LOG_LEVEL_WARNING | G_LOG_LEVEL_CRITICAL;
    g_log_set_always_fatal (fatal_mask);
}

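/* Parse debug flags from the NM_DEBUG environment variable and from the
 * debug key in the [main] section of the configuration; both use the same
 * set of keys. */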
static void
_init_nm_debug (NMConfig *config)
{
    gs_free char *debug = NULL;
    enum {
        D_RLIMIT_CORE    = (1 << 0),
        D_FATAL_WARNINGS = (1 << 1),
    };
    GDebugKey keys[] = {
        { "RLIMIT_CORE",    D_RLIMIT_CORE },
        { "fatal-warnings", D_FATAL_WARNINGS },
    };
    guint flags;
    const char *env = getenv ("NM_DEBUG");

    debug = nm_config_data_get_value (nm_config_get_data_orig (config),
                                      NM_CONFIG_KEYFILE_GROUP_MAIN,
                                      NM_CONFIG_KEYFILE_KEY_MAIN_DEBUG,
                                      NM_CONFIG_GET_VALUE_NONE);

    flags  = nm_utils_parse_debug_string (env, keys, G_N_ELEMENTS (keys));
    flags |= nm_utils_parse_debug_string (debug, keys, G_N_ELEMENTS (keys));

#if ! defined (__SANITIZE_ADDRESS__)
    if (NM_FLAGS_HAS (flags, D_RLIMIT_CORE)) {
        /* only enable this, if explicitly requested, because it might
         * expose sensitive data. */
        struct rlimit limit = {
            .rlim_cur = RLIM_INFINITY,
            .rlim_max = RLIM_INFINITY,
        };

        setrlimit (RLIMIT_CORE, &limit);
    }
#endif

    if (NM_FLAGS_HAS (flags, D_FATAL_WARNINGS))
        _set_g_fatal_warnings ();
}

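/* Invoked by the signal handling set up in main-utils when SIGHUP, SIGUSR1
 * or SIGUSR2 arrives. Each signal maps to a distinct config-change cause;
 * the cause decides what the reload actually does (for example, SIGHUP
 * re-reads the configuration from disk while SIGUSR1 does not). */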
void
nm_main_config_reload (int signal)
{
    NMConfigChangeFlags reload_flags;

    switch (signal) {
    case SIGHUP:
        reload_flags = NM_CONFIG_CHANGE_CAUSE_SIGHUP;
        break;
    case SIGUSR1:
        reload_flags = NM_CONFIG_CHANGE_CAUSE_SIGUSR1;
        break;
    case SIGUSR2:
        reload_flags = NM_CONFIG_CHANGE_CAUSE_SIGUSR2;
        break;
    default:
        g_return_if_reached ();
    }

    nm_log_info (LOGD_CORE, "reload configuration (signal %s)...", strsignal (signal));

    /* The signal handler thread is only installed after
     * creating NMConfig instance, and on shut down we
     * no longer run the mainloop (to reach this point).
     *
     * Hence, a NMConfig singleton instance must always be
     * available. */
    nm_config_reload (nm_config_get (), reload_flags, TRUE);
}

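/* Handler for the NM_MANAGER_CONFIGURE_QUIT signal: in configure-and-quit
 * mode the manager emits it once startup is complete, and we leave the
 * main loop. */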
static void
manager_configure_quit (NMManager *manager, gpointer user_data)
{
    nm_log_info (LOGD_CORE, "quitting now that startup is complete");
    g_main_loop_quit (main_loop);
    configure_and_quit = TRUE;
}

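/* Implementation of --print-config: load the configuration with logging
 * disabled, dump it to stdout and return the value that main() uses as the
 * exit code. */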
static int
print_config (NMConfigCmdLineOptions *config_cli)
{
    gs_unref_object NMConfig *config = NULL;
    gs_free_error GError *error = NULL;
    NMConfigData *config_data;

    nm_logging_setup ("OFF", "ALL", NULL, NULL);

    config = nm_config_new (config_cli, CONFIG_ATOMIC_SECTION_PREFIXES, &error);
    if (config == NULL) {
        fprintf (stderr, _("Failed to read configuration: %s\n"), error->message);
        return 7;
    }

    config_data = nm_config_get_data (config);
    fprintf (stdout, "# NetworkManager configuration: %s\n", nm_config_data_get_config_description (config_data));
    nm_config_data_log (config_data, "", "", stdout);
    return 0;
}

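/* Parse the command line into global_opt, letting NMConfig register its own
 * options as well; exits the process if option parsing fails. */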
static void
do_early_setup (int *argc, char **argv[], NMConfigCmdLineOptions *config_cli)
{
    GOptionEntry options[] = {
        { "version", 'V', 0, G_OPTION_ARG_NONE, &global_opt.show_version, N_("Print NetworkManager version and exit"), NULL },
        { "no-daemon", 'n', G_OPTION_FLAG_REVERSE, G_OPTION_ARG_NONE, &global_opt.become_daemon, N_("Don't become a daemon"), NULL },
        { "log-level", 0, 0, G_OPTION_ARG_STRING, &global_opt.opt_log_level, N_("Log level: one of [%s]"), "INFO" },
        { "log-domains", 0, 0, G_OPTION_ARG_STRING, &global_opt.opt_log_domains,
          N_("Log domains separated by ',': any combination of [%s]"),
          "PLATFORM,RFKILL,WIFI" },
        { "g-fatal-warnings", 0, 0, G_OPTION_ARG_NONE, &global_opt.g_fatal_warnings, N_("Make all warnings fatal"), NULL },
        { "pid-file", 'p', 0, G_OPTION_ARG_FILENAME, &global_opt.pidfile, N_("Specify the location of a PID file"), NM_DEFAULT_PID_FILE },
        { "run-from-build-dir", 0, 0, G_OPTION_ARG_NONE, &global_opt.run_from_build_dir, "Run from build directory", NULL },
        { "print-config", 0, 0, G_OPTION_ARG_NONE, &global_opt.print_config, N_("Print NetworkManager configuration and exit"), NULL },
        { NULL }
    };

    if (!nm_main_utils_early_setup ("NetworkManager",
                                    argc,
                                    argv,
                                    options,
                                    (void (*)(gpointer, GOptionContext *)) nm_config_cmd_line_options_add_to_entries,
                                    config_cli,
                                    _("NetworkManager monitors all network connections and automatically\nchooses the best connection to use. It also allows the user to\nspecify wireless access points which wireless cards in the computer\nshould associate with.")))
        exit (1);

    global_opt.pidfile = global_opt.pidfile ?: g_strdup (NM_DEFAULT_PID_FILE);
}

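/* The daemon uses a single D-Bus connection, managed by NMDBusManager on top
 * of GDBusConnection. What gets acquired depends on the configure-and-quit
 * mode: normally we take the bus and claim the NetworkManager name, in
 * configure-and-quit mode we only take a connection (to talk to services such
 * as firewalld), and in the initrd case D-Bus is skipped entirely. */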
static gboolean
_dbus_manager_init (NMConfig *config)
{
    NMDBusManager *busmgr;
    NMConfigConfigureAndQuitType c_a_q_type;

    busmgr = nm_dbus_manager_get ();
    c_a_q_type = nm_config_get_configure_and_quit (config);

    if (c_a_q_type == NM_CONFIG_CONFIGURE_AND_QUIT_DISABLED)
        return nm_dbus_manager_acquire_bus (busmgr, TRUE);

    if (c_a_q_type == NM_CONFIG_CONFIGURE_AND_QUIT_ENABLED) {
        /* D-Bus is useless in configure and quit mode -- we're eventually dropping
         * off and potential clients would have no way of knowing whether we're
         * finished already or didn't start yet.
         *
         * But we still create a nm_dbus_manager_get_dbus_connection() D-Bus connection
         * so that we can talk to other services like firewalld. */
        return nm_dbus_manager_acquire_bus (busmgr, FALSE);
    }

    nm_assert (c_a_q_type == NM_CONFIG_CONFIGURE_AND_QUIT_INITRD);

    /* in initrd we don't have D-Bus at all. Don't even try to get the G_BUS_TYPE_SYSTEM
     * connection. And of course don't claim the D-Bus name. */
    return TRUE;
}

/*
 * main
 *
 */
int
main (int argc, char *argv[])
{
    gboolean success = FALSE;
    NMManager *manager = NULL;
    NMConfig *config;
    gs_free_error GError *error = NULL;
    gboolean wrote_pidfile = FALSE;
    char *bad_domains = NULL;
    NMConfigCmdLineOptions *config_cli;
    guint sd_id = 0;
    GError *error_invalid_logging_config = NULL;
    const char *const *warnings;
    int errsv;

    /* Known to cause a possible deadlock upon GDBus initialization:
     * https://bugzilla.gnome.org/show_bug.cgi?id=674885 */
    g_type_ensure (G_TYPE_SOCKET);
    g_type_ensure (G_TYPE_DBUS_CONNECTION);
    g_type_ensure (NM_TYPE_DBUS_MANAGER);

    _nm_utils_is_manager_process = TRUE;

    main_loop = g_main_loop_new (NULL, FALSE);

    /* we determine a first-start (contrary to a restart during the same boot)
     * based on the existence of NM_CONFIG_DEVICE_STATE_DIR directory. */
    config_cli = nm_config_cmd_line_options_new (!g_file_test (NM_CONFIG_DEVICE_STATE_DIR,
                                                               G_FILE_TEST_IS_DIR));

    do_early_setup (&argc, &argv, config_cli);

    if (global_opt.g_fatal_warnings)
        _set_g_fatal_warnings ();

    if (global_opt.show_version) {
        fprintf (stdout, NM_DIST_VERSION "\n");
        exit (0);
    }

    if (global_opt.print_config) {
        int result;

        result = print_config (config_cli);
        nm_config_cmd_line_options_free (config_cli);
        exit (result);
    }

    nm_main_utils_ensure_root ();
    nm_main_utils_ensure_not_running_pidfile (global_opt.pidfile);
    nm_main_utils_ensure_statedir ();
    nm_main_utils_ensure_rundir ();

2013-05-22 16:12:23 +02:00
/* When running from the build directory, determine our build directory
* base and set helper paths in the build tree */
2015-03-13 19:59:32 +01:00
if ( global_opt . run_from_build_dir ) {
2013-05-22 16:12:23 +02:00
char * path , * slash ;
2013-06-14 14:51:04 -04:00
int g ;
2013-05-22 16:12:23 +02:00
/* exe is <basedir>/src/.libs/lt-NetworkManager, so chop off
* the last three components */
path = realpath ( " /proc/self/exe " , NULL ) ;
g_assert ( path ! = NULL ) ;
2013-06-14 14:51:04 -04:00
for ( g = 0 ; g < 3 ; + + g ) {
2013-05-22 16:12:23 +02:00
slash = strrchr ( path , ' / ' ) ;
g_assert ( slash ! = NULL ) ;
* slash = ' \0 ' ;
}
/* don't free these strings, we need them for the entire
* process lifetime */
2016-11-21 00:26:17 +01:00
nm_dhcp_helper_path = g_strdup_printf ( " %s/src/dhcp/nm-dhcp-helper " , path ) ;
2013-05-22 16:12:23 +02:00
g_free ( path ) ;
}
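    /* Set up logging from the command line first; logging settings from the
     * configuration file are applied further down, and only if nothing was
     * given on the command line. */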
    if (!nm_logging_setup (global_opt.opt_log_level,
                           global_opt.opt_log_domains,
                           &bad_domains,
                           &error)) {
        fprintf (stderr,
                 _("%s. Please use --help to see a list of valid options.\n"),
                 error->message);
        exit (1);
    }

    /* Read the config file and CLI overrides */
    config = nm_config_setup (config_cli, CONFIG_ATOMIC_SECTION_PREFIXES, &error);
    nm_config_cmd_line_options_free (config_cli);
    config_cli = NULL;
    if (config == NULL) {
        fprintf (stderr, _("Failed to read configuration: %s\n"),
                 error->message);
        exit (1);
    }

    _init_nm_debug (config);

    /* Initialize logging from config file *only* if not explicitly
     * specified by commandline.
     */
    if (global_opt.opt_log_level == NULL && global_opt.opt_log_domains == NULL) {
        if (!nm_logging_setup (nm_config_get_log_level (config),
                               nm_config_get_log_domains (config),
                               &bad_domains,
                               &error_invalid_logging_config)) {
            /* ignore error, and print the failure reason below.
             * Likewise, print about bad_domains below. */
        }
    }

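    /* Become a daemon unless --no-daemon was given or the configuration
     * requests debug mode; the PID file is only written in the daemon case. */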
    if (global_opt.become_daemon && !nm_config_get_is_debug (config)) {
        if (daemon (0, 0) < 0) {
            errsv = errno;
            fprintf (stderr, _("Could not daemonize: %s [error %u]\n"),
                     nm_strerror_native (errsv),
                     errsv);
            exit (1);
        }
        wrote_pidfile = nm_main_utils_write_pidfile (global_opt.pidfile);
    }

    /* Set up unix signal handling - before creating threads, but after daemonizing! */
    nm_main_utils_setup_signals (main_loop);

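    /* Initialize the logging backend from the logging backend key of the
     * configuration's [logging] section. */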
    {
        gs_free char *v = NULL;

        v = nm_config_data_get_value (NM_CONFIG_GET_DATA_ORIG,
                                      NM_CONFIG_KEYFILE_GROUP_LOGGING,
                                      NM_CONFIG_KEYFILE_KEY_LOGGING_BACKEND,
                                      NM_CONFIG_GET_VALUE_STRIP | NM_CONFIG_GET_VALUE_NO_EMPTY);
        nm_logging_init (v, nm_config_get_is_debug (config));
    }

    nm_log_info (LOGD_CORE, "NetworkManager (version " NM_DIST_VERSION ") is starting... (%s)",
                 nm_config_get_first_start (config) ? "for the first time" : "after a restart");

    nm_log_info (LOGD_CORE, "Read config: %s", nm_config_data_get_config_description (nm_config_get_data (config)));
    nm_config_data_log (nm_config_get_data (config), "CONFIG: ", "  ", NULL);

    if (error_invalid_logging_config) {
        nm_log_warn (LOGD_CORE, "config: invalid logging configuration: %s", error_invalid_logging_config->message);
        g_clear_error (&error_invalid_logging_config);
    }
    if (bad_domains) {
        nm_log_warn (LOGD_CORE, "config: invalid logging domains '%s' from %s",
                     bad_domains,
                     (global_opt.opt_log_level == NULL && global_opt.opt_log_domains == NULL)
                       ? "config file"
                       : "command line");
        nm_clear_g_free (&bad_domains);
    }

    warnings = nm_config_get_warnings (config);
    for (; warnings && *warnings; warnings++)
        nm_log_warn (LOGD_CORE, "config: %s", *warnings);
    nm_config_clear_warnings (config);

    /* the first access to State causes the file to be read (and possibly print a warning) */
    nm_config_state_get (config);

    nm_log_dbg (LOGD_CORE, "WEXT support is %s",
#if HAVE_WEXT
                "enabled"
#else
                "disabled"
#endif
                );

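    /* Acquire D-Bus (see _dbus_manager_init() above), then set up the platform
     * layer and the PolicyKit-based authorization manager before creating the
     * manager object. */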
    if (!_dbus_manager_init (config))
        goto done_no_manager;

    nm_linux_platform_setup ();

    NM_UTILS_KEEP_ALIVE (config, nm_netns_get (), "NMConfig-depends-on-NMNetns");

    nm_auth_manager_setup (nm_config_data_get_value_boolean (nm_config_get_data_orig (config),
                                                             NM_CONFIG_KEYFILE_GROUP_MAIN,
                                                             NM_CONFIG_KEYFILE_KEY_MAIN_AUTH_POLKIT,
                                                             NM_CONFIG_DEFAULT_MAIN_AUTH_POLKIT_BOOL));

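    /* Create the NMManager singleton. It is only started further down; see
     * nm_manager_start(). */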
    manager = nm_manager_setup ();

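    /* Hand the manager and its set-property handler over to NMDBusManager and
     * let it start serving requests; this happens only after the manager
     * object exists, so that early D-Bus clients do not see the service
     * without the objects they expect. */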
    nm_dbus_manager_start (nm_dbus_manager_get (),
                           nm_manager_dbus_set_property_handle,
                           manager);

    g_signal_connect (manager, NM_MANAGER_CONFIGURE_QUIT, G_CALLBACK (manager_configure_quit), config);

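    /* From this point on, failures jump to "done" so that the device state is
     * still written out and the manager is shut down cleanly. */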
    if (!nm_manager_start (manager, &error)) {
        nm_log_err (LOGD_CORE, "failed to initialize: %s", error->message);
        goto done;
    }

    nm_platform_process_events (NM_PLATFORM_GET);

    /* Make sure the loopback interface is up. If interface is down, we bring
     * it up and kernel will assign it link-local IPv4 and IPv6 addresses. If
     * it was already up, we assume is in clean state.
     *
     * TODO: it might be desirable to check the list of addresses and compare
     * it with a list of expected addresses (one of the protocol families
     * could be disabled). The 'lo' interface is sometimes used for assigning
     * global addresses so their availability doesn't depend on the state of
     * physical interfaces.
     */
    nm_log_dbg (LOGD_CORE, "setting up local loopback");
    nm_platform_link_set_up (NM_PLATFORM_GET, 1, NULL);

    success = TRUE;

    if (configure_and_quit == FALSE) {
        sd_id = nm_sd_event_attach_default ();

        g_main_loop_run (main_loop);
    }

done:
    /* write the device-state to file. Note that we only persist the
     * state here. We don't bother updating the state as devices
     * change during regular operation. If NM is killed with SIGKILL,
     * it misses to update the state. */
    nm_manager_write_device_state_all (manager);

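    /* Orderly shutdown: stop the manager, persist the NMConfig run state,
     * stop DNS management and write out the settings keyfile database. */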
    nm_manager_stop (manager);

    nm_config_state_set (config, TRUE, TRUE);

    nm_dns_manager_stop (nm_dns_manager_get ());

    nm_settings_kf_db_write (NM_SETTINGS_GET);

done_no_manager:
    if (global_opt.pidfile && wrote_pidfile)
        unlink (global_opt.pidfile);

    nm_log_info (LOGD_CORE, "exiting (%s)", success ? "success" : "error");

    nm_clear_g_source (&sd_id);

    exit (success ? 0 : 1);
}