/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
 * Copyright (C) 2011 Thomas Bechtold <thomasbechtold@jpberlin.de>
 * Copyright (C) 2011 Dan Williams <dcbw@redhat.com>
 * Copyright (C) 2016 - 2018 Red Hat, Inc.
 */

#include "src/core/nm-default-daemon.h"
#include "nm-connectivity.h"
#if WITH_CONCHECK
#include <curl/curl.h>
#endif
#include <linux/rtnetlink.h>
#include "c-list/src/c-list.h"
#include "libnm-platform/nmp-object.h"
#include "libnm-core-intern/nm-core-internal.h"
#include "nm-config.h"
#include "NetworkManagerUtils.h"
#include "nm-dbus-manager.h"
#include "dns/nm-dns-manager.h"
#define HEADER_STATUS_ONLINE "X-NetworkManager-Status: online\r\n"
/*****************************************************************************/
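
/* String representation of NMConnectivityState values, used for logging.
 * Besides the regular states, this covers the internal results FAKE
 * (connectivity checking is disabled via configuration, so no real probe was
 * made), CANCELLED (the request was cancelled by the caller) and DISPOSING
 * (the NMConnectivity instance is being disposed while requests are pending). */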
static NM_UTILS_LOOKUP_STR_DEFINE(_state_to_string,
                                  int /*NMConnectivityState*/,
                                  NM_UTILS_LOOKUP_DEFAULT_WARN("???"),
                                  NM_UTILS_LOOKUP_STR_ITEM(NM_CONNECTIVITY_UNKNOWN, "UNKNOWN"),
                                  NM_UTILS_LOOKUP_STR_ITEM(NM_CONNECTIVITY_NONE, "NONE"),
                                  NM_UTILS_LOOKUP_STR_ITEM(NM_CONNECTIVITY_LIMITED, "LIMITED"),
                                  NM_UTILS_LOOKUP_STR_ITEM(NM_CONNECTIVITY_PORTAL, "PORTAL"),
                                  NM_UTILS_LOOKUP_STR_ITEM(NM_CONNECTIVITY_FULL, "FULL"),
                                  NM_UTILS_LOOKUP_STR_ITEM(NM_CONNECTIVITY_ERROR, "ERROR"),
                                  NM_UTILS_LOOKUP_STR_ITEM(NM_CONNECTIVITY_FAKE, "FAKE"),
                                  NM_UTILS_LOOKUP_STR_ITEM(NM_CONNECTIVITY_CANCELLED, "CANCELLED"),
                                  NM_UTILS_LOOKUP_STR_ITEM(NM_CONNECTIVITY_DISPOSING, "DISPOSING"), );

const char *
nm_connectivity_state_to_string(NMConnectivityState state)
{
    return _state_to_string(state);
}

/*****************************************************************************/
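
/* Ref-counted snapshot of the connectivity-check settings (URI, host, port and
 * expected response). The configuration may be reloaded at any time; pending
 * requests are not interrupted but keep a reference to the ConConfig they were
 * started with and complete with those original settings. */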
typedef struct {
    guint ref_count;
    char *uri;
    char *host;
    char *port;
    char *response;
} ConConfig;
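
/* Handle for one asynchronous connectivity check. The callback is invoked
 * exactly once and never synchronously from within the start call; cancelling
 * with nm_connectivity_check_cancel() invokes it synchronously with state
 * NM_CONNECTIVITY_CANCELLED. A request may be cancelled at most once and never
 * after its callback was already invoked. The handle does not keep the
 * NMConnectivity instance alive; if the instance gets disposed, pending
 * callbacks are invoked from dispose(). */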
struct _NMConnectivityCheckHandle {
    CList handles_lst;

    NMConnectivity *self;

    NMConnectivityCheckCallback callback;
    gpointer user_data;
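
    /* Interface specifier; when set, the HTTP request for this check is bound
     * to that interface so the probe runs on a specific device. */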
    char *ifspec;

    const char *completed_log_message;
    char *completed_log_message_free;

#if WITH_CONCHECK
    struct {
        ConConfig *con_config;

        GCancellable *resolve_cancellable;
        CURLM *curl_mhandle;
        CURL *curl_ehandle;
        struct curl_slist *request_headers;
        struct curl_slist *hosts;

        gsize response_good_cnt;

        guint curl_timer;
        int ch_ifindex;
    } concheck;
#endif

    guint64 request_counter;

    int addr_family;

    GSource *timeout_source;
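
    /* Result of a finished check. libcurl does not allow removing easy-handles
     * from within a libcurl callback, so a handle that completes inside such a
     * callback only records its result here and is queued; its user callback is
     * invoked later, outside of libcurl. */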
    NMConnectivityState completed_state;
    const char *completed_reason;
};
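
/* CONFIG_CHANGED is emitted when the connectivity-check configuration (such as
 * the check URI, interval or expected response) is reloaded, so that users of
 * NMConnectivity can reschedule their per-device check timers. */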
enum {
    CONFIG_CHANGED,

    LAST_SIGNAL
};
static guint signals[LAST_SIGNAL] = {0};
typedef struct {
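    /* All pending check handles, linked via NMConnectivityCheckHandle.handles_lst. */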
|
connectivity: rework async connectivity check requests
An asynchronous request should either be cancellable or not keep
the target object alive. Preferably both.
Otherwise, it is impossible to do a controlled shutdown when terminating
NetworkManager. Currently, when NetworkManager is about to terminate,
it just quits the mainloop and essentially leaks everything. That is a
bug. If we ever want to fix that, every asynchronous request must be
cancellable in a controlled way (or it must not prevent objects from
getting disposed, where disposing the object automatically cancels the
callback).
Rework the asynchronous request for connectivity check to
- return a handle that can be used to cancel the operation.
Cancelling is optional. The caller may choose to ignore the handle
because the asynchronous operation does not keep the target object
alive. That means, it is still possible to shutdown, by everybody
giving up their reference to the target object. In which case the
callback will be invoked during dispose() of the target object.
- also, the callback will always be invoked exactly once, and never
synchronously from within the asynchronous start call. But during
cancel(), the callback is invoked synchronously from within cancel().
Note that it's only allowed to cancel an action at most once, and
never after the callback is invoked (also not from within the callback
itself).
- also, NMConnectivity already supports a fake handler, in case
connectivity check is disabled via configuration. Hence, reuse
the same code paths also when compiling without --enable-concheck.
That means, instead of having #if WITH_CONCHECK at various callers,
move them into NMConnectivity. The downside is, that if you build
without concheck, there is a small overhead compared to before. The
upside is, we reuse the same code paths when compiling with or without
concheck.
- also, the patch synchronizes the connecitivty states. For example,
previously `nmcli networking connectivity check` would schedule
requests in parallel, and return the accumulated result of the individual
requests.
However, the global connectivity state of the manager might have have
been the same as the answer to the explicit connecitivity check,
because while the answer for the manual check is waiting for all
pending checks to complete, the global connectivity state could
already change. That is just wrong. There are not multiple global
connectivity states at the same time, there is just one. A manual
connectivity check should have the meaning of ensuring that the global
state is up to date, but it still should return the global
connectivity state -- not the answers for several connectivity checks
issued in parallel.
This is related to commit b799de281bc01073c31dd2c86171b29c8132441c
(libnm: update property in the manager after connectivity check),
which tries to address a similar problem client side.
Similarly, each device has a connectivity state. While there might
be several connectivity checks per device pending, whenever a check
completes, it can update the per-device state (and return that device
state as result), but the immediate answer of the individual check
might not matter. This is especially the case, when a later request
returns earlier and obsoletes all earlier requests. In that case,
earlier requests return with the result of the current device's
connectivity state.
This patch cleans up the internal API and gives a better defined behavior
to the user (thus, a simple API which simplifies implementation for the
caller). However, the implementation of getting this API right and properly
handling cancel and destruction of the target object is more complicated and
complex. But this is not just for the sake of a nicer API. This fixes
actual issues explained above.
Also, get rid of GAsyncResult to track information about the pending request.
Instead, allocate our own handle structure, which ends up to be nicer
because it's strongly typed and has exactly the properties that are
useful to track the request. Also, it gets rid of the awkward
_finish() API by passing the relevant arguments to the callback
directly.
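A rough caller-side sketch of the resulting contract. The handle and
callback types are the real ones, but the parameter lists here are
abbreviated and approximate rather than the exact signatures, and MyDevice
is a hypothetical caller:

typedef struct {
    NMConnectivityCheckHandle *concheck_handle;
} MyDevice;

static void
concheck_done(NMConnectivity            *self,
              NMConnectivityCheckHandle *handle,
              NMConnectivityState        state,
              gpointer                   user_data)
{
    MyDevice *device = user_data;

    /* Invoked exactly once: when the probe completes, when the
     * NMConnectivity object gets disposed, or synchronously from within
     * nm_connectivity_check_cancel(). Never synchronously from start(). */
    device->concheck_handle = NULL;
    /* ... update the per-device connectivity state from @state ... */
}

static void
my_device_start_concheck(MyDevice *device, NMConnectivity *connectivity)
{
    /* Keeping the handle is optional; it is only needed for cancelling. */
    device->concheck_handle = nm_connectivity_check_start(connectivity,
                                                          /* ..., */
                                                          concheck_done,
                                                          device);
}

static void
my_device_cleanup(MyDevice *device)
{
    /* Cancel at most once, and never after the callback already ran. */
    if (device->concheck_handle)
        nm_connectivity_check_cancel(g_steal_pointer(&device->concheck_handle));
}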
2018-01-05 17:46:49 +01:00
|
|
|
CList handles_lst_head;
|
connectivity: fix crash when removing easy-handle from curl callback
libcurl does not allow removing easy-handles from within a curl
callback.
That was already partly avoided for one handle alone. That is, when
a handle completed inside a libcurl callback, it would only invoke the
callback, but not yet delete it. However, that is not enough, because
from within a callback another handle can be cancelled, leading to
the removal of (the other) handle and a crash:
==24572== at 0x40319AB: free (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==24572== by 0x52DDAE5: Curl_close (url.c:392)
==24572== by 0x52EC02C: curl_easy_cleanup (easy.c:825)
==24572== by 0x5FDCD2: cb_data_free (nm-connectivity.c:215)
==24572== by 0x5FF6DE: nm_connectivity_check_cancel (nm-connectivity.c:585)
==24572== by 0x55F7F9: concheck_handle_complete (nm-device.c:2601)
==24572== by 0x574C12: concheck_cb (nm-device.c:2725)
==24572== by 0x5FD887: cb_data_invoke_callback (nm-connectivity.c:167)
==24572== by 0x5FD959: easy_header_cb (nm-connectivity.c:435)
==24572== by 0x52D73CB: chop_write (sendf.c:612)
==24572== by 0x52D73CB: Curl_client_write (sendf.c:668)
==24572== by 0x52D54ED: Curl_http_readwrite_headers (http.c:3904)
==24572== by 0x52E9EA7: readwrite_data (transfer.c:548)
==24572== by 0x52E9EA7: Curl_readwrite (transfer.c:1161)
==24572== by 0x52F4193: multi_runsingle (multi.c:1915)
==24572== by 0x52F5531: multi_socket (multi.c:2607)
==24572== by 0x52F5804: curl_multi_socket_action (multi.c:2771)
Fix that by never invoking any callbacks while we are inside a libcurl
callback. Instead, the handle is marked for completion and queued. Later,
we complete all queued handles separately.
While at it, drop the @error argument from NMConnectivityCheckCallback.
It was only used to signal cancellation. Let's instead signal that via
status NM_CONNECTIVITY_CANCELLED.
https://bugzilla.gnome.org/show_bug.cgi?id=797136
https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/1792745
https://bugzilla.opensuse.org/show_bug.cgi?id=1107197
https://github.com/NetworkManager/NetworkManager/pull/207
Fixes: d8a31794c8b9db243076ba0c24dfe6e496b78697
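As a sketch, the resulting pattern looks roughly like this (the field
completed_state and the function names cb_data_queue_completed() and
_complete_queued() are illustrative, not the exact names used in
nm-connectivity.c):

/* Called from within a libcurl callback: do not touch the easy handle here;
 * only remember the result and move the handle to the completed queue. */
static void
cb_data_queue_completed(NMConnectivityCheckHandle *cb_data, NMConnectivityState state)
{
    NMConnectivityPrivate *priv = NM_CONNECTIVITY_GET_PRIVATE(cb_data->self);

    cb_data->completed_state = state;
    c_list_unlink(&cb_data->handles_lst);
    c_list_link_tail(&priv->completed_handles_lst_head, &cb_data->handles_lst);
}

/* Called outside of any libcurl callback: now it is safe to remove and
 * clean up the easy handles, so complete everything that was queued. */
static void
_complete_queued(NMConnectivity *self)
{
    NMConnectivityPrivate     *priv = NM_CONNECTIVITY_GET_PRIVATE(self);
    NMConnectivityCheckHandle *cb_data;

    while ((cb_data = c_list_first_entry(&priv->completed_handles_lst_head,
                                         NMConnectivityCheckHandle,
                                         handles_lst)))
        cb_data_complete(cb_data, cb_data->completed_state, "completed");
}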
2018-09-17 11:08:15 +02:00
|
|
|
CList completed_handles_lst_head;
|
2021-11-09 13:28:54 +01:00
|
|
|
NMConfig *config;
|
connectivity: ensure uri and response stay alive during connectivity check
The settings of NMConnectivity can change any time, by reloading the
configuration.
When reloading the configuration, we don't want to interrupt or cancel
the pending requests; they should just complete with the old settings with
which they started. Note that NMDevice is smart enough that, when a
newer request completes earlier, it invalidates all older, still pending
requests.
Anyway, that means we cannot rely on the values to stay alive. Fix that
by adding a new ref-counted struct for these parameters.
Fixes: 2cec94bacce4a09c0e5ffa241b8d50fd4702dddc
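From the unref helper further below one can infer the shape of that struct;
the following is an illustrative reconstruction (exact field types and order
may differ from the real definition):

typedef struct {
    int   ref_count;
    char *uri;      /* the configured connectivity URI */
    char *host;     /* host part parsed out of the URI */
    char *port;     /* port part parsed out of the URI, if any */
    char *response; /* expected response; NULL means the built-in default */
} ConConfig;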
2018-12-01 18:30:41 +01:00
|
|
|
ConConfig *con_config;
|
2018-12-01 15:48:19 +01:00
|
|
|
guint interval;
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2018-12-01 15:48:19 +01:00
|
|
|
bool enabled : 1;
|
connectivity: improve warning about URI config change
During a config change notification, we determine a "changed" value,
to know whether things significantly changed.
Also, we want to log a warning about invalid configuration
only when the config actually changed. Previously, when the URI was
invalid, on every reload (SIGHUP) we would log an error message,
even if the configuration did not change. There is no need to
warn multiple times about the same thing.
Keep track of the original URI in priv->uri. Whenever that changes,
we know the user reconfigured something. But also, the URI might now
be set to an invalid value. That means we need to remember whether
the URI is valid.
Also, log a warning if we fail to parse the host and port. Already
before, such a URI was considered invalid and we would effectively
not do connectivity checking.
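A rough sketch of that reload handling (parse_host_and_port() is a
hypothetical helper and the update logic is simplified compared to the real
config-changed handler):

static void
update_config_sketch(NMConnectivity *self, const char *new_uri)
{
    NMConnectivityPrivate *priv = NM_CONNECTIVITY_GET_PRIVATE(self);
    gboolean               changed;
    gs_free char          *host = NULL;
    gs_free char          *port = NULL;

    /* Only treat the URI as changed when the string actually differs;
     * a plain reload (SIGHUP) with the same value must not warn again. */
    changed = !nm_streq0(priv->con_config ? priv->con_config->uri : NULL, new_uri);

    priv->uri_valid = !!parse_host_and_port(new_uri, &host, &port);

    if (changed && !priv->uri_valid)
        _LOGW("invalid connectivity URI '%s'", new_uri ?: "");

    /* ... rebuild priv->con_config with the new uri/host/port/response ... */
}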
2018-12-01 18:01:20 +01:00
|
|
|
bool uri_valid : 1;
|
2011-10-21 21:21:30 +02:00
|
|
|
} NMConnectivityPrivate;
|
|
|
|
|
|
2016-09-29 13:49:01 +02:00
|
|
|
struct _NMConnectivity {
|
|
|
|
|
GObject parent;
|
|
|
|
|
NMConnectivityPrivate _priv;
|
2011-10-21 21:21:30 +02:00
|
|
|
};
|
|
|
|
|
|
2016-09-29 13:49:01 +02:00
|
|
|
struct _NMConnectivityClass {
|
|
|
|
|
GObjectClass parent;
|
|
|
|
|
};
|
|
|
|
|
|
|
|
|
|
G_DEFINE_TYPE(NMConnectivity, nm_connectivity, G_TYPE_OBJECT)
|
|
|
|
|
|
|
|
|
|
#define NM_CONNECTIVITY_GET_PRIVATE(self) _NM_GET_PRIVATE(self, NMConnectivity, NM_IS_CONNECTIVITY)
|
|
|
|
|
|
2017-03-20 13:36:00 +00:00
|
|
|
NM_DEFINE_SINGLETON_GETTER(NMConnectivity, nm_connectivity_get, NM_TYPE_CONNECTIVITY);
|
|
|
|
|
|
2016-09-29 13:49:01 +02:00
|
|
|
/*****************************************************************************/
|
|
|
|
|
|
2016-10-14 15:32:56 +02:00
|
|
|
#define _NMLOG_DOMAIN LOGD_CONCHECK
|
|
|
|
|
#define _NMLOG(level, ...) __NMLOG_DEFAULT(level, _NMLOG_DOMAIN, "connectivity", __VA_ARGS__)
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2017-03-20 13:36:00 +00:00
|
|
|
#define _NMLOG2_DOMAIN LOGD_CONCHECK
|
|
|
|
|
#define _NMLOG2(level, ...) \
|
|
|
|
|
G_STMT_START \
|
|
|
|
|
{ \
|
|
|
|
|
const NMLogLevel __level = (level); \
|
|
|
|
|
\
|
|
|
|
|
if (nm_logging_enabled(__level, _NMLOG2_DOMAIN)) { \
|
|
|
|
|
_nm_log(__level, \
|
|
|
|
|
_NMLOG2_DOMAIN, \
|
|
|
|
|
0, \
|
2018-01-05 17:46:49 +01:00
|
|
|
(cb_data->ifspec ? &cb_data->ifspec[3] : NULL), \
|
|
|
|
|
NULL, \
|
2018-12-01 15:01:17 +01:00
|
|
|
"connectivity: (%s,IPv%c,%" G_GUINT64_FORMAT \
|
2018-01-05 17:46:49 +01:00
|
|
|
") " _NM_UTILS_MACRO_FIRST(__VA_ARGS__), \
|
2018-07-03 19:20:45 +02:00
|
|
|
(cb_data->ifspec ? &cb_data->ifspec[3] : ""), \
|
2018-12-01 15:01:17 +01:00
|
|
|
nm_utils_addr_family_to_char(cb_data->addr_family), \
|
2018-01-05 17:46:49 +01:00
|
|
|
cb_data->request_counter _NM_UTILS_MACRO_REST(__VA_ARGS__)); \
|
2020-09-28 16:03:33 +02:00
|
|
|
} \
|
2017-03-20 13:36:00 +00:00
|
|
|
} \
|
|
|
|
|
G_STMT_END
|
2011-10-21 21:21:30 +02:00
|
|
|
|
2017-03-20 13:36:00 +00:00
|
|
|
/*****************************************************************************/
|
2011-10-21 21:21:30 +02:00
|
|
|
|
2019-04-10 21:22:22 +02:00
|
|
|
#if WITH_CONCHECK
|
2018-12-01 18:30:41 +01:00
|
|
|
static ConConfig *
|
|
|
|
|
_con_config_ref(ConConfig *con_config)
|
|
|
|
|
{
|
|
|
|
|
if (con_config) {
|
|
|
|
|
nm_assert(con_config->ref_count > 0);
|
|
|
|
|
++con_config->ref_count;
|
|
|
|
|
}
|
|
|
|
|
return con_config;
|
|
|
|
|
}
|
2019-04-10 21:22:22 +02:00
|
|
|
#endif
|
2018-12-01 18:30:41 +01:00
|
|
|
|
|
|
|
|
static void
|
|
|
|
|
_con_config_unref(ConConfig *con_config)
|
|
|
|
|
{
|
|
|
|
|
if (!con_config)
|
|
|
|
|
return;
|
|
|
|
|
|
|
|
|
|
nm_assert(con_config->ref_count > 0);
|
|
|
|
|
|
|
|
|
|
if (--con_config->ref_count != 0)
|
|
|
|
|
return;
|
|
|
|
|
|
|
|
|
|
g_free(con_config->uri);
|
|
|
|
|
g_free(con_config->host);
|
|
|
|
|
g_free(con_config->port);
|
|
|
|
|
g_free(con_config->response);
|
|
|
|
|
g_slice_free(ConConfig, con_config);
|
|
|
|
|
}
|
|
|
|
|
|
2019-04-10 21:22:22 +02:00
|
|
|
#if WITH_CONCHECK
|
2018-12-01 18:30:41 +01:00
|
|
|
static const char *
|
|
|
|
|
_con_config_get_response(const ConConfig *con_config)
|
|
|
|
|
{
|
|
|
|
|
return con_config->response ?: NM_CONFIG_DEFAULT_CONNECTIVITY_RESPONSE;
|
|
|
|
|
}
|
2019-04-10 21:22:22 +02:00
|
|
|
#endif
|
2018-12-01 18:30:41 +01:00
|
|
|
|
|
|
|
|
/*****************************************************************************/
|
|
|
|
|
|
2017-03-22 15:18:37 +01:00
|
|
|
static void
|
2018-09-17 11:08:15 +02:00
|
|
|
cb_data_complete(NMConnectivityCheckHandle *cb_data,
|
|
|
|
|
NMConnectivityState state,
|
2021-11-09 13:28:54 +01:00
|
|
|
const char *log_message)
|
2017-03-22 15:18:37 +01:00
|
|
|
{
|
2018-09-17 11:08:15 +02:00
|
|
|
NMConnectivity *self;
|
2018-01-05 17:46:49 +01:00
|
|
|
|
|
|
|
|
nm_assert(cb_data);
|
|
|
|
|
nm_assert(NM_IS_CONNECTIVITY(cb_data->self));
|
2018-09-17 11:08:15 +02:00
|
|
|
nm_assert(cb_data->callback);
|
|
|
|
|
nm_assert(state != NM_CONNECTIVITY_UNKNOWN);
|
2018-01-05 17:46:49 +01:00
|
|
|
nm_assert(log_message);
|
|
|
|
|
|
|
|
|
|
self = cb_data->self;
|
|
|
|
|
|
2018-09-17 11:08:15 +02:00
|
|
|
/* mark the handle as completing. After this point, nm_connectivity_check_cancel()
|
|
|
|
|
* is no longer possible. */
|
|
|
|
|
cb_data->self = NULL;
|
2018-01-05 17:46:49 +01:00
|
|
|
|
2018-09-17 11:08:15 +02:00
|
|
|
c_list_unlink_stale(&cb_data->handles_lst);
|
2018-01-05 17:46:49 +01:00
|
|
|
|
|
|
|
|
#if WITH_CONCHECK
|
|
|
|
|
if (cb_data->concheck.curl_ehandle) {
|
|
|
|
|
/* Contrary to what the cURL manual claims, it is *not* safe to remove
|
2018-09-17 11:08:15 +02:00
|
|
|
* the easy handle "at any moment"; specifically it's not safe to
|
|
|
|
|
* remove *any* handle from within a libcurl callback. That is
|
|
|
|
|
* why we queue completed handles in this case.
|
|
|
|
|
*
|
|
|
|
|
* cb_data_complete(), however, is never called from within a
|
|
|
|
|
* libcurl callback. So, this is fine. */
|
connectivity: rework async connectivity check requests
An asynchronous request should either be cancellable or not keep
the target object alive. Preferably both.
Otherwise, it is impossible to do a controlled shutdown when terminating
NetworkManager. Currently, when NetworkManager is about to terminate,
it just quits the mainloop and essentially leaks everything. That is a
bug. If we ever want to fix that, every asynchronous request must be
cancellable in a controlled way (or it must not prevent objects from
getting disposed, where disposing the object automatically cancels the
callback).
Rework the asynchronous request for connectivity check to
- return a handle that can be used to cancel the operation.
Cancelling is optional. The caller may choose to ignore the handle
because the asynchronous operation does not keep the target object
alive. That means, it is still possible to shutdown, by everybody
giving up their reference to the target object. In which case the
callback will be invoked during dispose() of the target object.
- also, the callback will always be invoked exactly once, and never
synchronously from within the asynchronous start call. But during
cancel(), the callback is invoked synchronously from within cancel().
Note that it's only allowed to cancel an action at most once, and
never after the callback is invoked (also not from within the callback
itself).
- also, NMConnectivity already supports a fake handler, in case
connectivity check is disabled via configuration. Hence, reuse
the same code paths also when compiling without --enable-concheck.
That means, instead of having #if WITH_CONCHECK at various callers,
move them into NMConnectivity. The downside is, that if you build
without concheck, there is a small overhead compared to before. The
upside is, we reuse the same code paths when compiling with or without
concheck.
- also, the patch synchronizes the connectivity states. For example,
previously `nmcli networking connectivity check` would schedule
requests in parallel, and return the accumulated result of the individual
requests.
However, the global connectivity state of the manager might not have
been the same as the answer to the explicit connectivity check,
because while the answer for the manual check is waiting for all
pending checks to complete, the global connectivity state could
already change. That is just wrong. There are not multiple global
connectivity states at the same time, there is just one. A manual
connectivity check should have the meaning of ensuring that the global
state is up to date, but it still should return the global
connectivity state -- not the answers for several connectivity checks
issued in parallel.
This is related to commit b799de281bc01073c31dd2c86171b29c8132441c
(libnm: update property in the manager after connectivity check),
which tries to address a similar problem client side.
Similarly, each device has a connectivity state. While there might
be several connectivity checks per device pending, whenever a check
completes, it can update the per-device state (and return that device
state as result), but the immediate answer of the individual check
might not matter. This is especially the case, when a later request
returns earlier and obsoletes all earlier requests. In that case,
earlier requests return with the result of the current device's
connectivity state.
This patch cleans up the internal API and gives a better-defined behavior
to the user (thus, a simple API which simplifies the implementation for the
caller). However, getting this API right and properly handling cancel and
destruction of the target object makes the implementation more complicated
and complex. But this is not just for the sake of a nicer API; it fixes
actual issues explained above.
Also, get rid of GAsyncResult to track information about the pending request.
Instead, allocate our own handle structure, which ends up being nicer
because it's strongly typed and has exactly the properties that are
useful to track the request. Also, it gets rid of the awkward
_finish() API by passing the relevant arguments to the callback
directly.
2018-01-05 17:46:49 +01:00
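The calling convention that this rework establishes can be illustrated with a small
caller-side sketch. The callback signature and nm_connectivity_check_cancel() match
the code in this file; the Caller struct, the function names and the fact that the
handle comes from nm_connectivity_check_start() are assumptions made for
illustration only:

/* Hypothetical caller-side bookkeeping for one pending check. */
typedef struct {
    NMConnectivityCheckHandle *check_handle; /* as returned by nm_connectivity_check_start() */
} Caller;

static void
check_done_cb(NMConnectivity            *self,
              NMConnectivityCheckHandle *handle,
              NMConnectivityState        state,
              gpointer                   user_data)
{
    Caller *caller = user_data;

    /* Invoked exactly once, also on cancel (then with state
     * NM_CONNECTIVITY_CANCELLED). The handle is invalid afterwards. */
    nm_assert(caller->check_handle == handle);
    caller->check_handle = NULL;
}

static void
caller_dispose(Caller *caller)
{
    if (caller->check_handle) {
        /* Cancelling is allowed at most once, only while the request is still
         * pending, and invokes check_done_cb() synchronously. */
        nm_connectivity_check_cancel(caller->check_handle);
        nm_assert(!caller->check_handle);
    }
}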
curl_easy_setopt(cb_data->concheck.curl_ehandle, CURLOPT_WRITEFUNCTION, NULL);
curl_easy_setopt(cb_data->concheck.curl_ehandle, CURLOPT_WRITEDATA, NULL);
curl_easy_setopt(cb_data->concheck.curl_ehandle, CURLOPT_HEADERFUNCTION, NULL);
curl_easy_setopt(cb_data->concheck.curl_ehandle, CURLOPT_HEADERDATA, NULL);
curl_easy_setopt(cb_data->concheck.curl_ehandle, CURLOPT_PRIVATE, NULL);
curl_easy_setopt(cb_data->concheck.curl_ehandle, CURLOPT_HTTPHEADER, NULL);
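/* Tear-down order matters to libcurl: detach the easy handle from its multi
 * handle before curl_easy_cleanup(), and call curl_multi_cleanup() last. */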
curl_multi_remove_handle(cb_data->concheck.curl_mhandle, cb_data->concheck.curl_ehandle);
curl_easy_cleanup(cb_data->concheck.curl_ehandle);
curl_multi_cleanup(cb_data->concheck.curl_mhandle);
curl_slist_free_all(cb_data->concheck.request_headers);
curl_slist_free_all(cb_data->concheck.hosts);
}
nm_clear_g_source(&cb_data->concheck.curl_timer);
nm_clear_g_cancellable(&cb_data->concheck.resolve_cancellable);
#endif
nm_clear_g_source_inst(&cb_data->timeout_source);
_LOG2D("check completed: %s; %s", nm_connectivity_state_to_string(state), log_message);
cb_data->callback(self, cb_data, state, cb_data->user_data);
/* Note: self might be a dangling pointer at this point. It must not be used
 * after this point, and all callers must either take a reference first, or
 * not use the self pointer either. */
#if WITH_CONCHECK
connectivity: ensure uri and response stay alive during connectivity check
The settings of NMConnectivity can change any time, by reloading the
configuration.
When reloading the configuration, we don't want to interrupt or cancel
the pending requests; they should just complete with the old settings with
which they started. Note that NMDevice is smart enough that, when a
newer request completes earlier, it invalidates all older, still pending
requests.
Anyway, that means we cannot rely on the values to stay alive. Fix that
by adding a new ref-counted struct for these parameters.
Fixes: 2cec94bacce4a09c0e5ffa241b8d50fd4702dddc
2018-12-01 18:30:41 +01:00
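The shape of such a ref-counted parameter object can be sketched as follows. Apart
from _con_config_unref(), which is called right below, the struct layout, the field
names and _con_config_ref() are illustrative assumptions, not a quote of the actual
implementation:

typedef struct {
    int   ref_count;
    char *uri;
    char *response;
} ConConfig;

static ConConfig *
_con_config_ref(ConConfig *con_config)
{
    if (con_config)
        con_config->ref_count++;
    return con_config;
}

static void
_con_config_unref(ConConfig *con_config)
{
    if (!con_config || --con_config->ref_count > 0)
        return;
    g_free(con_config->uri);
    g_free(con_config->response);
    g_slice_free(ConConfig, con_config);
}

Each pending check presumably takes a reference when it starts and drops it here in
the tear-down path, so a configuration reload can publish a new config object
without invalidating the one that in-flight requests still hold.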
_con_config_unref(cb_data->concheck.con_config);
#endif
g_free(cb_data->ifspec);
if (cb_data->completed_log_message_free)
g_free(cb_data->completed_log_message_free);
g_slice_free(NMConnectivityCheckHandle, cb_data);
}
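The dangling-pointer note above means that a caller which still needs the
NMConnectivity object after completing a handle has to pin it explicitly;
_complete_queued() below does exactly that for the queued case. As an isolated
sketch (the function name is invented, the refcounting is plain GObject):

static void
complete_one_pinned(NMConnectivityCheckHandle *cb_data)
{
    NMConnectivity *self_keep_alive = g_object_ref(cb_data->self);

    cb_data_complete(cb_data, cb_data->completed_state, cb_data->completed_log_message);
    /* Without the extra reference, the NMConnectivity instance could already
     * be destroyed at this point. */
    g_object_unref(self_keep_alive);
}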
/*****************************************************************************/
#if WITH_CONCHECK
static void
cb_data_queue_completed(NMConnectivityCheckHandle *cb_data,
NMConnectivityState state,
const char *log_message_static,
char *log_message_take /* take */)
{
nm_assert(cb_data);
nm_assert(NM_IS_CONNECTIVITY(cb_data->self));
nm_assert(state != NM_CONNECTIVITY_UNKNOWN);
nm_assert(log_message_static || log_message_take);
nm_assert(cb_data->completed_state == NM_CONNECTIVITY_UNKNOWN);
nm_assert(!cb_data->completed_log_message);
nm_assert(c_list_contains(&NM_CONNECTIVITY_GET_PRIVATE(cb_data->self)->handles_lst_head,
&cb_data->handles_lst));
cb_data->completed_state = state;
cb_data->completed_log_message = log_message_static ?: log_message_take;
cb_data->completed_log_message_free = log_message_take;
c_list_unlink_stale(&cb_data->handles_lst);
c_list_link_tail(&NM_CONNECTIVITY_GET_PRIVATE(cb_data->self)->completed_handles_lst_head,
&cb_data->handles_lst);
}
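The two log-message parameters encode ownership: log_message_static is kept
as-is (typically a string literal that never needs freeing), while
log_message_take hands over a heap string that is later released via
completed_log_message_free. Hypothetical call sites, with states and messages
invented purely for illustration:

/* static message: nothing to free later */
cb_data_queue_completed(cb_data, NM_CONNECTIVITY_LIMITED, "timeout", NULL);

/* heap-allocated message: ownership transferred together with the handle */
cb_data_queue_completed(cb_data,
                        NM_CONNECTIVITY_PORTAL,
                        NULL,
                        g_strdup_printf("unexpected response (%ld)", response_code));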
static void
_complete_queued(NMConnectivity *self)
{
NMConnectivity *self_keep_alive = NULL;
NMConnectivityPrivate *priv = NM_CONNECTIVITY_GET_PRIVATE(self);
NMConnectivityCheckHandle *cb_data;
while ((cb_data = c_list_first_entry(&priv->completed_handles_lst_head,
|
|
|
|
|
NMConnectivityCheckHandle,
|
|
|
|
|
handles_lst))) {
|
|
|
|
|
if (!self_keep_alive)
|
|
|
|
|
self_keep_alive = g_object_ref(self);
|
|
|
|
|
cb_data_complete(cb_data, cb_data->completed_state, cb_data->completed_log_message);
|
|
|
|
|
}
|
|
|
|
|
nm_g_object_unref(self_keep_alive);
|
|
|
|
|
}
|
|
|
|
|
|
2018-07-24 15:24:50 +02:00
|
|
|
static gboolean
|
2018-07-24 15:02:33 +02:00
|
|
|
_con_curl_check_connectivity(CURLM *mhandle, int sockfd, int ev_bitmask)
|
2016-04-04 18:23:13 +02:00
|
|
|
{
|
2018-01-05 16:02:05 +01:00
|
|
|
NMConnectivityCheckHandle *cb_data;
|
2021-11-09 13:28:54 +01:00
|
|
|
CURLMsg *msg;
|
all: don't use gchar/gshort/gint/glong but C types
We commonly don't use the glib typedefs for char/short/int/long,
but their C types directly.
$ git grep '\<g\(char\|short\|int\|long\|float\|double\)\>' | wc -l
587
$ git grep '\<\(char\|short\|int\|long\|float\|double\)\>' | wc -l
21114
One could argue that using the glib typedefs is preferable in
public API (of our glib based libnm library) or where it clearly
is related to glib, like during
g_object_set (obj, PROPERTY, (gint) value, NULL);
However, that argument does not seem strong, because in practice we don't
follow that argument today, and seldom use the glib typedefs.
Also, the style guide for this would be hard to formalize, because
"using them where clearly related to glib" is a very loose suggestion.
Also note that glib typedefs will always just be typedefs of the
underlying C types. There is no danger of glib changing the meaning
of these typedefs (because that would be a major API break of glib).
A simple style guide is instead: don't use these typedefs.
No manual actions, I only ran the bash script:
FILES=($(git ls-files '*.[hc]'))
sed -i \
-e 's/\<g\(char\|short\|int\|long\|float\|double\)\>\( [^ ]\)/\1\2/g' \
-e 's/\<g\(char\|short\|int\|long\|float\|double\)\> /\1 /g' \
-e 's/\<g\(char\|short\|int\|long\|float\|double\)\>/\1/g' \
"${FILES[@]}"
2018-07-11 07:40:19 +02:00
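As a concrete illustration of what the substitution does (a hypothetical line, not taken from the actual diff): it rewrites

    gchar *path  = g_strdup(name);
    gint   errsv = errno;

into

    char *path  = g_strdup(name);
    int   errsv = errno;

while leaving genuine glib typedefs such as gboolean, gpointer, guint32 or gssize (and prefixed identifiers like g_strdup) untouched, because the word-boundary anchors only match the bare g-prefixed C type names.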
|
|
|
int m_left;
|
2018-02-08 21:28:02 +01:00
|
|
|
long response_code;
|
connectivity: rework async connectivity check requests
An asynchronous request should either be cancellable or not keep
the target object alive. Preferably both.
Otherwise, it is impossible to do a controlled shutdown when terminating
NetworkManager. Currently, when NetworkManager is about to terminate,
it just quits the mainloop and essentially leaks everything. That is a
bug. If we ever want to fix that, every asynchronous request must be
cancellable in a controlled way (or it must not prevent objects from
getting disposed, where disposing the object automatically cancels the
callback).
Rework the asynchronous request for connectivity check to
- return a handle that can be used to cancel the operation.
Cancelling is optional. The caller may choose to ignore the handle
because the asynchronous operation does not keep the target object
alive. That means, it is still possible to shut down, by everybody
giving up their reference to the target object. In which case the
callback will be invoked during dispose() of the target object.
- also, the callback will always be invoked exactly once, and never
synchronously from within the asynchronous start call. But during
cancel(), the callback is invoked synchronously from within cancel().
Note that it's only allowed to cancel an action at most once, and
never after the callback is invoked (also not from within the callback
itself).
- also, NMConnectivity already supports a fake handler, in case
connectivity check is disabled via configuration. Hence, reuse
the same code paths also when compiling without --enable-concheck.
That means, instead of having #if WITH_CONCHECK at various callers,
move them into NMConnectivity. The downside is, that if you build
without concheck, there is a small overhead compared to before. The
upside is, we reuse the same code paths when compiling with or without
concheck.
- also, the patch synchronizes the connectivity states. For example,
previously `nmcli networking connectivity check` would schedule
requests in parallel, and return the accumulated result of the individual
requests.
However, the global connectivity state of the manager might have
been the same as the answer to the explicit connectivity check,
because while the answer for the manual check is waiting for all
pending checks to complete, the global connectivity state could
already change. That is just wrong. There are not multiple global
connectivity states at the same time, there is just one. A manual
connectivity check should have the meaning of ensuring that the global
state is up to date, but it still should return the global
connectivity state -- not the answers for several connectivity checks
issued in parallel.
This is related to commit b799de281bc01073c31dd2c86171b29c8132441c
(libnm: update property in the manager after connectivity check),
which tries to address a similar problem client side.
Similarly, each device has a connectivity state. While there might
be several connectivity checks per device pending, whenever a check
completes, it can update the per-device state (and return that device
state as result), but the immediate answer of the individual check
might not matter. This is especially the case, when a later request
returns earlier and obsoletes all earlier requests. In that case,
earlier requests return with the result of the current device's
connectivity state.
This patch cleans up the internal API and gives a better-defined behavior
to the user (that is, a simple API which simplifies the implementation for the
caller). However, the implementation of getting this API right and properly
handle cancel and destruction of the target object is more complicated and
complex. But this is not just for the sake of a nicer API. This fixes
actual issues explained above.
Also, get rid of GAsyncResult to track information about the pending request.
Instead, allocate our own handle structure, which ends up being nicer
because it's strongly typed and has exactly the properties that are
useful to track the request. Also, it gets rid of the awkward
_finish() API by passing the relevant arguments to the callback
directly.
2018-01-05 17:46:49 +01:00
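To make the calling convention concrete, a hypothetical caller-side sketch. Neither the exact signature of NMConnectivityCheckCallback nor the argument list of the start function is shown in this hunk, so both are assumptions; the sketch only illustrates the semantics described above (callback invoked exactly once, cancellation optional and reported via NM_CONNECTIVITY_CANCELLED):

static void
concheck_done_cb(NMConnectivity            *connectivity,
                 NMConnectivityCheckHandle *handle,
                 NMConnectivityState        state,
                 gpointer                   user_data)
{
    /* invoked exactly once, and never synchronously from the start call.
     * Cancellation arrives as state == NM_CONNECTIVITY_CANCELLED; there is
     * no GError argument anymore. (Exact parameter list assumed.) */
}

    /* starting a check; keeping the handle is optional, and the pending
     * request does not keep @connectivity alive. */
    handle = nm_connectivity_check_start(connectivity,
                                         /* ... per-check arguments elided ... */
                                         concheck_done_cb,
                                         user_data);

    /* optionally cancel: at most once, and never after the callback ran
     * (also not from within the callback itself). */
    nm_connectivity_check_cancel(handle);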
|
|
|
CURLMcode ret;
|
|
|
|
|
int running_handles;
|
2018-07-24 15:24:50 +02:00
|
|
|
gboolean success = TRUE;
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2018-01-05 17:46:49 +01:00
|
|
|
ret = curl_multi_socket_action(mhandle, sockfd, ev_bitmask, &running_handles);
|
2018-07-24 15:24:50 +02:00
|
|
|
if (ret != CURLM_OK) {
|
2019-11-14 14:26:17 +01:00
|
|
|
_LOGD("connectivity check failed: (%d) %s", ret, curl_multi_strerror(ret));
|
2018-07-24 15:24:50 +02:00
|
|
|
success = FALSE;
|
|
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2016-04-04 18:23:13 +02:00
|
|
|
while ((msg = curl_multi_info_read(mhandle, &m_left))) {
|
2019-01-14 14:38:10 +01:00
|
|
|
const char *response;
|
2019-11-14 14:26:17 +01:00
|
|
|
CURLcode eret;
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2016-04-04 18:23:13 +02:00
|
|
|
if (msg->msg != CURLMSG_DONE)
|
|
|
|
|
continue;
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2016-04-04 18:23:13 +02:00
|
|
|
/* Here we have completed a session. Check easy session result. */
|
2017-05-12 09:37:42 +02:00
|
|
|
eret = curl_easy_getinfo(msg->easy_handle, CURLINFO_PRIVATE, (char **) &cb_data);
|
2016-04-04 18:23:13 +02:00
|
|
|
if (eret != CURLE_OK) {
|
2018-07-03 14:09:01 +02:00
|
|
|
_LOGD("curl cannot extract cb_data for easy handle, skipping msg: (%d) %s",
|
|
|
|
|
eret,
|
|
|
|
|
curl_easy_strerror(eret));
|
2018-07-24 15:24:50 +02:00
|
|
|
success = FALSE;
|
2016-04-04 18:23:13 +02:00
|
|
|
continue;
|
|
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2018-09-17 11:08:15 +02:00
|
|
|
nm_assert(cb_data);
|
|
|
|
|
nm_assert(NM_IS_CONNECTIVITY(cb_data->self));
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2018-09-17 11:08:15 +02:00
|
|
|
if (cb_data->completed_state != NM_CONNECTIVITY_UNKNOWN) {
|
|
|
|
|
/* callback was already invoked earlier. Nothing to do. */
|
|
|
|
|
continue;
|
|
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2018-09-17 11:08:15 +02:00
|
|
|
if (msg->data.result != CURLE_OK) {
|
|
|
|
|
cb_data_queue_completed(cb_data,
|
|
|
|
|
NM_CONNECTIVITY_LIMITED,
|
|
|
|
|
NULL,
|
2018-07-03 14:09:01 +02:00
|
|
|
g_strdup_printf("check failed: (%d) %s",
|
|
|
|
|
msg->data.result,
|
|
|
|
|
curl_easy_strerror(msg->data.result)));
|
2019-01-14 14:38:10 +01:00
|
|
|
continue;
|
|
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2019-01-14 14:38:10 +01:00
|
|
|
response = _con_config_get_response(cb_data->concheck.con_config);
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2019-01-14 14:38:10 +01:00
|
|
|
if (response[0] == '\0'
|
|
|
|
|
&& (curl_easy_getinfo(msg->easy_handle, CURLINFO_RESPONSE_CODE, &response_code)
|
|
|
|
|
== CURLE_OK)) {
|
|
|
|
|
if (response_code == 204) {
|
|
|
|
|
/* We expected an empty response, and we got a 204 response code (no content).
|
|
|
|
|
* We may or may not have received any content (we would ignore it).
|
|
|
|
|
* Anyway, the response_code 204 means we are good. */
|
|
|
|
|
cb_data_queue_completed(cb_data,
|
|
|
|
|
NM_CONNECTIVITY_FULL,
|
|
|
|
|
"no content, as expected",
|
|
|
|
|
NULL);
|
|
|
|
|
continue;
|
|
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2019-01-14 14:40:12 +01:00
|
|
|
if (response_code == 200 && cb_data->concheck.response_good_cnt == 0) {
|
2019-01-14 14:38:10 +01:00
|
|
|
/* we expected no response, and indeed we got an empty reply (with status code 200) */
|
|
|
|
|
cb_data_queue_completed(cb_data,
|
|
|
|
|
NM_CONNECTIVITY_FULL,
|
|
|
|
|
"empty response, as expected",
|
|
|
|
|
NULL);
|
|
|
|
|
continue;
|
|
|
|
|
}
|
2016-04-04 18:23:13 +02:00
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2019-01-14 14:38:10 +01:00
|
|
|
/* If we get here, it means that easy_write_cb() didn't read enough
|
|
|
|
|
* bytes to be able to do a match, or that we were asking for no content
|
|
|
|
|
* (204 response code) and we actually got some. Either way, that is
|
|
|
|
|
* an indication of a captive portal */
|
|
|
|
|
cb_data_queue_completed(cb_data, NM_CONNECTIVITY_PORTAL, "unexpected short response", NULL);
|
2016-04-04 18:23:13 +02:00
|
|
|
}
|
2018-07-24 15:24:50 +02:00
|
|
|
|
|
|
|
|
/* if we return a failure, we don't know what went wrong. It's likely serious, because
|
|
|
|
|
* a failure here is not expected. Return FALSE, so that we stop polling the file descriptor.
|
|
|
|
|
* Worst case, this leaves the pending connectivity check unhandled, until our regular
|
|
|
|
|
* time-out kicks in. */
|
|
|
|
|
return success;
|
2016-04-04 18:23:13 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
static gboolean
|
2018-07-24 15:02:33 +02:00
|
|
|
_con_curl_timeout_cb(gpointer user_data)
|
2016-04-04 18:23:13 +02:00
|
|
|
{
|
2018-07-03 20:23:06 +02:00
|
|
|
NMConnectivityCheckHandle *cb_data = user_data;
|
2016-04-04 18:23:13 +02:00
|
|
|
|
2018-07-03 20:23:06 +02:00
|
|
|
_con_curl_check_connectivity(cb_data->concheck.curl_mhandle, CURL_SOCKET_TIMEOUT, 0);
|
|
|
|
|
_complete_queued(cb_data->self);
|
2019-11-13 17:57:38 +01:00
|
|
|
return G_SOURCE_CONTINUE;
|
2016-04-04 18:23:13 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
static int
|
2019-12-13 16:54:30 +01:00
|
|
|
multi_timer_cb(CURLM *multi, long timeout_msec, void *userdata)
|
2016-04-04 18:23:13 +02:00
|
|
|
{
|
2018-07-03 20:23:06 +02:00
|
|
|
NMConnectivityCheckHandle *cb_data = userdata;
|
2016-04-04 18:23:13 +02:00
|
|
|
|
2018-07-03 20:23:06 +02:00
|
|
|
nm_clear_g_source(&cb_data->concheck.curl_timer);
|
2019-12-13 16:54:30 +01:00
|
|
|
if (timeout_msec != -1)
|
|
|
|
|
cb_data->concheck.curl_timer = g_timeout_add(timeout_msec, _con_curl_timeout_cb, cb_data);
|
2016-04-04 18:23:13 +02:00
|
|
|
return 0;
|
|
|
|
|
}
|
|
|
|
|
|
2018-07-24 15:24:50 +02:00
|
|
|
typedef struct {
|
2018-07-03 20:23:06 +02:00
|
|
|
NMConnectivityCheckHandle *cb_data;
|
2019-11-14 12:30:01 +01:00
|
|
|
|
|
|
|
|
GSource *source;
|
2018-07-24 15:24:50 +02:00
|
|
|
|
|
|
|
|
/* this is a very simplistic weak-pointer. If ConCurlSockData gets
|
|
|
|
|
* destroyed, it will set *destroy_notify to TRUE.
|
|
|
|
|
*
|
|
|
|
|
* _con_curl_socketevent_cb() uses this to detect whether it can
|
|
|
|
|
* safely access @fdp after _con_curl_check_connectivity(). */
|
|
|
|
|
gboolean *destroy_notify;
|
|
|
|
|
|
|
|
|
|
} ConCurlSockData;
|
|
|
|
|
|
2016-04-04 18:23:13 +02:00
|
|
|
static gboolean
|
2019-11-14 12:30:01 +01:00
|
|
|
_con_curl_socketevent_cb(int fd, GIOCondition condition, gpointer user_data)
|
2016-04-04 18:23:13 +02:00
|
|
|
{
|
2021-11-09 13:28:54 +01:00
|
|
|
ConCurlSockData *fdp = user_data;
|
2018-07-03 20:23:06 +02:00
|
|
|
NMConnectivityCheckHandle *cb_data = fdp->cb_data;
|
2016-04-04 18:23:13 +02:00
|
|
|
int action = 0;
|
2018-07-24 15:24:50 +02:00
|
|
|
gboolean fdp_destroyed = FALSE;
|
|
|
|
|
gboolean success;
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2016-04-04 18:23:13 +02:00
|
|
|
if (condition & G_IO_IN)
|
|
|
|
|
action |= CURL_CSELECT_IN;
|
|
|
|
|
if (condition & G_IO_OUT)
|
|
|
|
|
action |= CURL_CSELECT_OUT;
|
2018-01-05 17:46:49 +01:00
|
|
|
if (condition & G_IO_ERR)
|
|
|
|
|
action |= CURL_CSELECT_ERR;
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2018-07-24 15:24:50 +02:00
|
|
|
nm_assert(!fdp->destroy_notify);
|
|
|
|
|
fdp->destroy_notify = &fdp_destroyed;
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2018-07-03 20:23:06 +02:00
|
|
|
success = _con_curl_check_connectivity(cb_data->concheck.curl_mhandle, fd, action);
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2018-07-24 15:24:50 +02:00
|
|
|
if (fdp_destroyed) {
|
|
|
|
|
/* hups. fdp got invalidated during _con_curl_check_connectivity(). That's fine,
|
|
|
|
|
* just don't touch it. */
|
|
|
|
|
} else {
|
|
|
|
|
nm_assert(fdp->destroy_notify == &fdp_destroyed);
|
|
|
|
|
fdp->destroy_notify = NULL;
|
|
|
|
|
if (!success)
|
2019-11-14 12:30:01 +01:00
|
|
|
nm_clear_g_source_inst(&fdp->source);
|
2018-07-24 15:24:50 +02:00
|
|
|
}
|
|
|
|
|
|
2018-07-03 20:23:06 +02:00
|
|
|
_complete_queued(cb_data->self);
|
2018-09-17 11:08:15 +02:00
|
|
|
|
2019-11-14 12:30:01 +01:00
|
|
|
return G_SOURCE_CONTINUE;
|
2018-07-24 15:24:50 +02:00
|
|
|
}
|
2016-04-04 18:23:13 +02:00
|
|
|
|
|
|
|
|
static int
|
2018-07-24 15:34:10 +02:00
|
|
|
multi_socket_cb(CURL *e_handle, curl_socket_t fd, int what, void *userdata, void *socketp)
|
2016-04-04 18:23:13 +02:00
|
|
|
{
|
2018-07-03 20:23:06 +02:00
|
|
|
NMConnectivityCheckHandle *cb_data = userdata;
|
2021-11-09 13:28:54 +01:00
|
|
|
ConCurlSockData *fdp = socketp;
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2018-07-24 15:36:19 +02:00
|
|
|
(void) _NM_ENSURE_TYPE(int, fd);
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2016-04-04 18:23:13 +02:00
|
|
|
if (what == CURL_POLL_REMOVE) {
|
|
|
|
|
if (fdp) {
|
2018-07-24 15:24:50 +02:00
|
|
|
if (fdp->destroy_notify)
|
|
|
|
|
*fdp->destroy_notify = TRUE;
|
2019-11-14 12:30:01 +01:00
|
|
|
nm_clear_g_source_inst(&fdp->source);
|
2018-07-03 20:23:06 +02:00
|
|
|
curl_multi_assign(cb_data->concheck.curl_mhandle, fd, NULL);
|
2018-07-24 15:02:33 +02:00
|
|
|
g_slice_free(ConCurlSockData, fdp);
|
2016-04-04 18:23:13 +02:00
|
|
|
}
|
|
|
|
|
} else {
|
2019-11-14 12:30:01 +01:00
|
|
|
GIOCondition condition;
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2016-04-04 18:23:13 +02:00
|
|
|
if (!fdp) {
|
2019-11-14 12:30:01 +01:00
|
|
|
fdp = g_slice_new(ConCurlSockData);
|
|
|
|
|
*fdp = (ConCurlSockData){
|
|
|
|
|
.cb_data = cb_data,
|
|
|
|
|
};
|
2018-07-03 20:23:06 +02:00
|
|
|
curl_multi_assign(cb_data->concheck.curl_mhandle, fd, fdp);
|
2016-04-04 18:23:13 +02:00
|
|
|
} else
|
2019-11-14 12:30:01 +01:00
|
|
|
nm_clear_g_source_inst(&fdp->source);
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2018-02-12 19:16:16 +01:00
|
|
|
if (what == CURL_POLL_IN)
|
|
|
|
|
condition = G_IO_IN;
|
|
|
|
|
else if (what == CURL_POLL_OUT)
|
|
|
|
|
condition = G_IO_OUT;
|
2018-01-05 17:46:49 +01:00
|
|
|
else if (what == CURL_POLL_INOUT)
|
2018-02-12 19:16:16 +01:00
|
|
|
condition = G_IO_IN | G_IO_OUT;
|
2019-11-14 12:30:01 +01:00
|
|
|
else
|
|
|
|
|
condition = 0;
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2021-06-24 11:39:27 +02:00
|
|
|
if (condition)
|
|
|
|
|
fdp->source = nm_g_unix_fd_add_source(fd, condition, _con_curl_socketevent_cb, fdp);
|
2016-04-04 18:23:13 +02:00
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2017-03-22 18:48:58 +00:00
|
|
|
return CURLM_OK;
|
2016-04-04 18:23:13 +02:00
|
|
|
}
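For orientation, multi_timer_cb() and multi_socket_cb() are the two entry points of libcurl's multi-socket API; the registration on the multi handle happens where the handle is set up, outside of this hunk. A minimal sketch of that set-up, using only documented libcurl options (the surrounding wiring is an assumption):

    /* libcurl reports which fds it wants watched ... */
    curl_multi_setopt(cb_data->concheck.curl_mhandle, CURLMOPT_SOCKETFUNCTION, multi_socket_cb);
    curl_multi_setopt(cb_data->concheck.curl_mhandle, CURLMOPT_SOCKETDATA, cb_data);

    /* ... and when to kick it even if no fd became ready. */
    curl_multi_setopt(cb_data->concheck.curl_mhandle, CURLMOPT_TIMERFUNCTION, multi_timer_cb);
    curl_multi_setopt(cb_data->concheck.curl_mhandle, CURLMOPT_TIMERDATA, cb_data);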
|
|
|
|
|
|
|
|
|
|
static size_t
|
|
|
|
|
easy_header_cb(char *buffer, size_t size, size_t nitems, void *userdata)
|
|
|
|
|
{
|
2018-01-05 16:02:05 +01:00
|
|
|
NMConnectivityCheckHandle *cb_data = userdata;
|
2016-04-04 18:23:13 +02:00
|
|
|
size_t len = size * nitems;
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2018-09-17 11:08:15 +02:00
|
|
|
if (cb_data->completed_state != NM_CONNECTIVITY_UNKNOWN) {
|
|
|
|
|
/* already completed. */
|
|
|
|
|
return 0;
|
|
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2016-04-04 18:23:13 +02:00
|
|
|
if (len >= sizeof(HEADER_STATUS_ONLINE) - 1
|
|
|
|
|
&& !g_ascii_strncasecmp(buffer, HEADER_STATUS_ONLINE, sizeof(HEADER_STATUS_ONLINE) - 1)) {
|
2018-09-17 11:08:15 +02:00
|
|
|
cb_data_queue_completed(cb_data, NM_CONNECTIVITY_FULL, "status header found", NULL);
|
2017-03-22 15:28:06 +01:00
|
|
|
return 0;
|
2016-04-04 18:23:13 +02:00
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2016-04-04 18:23:13 +02:00
|
|
|
return len;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
static size_t
|
|
|
|
|
easy_write_cb(void *buffer, size_t size, size_t nmemb, void *userdata)
|
|
|
|
|
{
|
2018-01-05 16:02:05 +01:00
|
|
|
NMConnectivityCheckHandle *cb_data = userdata;
|
2016-04-04 18:23:13 +02:00
|
|
|
size_t len = size * nmemb;
|
2019-01-14 14:40:12 +01:00
|
|
|
size_t response_len;
|
|
|
|
|
size_t check_len;
|
2021-11-09 13:28:54 +01:00
|
|
|
const char *response;
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2018-09-17 11:08:15 +02:00
|
|
|
if (cb_data->completed_state != NM_CONNECTIVITY_UNKNOWN) {
|
|
|
|
|
/* already completed. */
|
|
|
|
|
return 0;
|
|
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2019-01-14 14:38:10 +01:00
|
|
|
if (len == 0) {
|
|
|
|
|
/* no data. That can happen, it's fine. */
|
|
|
|
|
return len;
|
|
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2020-09-23 12:43:47 +02:00
|
|
|
response = _con_config_get_response(cb_data->concheck.con_config);
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2019-01-14 14:38:10 +01:00
|
|
|
if (response[0] == '\0') {
|
|
|
|
|
/* no response expected. We are however graceful and accept any
|
|
|
|
|
* extra response that we might receive. We determine the empty
|
|
|
|
|
* response based on the status code 204.
|
|
|
|
|
*
|
|
|
|
|
* Continue receiving... */
|
2019-01-14 14:40:12 +01:00
|
|
|
cb_data->concheck.response_good_cnt += len;
|
|
|
|
|
|
2021-05-03 21:48:58 +02:00
|
|
|
if (cb_data->concheck.response_good_cnt > (gsize) (100 * 1024)) {
|
2019-01-14 14:40:12 +01:00
|
|
|
/* we expect an empty response. We accept either
|
|
|
|
|
* 1) status code 204 and any response
|
2019-03-11 12:00:32 +01:00
|
|
|
* 2) status code 200 and an empty response.
|
2019-01-14 14:40:12 +01:00
|
|
|
*
|
|
|
|
|
* Here, we want to continue receiving data, to see whether we have
|
|
|
|
|
* case 1). Arguably, the server shouldn't send us 204 with a non-empty
|
|
|
|
|
* response, but we accept that also with a non-empty response, so
|
|
|
|
|
* keep receiving.
|
|
|
|
|
*
|
2019-03-11 12:00:32 +01:00
|
|
|
* However, if we get an excessive amount of data, we put a stop to it
|
2019-01-14 14:40:12 +01:00
|
|
|
* and fail. */
|
2018-09-17 11:08:15 +02:00
|
|
|
cb_data_queue_completed(cb_data,
|
|
|
|
|
NM_CONNECTIVITY_PORTAL,
|
2019-01-14 14:40:12 +01:00
|
|
|
"unexpected non-empty response",
|
2018-09-17 11:08:15 +02:00
|
|
|
NULL);
|
2019-01-14 14:40:12 +01:00
|
|
|
return 0;
|
2017-03-22 15:36:50 +01:00
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2019-01-14 14:40:12 +01:00
|
|
|
return len;
|
|
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2019-01-14 14:40:12 +01:00
|
|
|
nm_assert(cb_data->concheck.response_good_cnt < strlen(response));
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2019-01-14 14:40:12 +01:00
|
|
|
response_len = strlen(response);
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2019-01-14 14:40:12 +01:00
|
|
|
check_len = NM_MIN(len, response_len - cb_data->concheck.response_good_cnt);
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2019-01-14 14:40:12 +01:00
|
|
|
if (strncmp(&response[cb_data->concheck.response_good_cnt], buffer, check_len) != 0) {
|
|
|
|
|
cb_data_queue_completed(cb_data, NM_CONNECTIVITY_PORTAL, "unexpected response", NULL);
|
|
|
|
|
return 0;
|
|
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2019-01-14 14:40:12 +01:00
|
|
|
cb_data->concheck.response_good_cnt += len;
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2019-01-14 14:40:12 +01:00
|
|
|
if (cb_data->concheck.response_good_cnt >= response_len) {
|
|
|
|
|
/* We already have enough data, and it matched. */
|
|
|
|
|
cb_data_queue_completed(cb_data, NM_CONNECTIVITY_FULL, "expected response", NULL);
|
2017-03-22 15:36:50 +01:00
|
|
|
return 0;
|
|
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2016-04-04 18:23:13 +02:00
|
|
|
return len;
|
|
|
|
|
}
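Similarly, easy_header_cb() and easy_write_cb() are plain CURLOPT_HEADERFUNCTION/CURLOPT_WRITEFUNCTION callbacks on the per-check easy handle, and the CURLINFO_PRIVATE lookup in _con_curl_check_connectivity() above relies on CURLOPT_PRIVATE being set to the same cb_data. A minimal sketch of that easy-handle set-up — again an assumption about code outside this hunk, using only documented libcurl options; @uri stands for the configured check URI:

    curl_easy_setopt(ehandle, CURLOPT_URL, uri);
    curl_easy_setopt(ehandle, CURLOPT_HEADERFUNCTION, easy_header_cb);
    curl_easy_setopt(ehandle, CURLOPT_HEADERDATA, cb_data);
    curl_easy_setopt(ehandle, CURLOPT_WRITEFUNCTION, easy_write_cb);
    curl_easy_setopt(ehandle, CURLOPT_WRITEDATA, cb_data);
    curl_easy_setopt(ehandle, CURLOPT_PRIVATE, cb_data);
    curl_multi_add_handle(cb_data->concheck.curl_mhandle, ehandle);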
|
2017-03-22 15:44:13 +01:00
|
|
|
|
|
|
|
|
static gboolean
|
2018-01-05 17:46:49 +01:00
|
|
|
_timeout_cb(gpointer user_data)
|
2017-03-22 15:44:13 +01:00
|
|
|
{
|
2018-01-05 16:02:05 +01:00
|
|
|
NMConnectivityCheckHandle *cb_data = user_data;
|
2017-03-22 15:44:13 +01:00
|
|
|
|
connectivity: rework async connectivity check requests
An asynchronous request should either be cancellable or not keep
the target object alive. Preferably both.
Otherwise, it is impossible to do a controlled shutdown when terminating
NetworkManager. Currently, when NetworkManager is about to terminate,
it just quits the mainloop and essentially leaks everything. That is a
bug. If we ever want to fix that, every asynchronous request must be
cancellable in a controlled way (or it must not prevent objects from
getting disposed, where disposing the object automatically cancels the
callback).
Rework the asynchronous request for connectivity check to
- return a handle that can be used to cancel the operation.
Cancelling is optional. The caller may choose to ignore the handle
because the asynchronous operation does not keep the target object
alive. That means, it is still possible to shutdown, by everybody
giving up their reference to the target object. In which case the
callback will be invoked during dispose() of the target object.
- also, the callback will always be invoked exactly once, and never
synchronously from within the asynchronous start call. But during
cancel(), the callback is invoked synchronously from within cancel().
Note that it's only allowed to cancel an action at most once, and
never after the callback is invoked (also not from within the callback
itself).
- also, NMConnectivity already supports a fake handler, in case
connectivity check is disabled via configuration. Hence, reuse
the same code paths also when compiling without --enable-concheck.
That means, instead of having #if WITH_CONCHECK at various callers,
move them into NMConnectivity. The downside is, that if you build
without concheck, there is a small overhead compared to before. The
upside is, we reuse the same code paths when compiling with or without
concheck.
- also, the patch synchronizes the connecitivty states. For example,
previously `nmcli networking connectivity check` would schedule
requests in parallel, and return the accumulated result of the individual
requests.
However, the global connectivity state of the manager might have have
been the same as the answer to the explicit connecitivity check,
because while the answer for the manual check is waiting for all
pending checks to complete, the global connectivity state could
already change. That is just wrong. There are not multiple global
connectivity states at the same time, there is just one. A manual
connectivity check should have the meaning of ensure that the global
state is up to date, but it still should return the global
connectivity state -- not the answers for several connectivity checks
issued in parallel.
This is related to commit b799de281bc01073c31dd2c86171b29c8132441c
(libnm: update property in the manager after connectivity check),
which tries to address a similar problem client side.
Similarly, each device has a connectivity state. While there might
be several connectivity checks per device pending, whenever a check
completes, it can update the per-device state (and return that device
state as result), but the immediate answer of the individual check
might not matter. This is especially the case, when a later request
returns earlier and obsoletes all earlier requests. In that case,
earlier requests return with the result of the currend devices
connectivity state.
This patch cleans up the internal API and gives a better defined behavior
to the user (thus, the simple API which simplifies implementation for the
caller). However, the implementation of getting this API right and properly
handle cancel and destruction of the target object is more complicated and
complex. But this but is not just for the sake of a nicer API. This fixes
actual issues explained above.
Also, get rid of GAsyncResult to track information about the pending request.
Instead, allocate our own handle structure, which ends up to be nicer
because it's strongly typed and has exactly the properties that are
useful to track the request. Also, it gets rid of the awkward
_finish() API by passing the relevant arguments to the callback
directly.
2018-01-05 17:46:49 +01:00
|
|
|
nm_assert(NM_IS_CONNECTIVITY(cb_data->self));
|
connectivity: fix crash when removing easy-handle from curl callback
libcurl does not allow removing easy-handles from within a curl
callback.
That was already partly avoided for one handle alone. That is, when
a handle completed inside a libcurl callback, it would only invoke the
callback, but not yet delete it. However, that is not enough, because
from within a callback another handle can be cancelled, leading to
the removal of (the other) handle and a crash:
==24572== at 0x40319AB: free (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==24572== by 0x52DDAE5: Curl_close (url.c:392)
==24572== by 0x52EC02C: curl_easy_cleanup (easy.c:825)
==24572== by 0x5FDCD2: cb_data_free (nm-connectivity.c:215)
==24572== by 0x5FF6DE: nm_connectivity_check_cancel (nm-connectivity.c:585)
==24572== by 0x55F7F9: concheck_handle_complete (nm-device.c:2601)
==24572== by 0x574C12: concheck_cb (nm-device.c:2725)
==24572== by 0x5FD887: cb_data_invoke_callback (nm-connectivity.c:167)
==24572== by 0x5FD959: easy_header_cb (nm-connectivity.c:435)
==24572== by 0x52D73CB: chop_write (sendf.c:612)
==24572== by 0x52D73CB: Curl_client_write (sendf.c:668)
==24572== by 0x52D54ED: Curl_http_readwrite_headers (http.c:3904)
==24572== by 0x52E9EA7: readwrite_data (transfer.c:548)
==24572== by 0x52E9EA7: Curl_readwrite (transfer.c:1161)
==24572== by 0x52F4193: multi_runsingle (multi.c:1915)
==24572== by 0x52F5531: multi_socket (multi.c:2607)
==24572== by 0x52F5804: curl_multi_socket_action (multi.c:2771)
Fix that, by never invoking any callbacks when we are inside a libcurl
callback. Instead, the handle is marked for completion and queued. Later,
we complete all queue handles separately.
While at it, drop the @error argument from NMConnectivityCheckCallback.
It was only used to signal cancellation. Let's instead signal that via
status NM_CONNECTIVITY_CANCELLED.
https://bugzilla.gnome.org/show_bug.cgi?id=797136
https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/1792745
https://bugzilla.opensuse.org/show_bug.cgi?id=1107197
https://github.com/NetworkManager/NetworkManager/pull/207
Fixes: d8a31794c8b9db243076ba0c24dfe6e496b78697
2018-09-17 11:08:15 +02:00
|
|
|
nm_assert(c_list_contains(&NM_CONNECTIVITY_GET_PRIVATE(cb_data->self)->handles_lst_head,
|
|
|
|
|
&cb_data->handles_lst));
|
2017-03-22 15:44:13 +01:00
|
|
|
|
connectivity: fix crash when removing easy-handle from curl callback
libcurl does not allow removing easy-handles from within a curl
callback.
That was already partly avoided for one handle alone. That is, when
a handle completed inside a libcurl callback, it would only invoke the
callback, but not yet delete it. However, that is not enough, because
from within a callback another handle can be cancelled, leading to
the removal of (the other) handle and a crash:
==24572== at 0x40319AB: free (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==24572== by 0x52DDAE5: Curl_close (url.c:392)
==24572== by 0x52EC02C: curl_easy_cleanup (easy.c:825)
==24572== by 0x5FDCD2: cb_data_free (nm-connectivity.c:215)
==24572== by 0x5FF6DE: nm_connectivity_check_cancel (nm-connectivity.c:585)
==24572== by 0x55F7F9: concheck_handle_complete (nm-device.c:2601)
==24572== by 0x574C12: concheck_cb (nm-device.c:2725)
==24572== by 0x5FD887: cb_data_invoke_callback (nm-connectivity.c:167)
==24572== by 0x5FD959: easy_header_cb (nm-connectivity.c:435)
==24572== by 0x52D73CB: chop_write (sendf.c:612)
==24572== by 0x52D73CB: Curl_client_write (sendf.c:668)
==24572== by 0x52D54ED: Curl_http_readwrite_headers (http.c:3904)
==24572== by 0x52E9EA7: readwrite_data (transfer.c:548)
==24572== by 0x52E9EA7: Curl_readwrite (transfer.c:1161)
==24572== by 0x52F4193: multi_runsingle (multi.c:1915)
==24572== by 0x52F5531: multi_socket (multi.c:2607)
==24572== by 0x52F5804: curl_multi_socket_action (multi.c:2771)
Fix that, by never invoking any callbacks when we are inside a libcurl
callback. Instead, the handle is marked for completion and queued. Later,
we complete all queue handles separately.
While at it, drop the @error argument from NMConnectivityCheckCallback.
It was only used to signal cancellation. Let's instead signal that via
status NM_CONNECTIVITY_CANCELLED.
https://bugzilla.gnome.org/show_bug.cgi?id=797136
https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/1792745
https://bugzilla.opensuse.org/show_bug.cgi?id=1107197
https://github.com/NetworkManager/NetworkManager/pull/207
Fixes: d8a31794c8b9db243076ba0c24dfe6e496b78697
2018-09-17 11:08:15 +02:00
|
|
|
cb_data_complete(cb_data, NM_CONNECTIVITY_LIMITED, "timeout");
|
2017-03-22 15:44:13 +01:00
|
|
|
return G_SOURCE_REMOVE;
|
|
|
|
|
}
#endif
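
/* Deferred completion: when a check finishes from inside a libcurl callback,
 * the result is only queued (see cb_data_queue_completed()), because libcurl
 * does not allow removing easy-handles from within its own callbacks and
 * completing a handle there can cancel and free other handles, which
 * previously crashed in curl_easy_cleanup(). This idle handler later performs
 * the actual completion with the queued state and reason. */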
static gboolean
_idle_cb(gpointer user_data)
{
    NMConnectivityCheckHandle *cb_data = user_data;

    nm_assert(NM_IS_CONNECTIVITY(cb_data->self));
    nm_assert(c_list_contains(&NM_CONNECTIVITY_GET_PRIVATE(cb_data->self)->handles_lst_head,
                              &cb_data->handles_lst));
    nm_assert(cb_data->completed_reason);

    nm_clear_g_source_inst(&cb_data->timeout_source);
    cb_data_complete(cb_data, cb_data->completed_state, cb_data->completed_reason);
    return G_SOURCE_CONTINUE;
}
#if WITH_CONCHECK
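
/* Sets up the curl multi/easy handles for one connectivity probe and arms the
 * overall timeout. Note that cb_data->concheck.con_config is a ref-counted
 * snapshot of the configured URI/host/port/response, so a configuration reload
 * does not affect requests that are already in flight. */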
static void
do_curl_request(NMConnectivityCheckHandle *cb_data)
{
    CURLM *mhandle;
    CURL  *ehandle;
    long   resolve;

    mhandle = curl_multi_init();
    if (!mhandle) {
        cb_data_complete(cb_data, NM_CONNECTIVITY_ERROR, "curl error");
        return;
    }

    ehandle = curl_easy_init();
    if (!ehandle) {
        curl_multi_cleanup(mhandle);
        cb_data_complete(cb_data, NM_CONNECTIVITY_ERROR, "curl error");
        return;
    }

    cb_data->concheck.curl_mhandle    = mhandle;
    cb_data->concheck.curl_ehandle    = ehandle;
    cb_data->concheck.request_headers = curl_slist_append(NULL, "Connection: close");
    cb_data->timeout_source           = nm_g_timeout_add_seconds_source(20, _timeout_cb, cb_data);

    curl_multi_setopt(mhandle, CURLMOPT_SOCKETFUNCTION, multi_socket_cb);
    curl_multi_setopt(mhandle, CURLMOPT_SOCKETDATA, cb_data);
    curl_multi_setopt(mhandle, CURLMOPT_TIMERFUNCTION, multi_timer_cb);
    curl_multi_setopt(mhandle, CURLMOPT_TIMERDATA, cb_data);

    switch (cb_data->addr_family) {
    case AF_INET:
        resolve = CURL_IPRESOLVE_V4;
        break;
    case AF_INET6:
        resolve = CURL_IPRESOLVE_V6;
        break;
    case AF_UNSPEC:
        resolve = CURL_IPRESOLVE_WHATEVER;
        break;
    default:
        resolve = CURL_IPRESOLVE_WHATEVER;
        g_warn_if_reached();
    }

    curl_easy_setopt(ehandle, CURLOPT_URL, cb_data->concheck.con_config->uri);
    curl_easy_setopt(ehandle, CURLOPT_WRITEFUNCTION, easy_write_cb);
    curl_easy_setopt(ehandle, CURLOPT_WRITEDATA, cb_data);
    curl_easy_setopt(ehandle, CURLOPT_HEADERFUNCTION, easy_header_cb);
    curl_easy_setopt(ehandle, CURLOPT_HEADERDATA, cb_data);
    curl_easy_setopt(ehandle, CURLOPT_PRIVATE, cb_data);
    curl_easy_setopt(ehandle, CURLOPT_HTTPHEADER, cb_data->concheck.request_headers);
    curl_easy_setopt(ehandle, CURLOPT_INTERFACE, cb_data->ifspec);
    curl_easy_setopt(ehandle, CURLOPT_RESOLVE, cb_data->concheck.hosts);
    curl_easy_setopt(ehandle, CURLOPT_IPRESOLVE, resolve);
    curl_easy_setopt(ehandle, CURLOPT_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS);

    curl_multi_add_handle(mhandle, ehandle);
}
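
/* Completion callback of the asynchronous systemd-resolved D-Bus name lookup.
 * Each returned address that matches the requested address family is turned
 * into a "HOST:PORT:ADDRESS" entry for CURLOPT_RESOLVE, so curl connects to the
 * pre-resolved addresses. If the lookup failed, curl falls back to its own
 * name resolution. */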
static void
resolve_cb(GObject *object, GAsyncResult *res, gpointer user_data)
{
    NMConnectivityCheckHandle *cb_data;
    gs_unref_variant GVariant *result    = NULL;
    gs_unref_variant GVariant *addresses = NULL;
    gsize                      no_addresses;
    int                        ifindex;
    int                        addr_family;
    gsize                      len = 0;
    gsize                      i;
    gs_free_error GError      *error = NULL;

    result = g_dbus_connection_call_finish(G_DBUS_CONNECTION(object), res, &error);
    if (g_error_matches(error, G_IO_ERROR, G_IO_ERROR_CANCELLED))
        return;

    cb_data = user_data;

    g_clear_object(&cb_data->concheck.resolve_cancellable);

    if (!result) {
        /* Never mind. Just let curl do its own resolving. */
        _LOG2D("can't resolve a name via systemd-resolved: %s", error->message);
        do_curl_request(cb_data);
        return;
    }

    addresses    = g_variant_get_child_value(result, 0);
    no_addresses = g_variant_n_children(addresses);

    for (i = 0; i < no_addresses; i++) {
        gs_unref_variant GVariant *address = NULL;
        char                       str_addr[NM_UTILS_INET_ADDRSTRLEN];
        gs_free char              *host_entry = NULL;
        const guchar              *address_buf;

        g_variant_get_child(addresses, i, "(ii@ay)", &ifindex, &addr_family, &address);

        if (!NM_IN_SET(addr_family, AF_INET, AF_INET6))
            continue;

        if (cb_data->addr_family != AF_UNSPEC && cb_data->addr_family != addr_family)
            continue;

        address_buf = g_variant_get_fixed_array(address, &len, 1);
        if (len != nm_utils_addr_family_to_size(addr_family))
            continue;

        host_entry = g_strdup_printf("%s:%s:%s",
                                     cb_data->concheck.con_config->host,
                                     cb_data->concheck.con_config->port ?: "80",
                                     nm_utils_inet_ntop(addr_family, address_buf, str_addr));

        cb_data->concheck.hosts = curl_slist_append(cb_data->concheck.hosts, host_entry);
        _LOG2T("adding '%s' to curl resolve list", host_entry);
    }

    do_curl_request(cb_data);
}
#endif
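
/* Input flag for systemd-resolved's resolve D-Bus calls; restricts the lookup
 * to classic unicast DNS (no LLMNR/mDNS). */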
#define SD_RESOLVED_DNS ((guint64) (1LL << 0))
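
/* Cheap local sanity check before (or instead of) an HTTP probe: without
 * carrier, an address and a suitable route on the interface, the result is
 * already known. Returns NM_CONNECTIVITY_NONE or NM_CONNECTIVITY_LIMITED for
 * such cases and NM_CONNECTIVITY_UNKNOWN when an actual probe is required. */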
static NMConnectivityState
check_platform_config(NMConnectivity *self,
                      NMPlatform     *platform,
                      int             ifindex,
                      int             addr_family,
                      const char    **reason)
{
    const NMDedupMultiHeadEntry *addresses;
    const NMDedupMultiHeadEntry *routes;

    if (!nm_platform_link_is_connected(platform, ifindex)) {
        NM_SET_OUT(reason, "no carrier");
        return NM_CONNECTIVITY_NONE;
    }

    addresses = nm_platform_lookup_object(platform,
                                          addr_family == AF_INET ? NMP_OBJECT_TYPE_IP4_ADDRESS
                                                                 : NMP_OBJECT_TYPE_IP6_ADDRESS,
                                          ifindex);
    if (!addresses || addresses->len == 0) {
        NM_SET_OUT(reason, "no IP address configured");
        return NM_CONNECTIVITY_NONE;
    }

    routes = nm_platform_lookup_object(platform,
                                       addr_family == AF_INET ? NMP_OBJECT_TYPE_IP4_ROUTE
                                                              : NMP_OBJECT_TYPE_IP6_ROUTE,
                                       ifindex);
    if (!routes || routes->len == 0) {
        NM_SET_OUT(reason, "no IP route configured");
        return NM_CONNECTIVITY_NONE;
    }

    switch (addr_family) {
    case AF_INET:
    {
        const NMPlatformIP4Route *route;
        gboolean                  found_global = FALSE;
        NMDedupMultiIter          iter;
        const NMPObject          *plobj;

        /* For IPv4 also require a route with global scope. */
        nmp_cache_iter_for_each (&iter, routes, &plobj) {
            route = NMP_OBJECT_CAST_IP4_ROUTE(plobj);
            if (nm_platform_route_scope_inv(route->scope_inv) == RT_SCOPE_UNIVERSE) {
                found_global = TRUE;
                break;
            }
        }

        if (!found_global) {
            NM_SET_OUT(reason, "no global route configured");
            return NM_CONNECTIVITY_LIMITED;
        }
        break;
    }
    case AF_INET6:
        /* Route scopes aren't meaningful for IPv6 so any route is fine. */
        break;
    default:
        g_return_val_if_reached(FALSE);
    }

    NM_SET_OUT(reason, NULL);
    return NM_CONNECTIVITY_UNKNOWN;
}
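
/* Public entry point: schedule an asynchronous connectivity check for the given
 * address family and interface. The returned handle can optionally be used with
 * nm_connectivity_check_cancel(); the callback is invoked exactly once and never
 * synchronously from within this call. */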
NMConnectivityCheckHandle *
nm_connectivity_check_start(NMConnectivity             *self,
                            int                         addr_family,
                            NMPlatform                 *platform,
                            int                         ifindex,
                            const char                 *iface,
                            NMConnectivityCheckCallback callback,
                            gpointer                    user_data)
{
    NMConnectivityPrivate     *priv;
    NMConnectivityCheckHandle *cb_data;
    static guint64             request_counter = 0;

    g_return_val_if_fail(NM_IS_CONNECTIVITY(self), NULL);
    g_return_val_if_fail(callback, NULL);
    nm_assert(!platform || NM_IS_PLATFORM(platform));
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2018-01-05 17:46:49 +01:00
|
|
|
priv = NM_CONNECTIVITY_GET_PRIVATE(self);
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2018-01-05 17:46:49 +01:00
|
|
|
cb_data = g_slice_new0(NMConnectivityCheckHandle);
|
|
|
|
|
cb_data->self = self;
|
2018-12-01 15:01:17 +01:00
|
|
|
cb_data->request_counter = ++request_counter;
|
2018-01-05 17:46:49 +01:00
|
|
|
c_list_link_tail(&priv->handles_lst_head, &cb_data->handles_lst);
|
|
|
|
|
cb_data->callback = callback;
|
|
|
|
|
cb_data->user_data = user_data;
|
connectivity: fix crash when removing easy-handle from curl callback
libcurl does not allow removing easy-handles from within a curl
callback.
That was already partly avoided for one handle alone. That is, when
a handle completed inside a libcurl callback, it would only invoke the
callback, but not yet delete it. However, that is not enough, because
from within a callback another handle can be cancelled, leading to
the removal of (the other) handle and a crash:
==24572== at 0x40319AB: free (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==24572== by 0x52DDAE5: Curl_close (url.c:392)
==24572== by 0x52EC02C: curl_easy_cleanup (easy.c:825)
==24572== by 0x5FDCD2: cb_data_free (nm-connectivity.c:215)
==24572== by 0x5FF6DE: nm_connectivity_check_cancel (nm-connectivity.c:585)
==24572== by 0x55F7F9: concheck_handle_complete (nm-device.c:2601)
==24572== by 0x574C12: concheck_cb (nm-device.c:2725)
==24572== by 0x5FD887: cb_data_invoke_callback (nm-connectivity.c:167)
==24572== by 0x5FD959: easy_header_cb (nm-connectivity.c:435)
==24572== by 0x52D73CB: chop_write (sendf.c:612)
==24572== by 0x52D73CB: Curl_client_write (sendf.c:668)
==24572== by 0x52D54ED: Curl_http_readwrite_headers (http.c:3904)
==24572== by 0x52E9EA7: readwrite_data (transfer.c:548)
==24572== by 0x52E9EA7: Curl_readwrite (transfer.c:1161)
==24572== by 0x52F4193: multi_runsingle (multi.c:1915)
==24572== by 0x52F5531: multi_socket (multi.c:2607)
==24572== by 0x52F5804: curl_multi_socket_action (multi.c:2771)
Fix that by never invoking any callbacks while we are inside a libcurl
callback. Instead, the handle is marked for completion and queued. Later,
we complete all queued handles separately.
While at it, drop the @error argument from NMConnectivityCheckCallback.
It was only used to signal cancellation. Let's instead signal that via
status NM_CONNECTIVITY_CANCELLED.
https://bugzilla.gnome.org/show_bug.cgi?id=797136
https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/1792745
https://bugzilla.opensuse.org/show_bug.cgi?id=1107197
https://github.com/NetworkManager/NetworkManager/pull/207
Fixes: d8a31794c8b9db243076ba0c24dfe6e496b78697
2018-09-17 11:08:15 +02:00
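The deferred completion can be sketched roughly as below. cb_data_queue_completed() is an invented name for illustration (and the real code only defers like this while libcurl is on the stack; otherwise it completes right away), but completed_state, completed_reason, handles_lst and completed_handles_lst_head are fields visible elsewhere in this file, and the private struct is assumed to be the usual NMConnectivityPrivate behind NM_CONNECTIVITY_GET_PRIVATE().

/* Illustrative sketch only: record the result, but neither invoke the user
 * callback nor call curl_easy_cleanup() while inside a libcurl callback.
 * The handle moves to a "completed" list and is dispatched later, once
 * control has returned from curl_multi_socket_action(). */
static void
cb_data_queue_completed(NMConnectivityCheckHandle *cb_data,
                        NMConnectivityState        state,
                        const char                *reason)
{
    NMConnectivityPrivate *priv = NM_CONNECTIVITY_GET_PRIVATE(cb_data->self);

    cb_data->completed_state  = state;
    cb_data->completed_reason = reason;

    c_list_unlink(&cb_data->handles_lst);
    c_list_link_tail(&priv->completed_handles_lst_head, &cb_data->handles_lst);
}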
|
|
|
cb_data->completed_state = NM_CONNECTIVITY_UNKNOWN;
|
2018-07-03 19:20:45 +02:00
|
|
|
cb_data->addr_family = addr_family;
|
2018-01-05 17:46:49 +01:00
|
|
|
if (iface)
|
2017-03-20 13:36:00 +00:00
|
|
|
cb_data->ifspec = g_strdup_printf("if!%s", iface);
|
2018-01-05 17:46:49 +01:00
|
|
|
|
|
|
|
|
#if WITH_CONCHECK
|
connectivity: ensure uri and response stays alive during connectivity check
The settings of NMConnectivity can change any time, by reloading the
configuration.
When reloading the configuration, we don't want to interrupt or cancel
the pending requests; they should just complete with the old settings with
which they started. Note that NMDevice is smart enough that, when a
newer request completes earlier, it invalidates all older, still pending
requests.
Anyway, that means we cannot rely on the value to stay alive. Fix that
by adding a new ref-counted struct for these parameters.
Fixes: 2cec94bacce4a09c0e5ffa241b8d50fd4702dddc
2018-12-01 18:30:41 +01:00
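A plausible shape for that ref-counted struct, inferred from the uri and host fields and the _con_config_ref() call used below; the actual type name and layout in this file may differ.

/* Illustrative sketch: an immutable, ref-counted snapshot of the check
 * configuration. Every pending request holds its own reference, so a
 * configuration reload cannot free the strings under an in-flight check. */
typedef struct {
    int   ref_count;
    char *uri;
    char *host;
    char *port;
    char *response;
} ConConfigSketch;

static ConConfigSketch *
con_config_ref_sketch(ConConfigSketch *con_config)
{
    if (con_config)
        con_config->ref_count++;
    return con_config;
}

static void
con_config_unref_sketch(ConConfigSketch *con_config)
{
    if (con_config && --con_config->ref_count == 0) {
        g_free(con_config->uri);
        g_free(con_config->host);
        g_free(con_config->port);
        g_free(con_config->response);
        g_free(con_config);
    }
}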
|
|
|
|
2019-04-10 21:22:22 +02:00
|
|
|
cb_data->concheck.con_config = _con_config_ref(priv->con_config);
|
2020-09-28 16:03:33 +02:00
|
|
|
|
connectivity: improve warning about URI config change
During a config change notification, we determine a "changed" value,
to know whether things significantly changed.
Also, we want to log a warning about invalid configuration
only when the config actually changed. Previously, when the URI was
invalid, on every reload (SIGHUP) we would log an error message,
even if the configuration did not change. There is no need to
warn multiple times about the same thing.
Keep track of the original URI in priv->uri. Whenever that changed,
we know the user reconfigured something. But also, now the URI might
be set to an invalid value. That means, we need to remember whether
the URI is valid.
Also, log a warning if we fail to parse the host and port. Already
before, such a URI was considered invalid and we would effectively
not do connectivity checking.
2018-12-01 18:01:20 +01:00
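The reload handling described above could look roughly like the sketch below. config_changed_sketch() and parse_host_and_port_sketch() are invented names for illustration (as is the _LOGW()-style warning macro), and NMConnectivityPrivate is assumed to be the type behind NM_CONNECTIVITY_GET_PRIVATE(); priv->uri, priv->uri_valid and priv->enabled are the fields actually consulted just below.

/* Illustrative sketch: warn about a bad URI only when the configured
 * value really changed, and remember whether the URI is usable. */
static void
config_changed_sketch(NMConnectivity *self, const char *new_uri)
{
    NMConnectivityPrivate *priv = NM_CONNECTIVITY_GET_PRIVATE(self);

    if (nm_streq0(priv->uri, new_uri))
        return; /* reload (SIGHUP) without a real change: stay silent */

    g_free(priv->uri);
    priv->uri       = g_strdup(new_uri);
    priv->uri_valid = (new_uri && parse_host_and_port_sketch(new_uri, NULL, NULL));

    if (new_uri && !priv->uri_valid)
        _LOGW("invalid connectivity check URI '%s'", new_uri);
}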
|
|
|
if (iface && ifindex > 0 && priv->enabled && priv->uri_valid) {
|
2018-12-01 15:48:19 +01:00
|
|
|
gboolean has_systemd_resolved;
|
2019-06-17 15:31:59 +02:00
|
|
|
NMConnectivityState state;
|
2021-11-09 13:28:54 +01:00
|
|
|
const char *reason;
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2018-12-01 18:30:41 +01:00
|
|
|
cb_data->concheck.ch_ifindex = ifindex;
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2019-06-18 09:36:41 +02:00
|
|
|
if (platform) {
|
|
|
|
|
state = check_platform_config(self, platform, ifindex, addr_family, &reason);
|
|
|
|
|
nm_assert((state == NM_CONNECTIVITY_UNKNOWN) == !reason);
|
|
|
|
|
if (state != NM_CONNECTIVITY_UNKNOWN) {
|
|
|
|
|
_LOG2D("skip connectivity check due to %s", reason);
|
|
|
|
|
cb_data->completed_state = state;
|
|
|
|
|
cb_data->completed_reason = reason;
|
2022-03-17 10:22:55 +01:00
|
|
|
cb_data->timeout_source = nm_g_idle_add_source(_idle_cb, cb_data);
|
2019-06-18 09:36:41 +02:00
|
|
|
return cb_data;
|
2020-09-28 16:03:33 +02:00
|
|
|
}
|
2019-06-18 09:36:41 +02:00
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2018-12-01 15:48:19 +01:00
|
|
|
/* note that we pick up support for systemd-resolved right away when we need it.
|
|
|
|
|
* We don't need to remember the setting, because we can (cheaply) check anew
|
|
|
|
|
* on each request.
|
|
|
|
|
*
|
|
|
|
|
* Yes, this makes NMConnectivity singleton dependent on NMDnsManager singleton.
|
|
|
|
|
* Well, not really: it makes connectivity-check-start dependent on NMDnsManager
|
2019-04-07 11:34:30 +02:00
|
|
|
* which merely means not starting a connectivity check late during shutdown.
|
|
|
|
|
*
|
|
|
|
|
* NMDnsSystemdResolved tries to D-Bus activate systemd-resolved only once,
|
|
|
|
|
* to not spam syslog with failure messages from dbus-daemon.
|
|
|
|
|
* Note that unless NMDnsSystemdResolved tried and failed to start systemd-resolved,
|
|
|
|
|
* it guesses that systemd-resolved is activatable and returns %TRUE here. That
|
|
|
|
|
* means, while NMDnsSystemdResolved would not try to D-Bus activate systemd-resolved
|
|
|
|
|
* more than once, NMConnectivity might -- until NMDnsSystemdResolved tried itself
|
|
|
|
|
* and noticed that systemd-resolved is not available.
|
|
|
|
|
* This is relatively cumbersome to avoid, because we would have to go through
|
|
|
|
|
* NMDnsSystemdResolved trying to asynchronously start the service, to ensure there
|
|
|
|
|
* is only one attempt to start the service. */
|
2021-05-28 11:30:42 +02:00
|
|
|
has_systemd_resolved = !!nm_dns_manager_get_systemd_resolved(nm_dns_manager_get());
|
2018-12-01 15:48:19 +01:00
|
|
|
|
|
|
|
|
if (has_systemd_resolved) {
|
|
|
|
|
GDBusConnection *dbus_connection;
|
|
|
|
|
|
2019-05-05 09:37:37 +02:00
|
|
|
dbus_connection = NM_MAIN_DBUS_CONNECTION_GET;
|
2018-12-01 15:48:19 +01:00
|
|
|
if (!dbus_connection) {
|
|
|
|
|
/* we have no D-Bus connection? That might happen in configure and quit mode.
|
|
|
|
|
*
|
|
|
|
|
* Anyway, something is very odd; just fail the connectivity check. */
|
|
|
|
|
_LOG2D("start fake request (fail due to no D-Bus connection)");
|
2019-06-18 09:30:35 +02:00
|
|
|
cb_data->completed_state = NM_CONNECTIVITY_ERROR;
|
|
|
|
|
cb_data->completed_reason = "no D-Bus connection";
|
2022-03-17 10:22:55 +01:00
|
|
|
cb_data->timeout_source = nm_g_idle_add_source(_idle_cb, cb_data);
|
2018-12-01 15:48:19 +01:00
|
|
|
return cb_data;
|
|
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2018-12-01 15:48:19 +01:00
|
|
|
cb_data->concheck.resolve_cancellable = g_cancellable_new();
|
2020-09-28 16:03:33 +02:00
|
|
|
|
core/dbus: let NMDBusManager create a D-Bus connection in "configure-and-quit=true" mode
Note that various components (NMFirewallManager, NMAuthManager,
NMConnectivity, etc.pp) all request their own GDBusConnection from
glib's G_BUS_TYPE_SYSTEM singleton.
In the future, let them instead use the D-Bus connection that
NMDBusManager already has.
- NMDBusManager also uses g_bus_get(G_BUS_TYPE_SYSTEM), so in practice this
is just the same GDBusConnection instance.
- if it would not be the same GDBusConnection instance, it would
be more correct/logical to use the one that NMDBusManager uses.
- NMDBusManager already acquired the GDBusConnection synchronously
and it's ready for use. On the other hand, g_bus_get()/g_bus_get_sync()
has the notion that getting the singleton cannot be done without
waiting/blocking. So at least it involves locking or even dispatching
the async reply on D-Bus.
- in "configure-and-quit=initrd" we really don't have D-Bus available.
NMDBusManager should control whether the other components use D-Bus
or not. For example, NMFirewallManager should not ask glib for a
G_BUS_TYPE_SYSTEM singleton only to later find out that it doesn't work.
So if these components would reuse NMDBusManager's GDBusConnection,
then it must have the connection also in regular "configure-and-quit=true"
mode. In this case, we are in late boot and want to do connectivity
checking and talk to firewalld.
2019-05-04 13:43:19 +02:00
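At the call site, the difference this commit describes is small: rather than each component fetching the system bus singleton on its own, the code below reuses the connection NMDBusManager already holds, via NM_MAIN_DBUS_CONNECTION_GET. A rough sketch of the pattern being replaced, for contrast:

/* Old pattern (sketch): each consumer obtained its own reference to the
 * G_BUS_TYPE_SYSTEM singleton; g_bus_get_sync() may block, and a missing
 * bus only shows up when this component first needs it. */
GError          *error           = NULL;
GDBusConnection *dbus_connection = g_bus_get_sync(G_BUS_TYPE_SYSTEM, NULL, &error);

/* New pattern, as used below: take NMDBusManager's already-acquired
 * connection and treat NULL (e.g. configure-and-quit without D-Bus) as
 * "fail the connectivity check gracefully". */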
|
|
|
g_dbus_connection_call(dbus_connection,
|
2018-12-01 15:48:19 +01:00
|
|
|
"org.freedesktop.resolve1",
|
|
|
|
|
"/org/freedesktop/resolve1",
|
|
|
|
|
"org.freedesktop.resolve1.Manager",
|
|
|
|
|
"ResolveHostname",
|
|
|
|
|
g_variant_new("(isit)",
|
2018-12-01 18:30:41 +01:00
|
|
|
(gint32) cb_data->concheck.ch_ifindex,
|
|
|
|
|
cb_data->concheck.con_config->host,
|
2018-12-01 15:48:19 +01:00
|
|
|
(gint32) cb_data->addr_family,
|
|
|
|
|
SD_RESOLVED_DNS),
|
|
|
|
|
G_VARIANT_TYPE("(a(iiay)st)"),
|
|
|
|
|
G_DBUS_CALL_FLAGS_NONE,
|
|
|
|
|
-1,
|
|
|
|
|
cb_data->concheck.resolve_cancellable,
|
|
|
|
|
resolve_cb,
|
|
|
|
|
cb_data);
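For reference, the requested reply type "(a(iiay)st)" carries the resolved addresses as (ifindex, family, raw address bytes) tuples, followed by the canonical name and flags. Below is a hedged sketch of unpacking it in a GAsyncReadyCallback; the real resolve_cb() lives elsewhere in this file and may differ.

/* Illustrative sketch only: unpack the ResolveHostname reply. */
static void
resolve_cb_sketch(GObject *source, GAsyncResult *result, gpointer user_data)
{
    gs_unref_variant GVariant *reply     = NULL;
    gs_free_error GError      *error     = NULL;
    gs_free char              *canonical = NULL;
    GVariantIter              *addresses;
    guint64                    flags;
    gint32                     r_ifindex;
    gint32                     r_family;
    GVariant                  *r_addr;

    reply = g_dbus_connection_call_finish(G_DBUS_CONNECTION(source), result, &error);
    if (!reply)
        return; /* failed or cancelled; the check may still let curl resolve */

    g_variant_get(reply, "(a(iiay)st)", &addresses, &canonical, &flags);
    while (g_variant_iter_next(addresses, "(ii@ay)", &r_ifindex, &r_family, &r_addr)) {
        /* one resolved literal address; it could, for instance, be handed
         * to curl via CURLOPT_RESOLVE so the HTTP request does not go
         * through per-device DNS again */
        g_variant_unref(r_addr);
    }
    g_variant_iter_free(addresses);
}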
|
|
|
|
|
_LOG2D("start request to '%s' (try resolving '%s' using systemd-resolved)",
|
2018-12-01 18:30:41 +01:00
|
|
|
cb_data->concheck.con_config->uri,
|
|
|
|
|
cb_data->concheck.con_config->host);
|
2018-12-01 15:48:19 +01:00
|
|
|
} else {
|
|
|
|
|
_LOG2D("start request to '%s' (systemd-resolved not available)",
|
2018-12-01 18:30:41 +01:00
|
|
|
cb_data->concheck.con_config->uri);
|
2018-12-01 15:48:19 +01:00
|
|
|
do_curl_request(cb_data);
|
2018-12-01 14:22:19 +01:00
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2017-06-22 16:46:13 +02:00
|
|
|
return cb_data;
|
2013-07-31 09:14:39 -04:00
|
|
|
}
|
2018-01-05 17:46:49 +01:00
|
|
|
#endif
|
2013-07-31 09:14:39 -04:00
|
|
|
|
2019-06-18 09:30:35 +02:00
|
|
|
if (!cb_data->ifspec) {
|
|
|
|
|
cb_data->completed_state = NM_CONNECTIVITY_ERROR;
|
|
|
|
|
cb_data->completed_reason = "missing interface";
|
|
|
|
|
} else {
|
|
|
|
|
cb_data->completed_state = NM_CONNECTIVITY_FAKE;
|
|
|
|
|
cb_data->completed_reason = "fake result";
|
|
|
|
|
}
|
|
|
|
|
_LOG2D("start fake request (%s)", cb_data->completed_reason);
|
2022-03-17 10:22:55 +01:00
|
|
|
cb_data->timeout_source = nm_g_idle_add_source(_idle_cb, cb_data);
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2018-01-05 17:46:49 +01:00
|
|
|
return cb_data;
|
2011-12-13 17:35:36 -06:00
|
|
|
}
|
2011-10-21 21:21:30 +02:00
|
|
|
|
2018-01-05 17:46:49 +01:00
|
|
|
void
|
|
|
|
|
nm_connectivity_check_cancel(NMConnectivityCheckHandle *cb_data)
|
2013-07-31 09:14:39 -04:00
|
|
|
{
|
2018-01-05 17:46:49 +01:00
|
|
|
g_return_if_fail(cb_data);
|
2018-09-17 11:08:15 +02:00
|
|
|
g_return_if_fail(NM_IS_CONNECTIVITY(cb_data->self));
|
2013-07-31 09:14:39 -04:00
|
|
|
|
2018-09-17 11:08:15 +02:00
|
|
|
nm_assert(
|
|
|
|
|
c_list_contains(&NM_CONNECTIVITY_GET_PRIVATE(cb_data->self)->handles_lst_head,
|
|
|
|
|
&cb_data->handles_lst)
|
|
|
|
|
|| c_list_contains(&NM_CONNECTIVITY_GET_PRIVATE(cb_data->self)->completed_handles_lst_head,
|
|
|
|
|
&cb_data->handles_lst));
|
2018-01-05 17:46:49 +01:00
|
|
|
|
2018-09-17 11:08:15 +02:00
|
|
|
cb_data_complete(cb_data, NM_CONNECTIVITY_CANCELLED, "cancelled");
|
2013-07-31 09:14:39 -04:00
|
|
|
}
|
|
|
|
|
|
2018-01-05 17:46:49 +01:00
|
|
|
/*****************************************************************************/
|
|
|
|
|
|
2017-05-03 18:05:44 +02:00
|
|
|
gboolean
|
|
|
|
|
nm_connectivity_check_enabled(NMConnectivity *self)
|
|
|
|
|
{
|
connectivity: schedule connectivity timers per-device and probe for short outages
It might happen that connectivity is lost only for a moment and
returns soon after. Based on that assumption, when we lose connectivity
we want to have a probe interval where we check for returning
connectivity more frequently.
For that, we handle tracking of the timeouts per-device.
The interval shall start at 1 second and double until
the full interval is reached. Actually, due to the implementation, it's unlikely
that we already perform the second check 1 second later. That is because commonly
the first check returns before the one-second timeout is reached and bumps the
interval to 2 seconds right away.
Also, we go through extra lengths so that manual connectivity checks
delay the periodic checks. By being smarter about that, we can reduce
the number of connectivity checks while still keeping the promise to
check at least within the requested interval.
The complexity of bookkeeping the timeouts is remarkable. But I think
it is worth the effort and we should try hard to
- have a connectivity state as accurate as possible. Clearly,
connectivity checking means that we are probing, so being more intelligent
about timeout and backoff timers can result in a better connectivity
state. The connectivity state is important because we use it for
the default-route penalty and the GUI indicates bad connectivity.
- be intelligent about avoiding redundant connectivity checks. While
we want to check often to get an accurate connectivity state, we
also want to minimize the number of HTTP requests, in case the
connectivity is established and supposedly stable.
Also, perform connectivity checks in every state of the device.
Even if a device is disconnected, it still might have connectivity,
for example if the user externally adds an IP address on an unmanaged
device.
https://bugzilla.gnome.org/show_bug.cgi?id=792240
2018-02-20 21:41:14 +01:00
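A rough sketch of the backoff behaviour described above; next_probe_interval() is a hypothetical helper, not the per-device book-keeping actually done in NMDevice. After losing connectivity, probing starts with a 1-second interval and the interval doubles after every check until the configured full interval is reached again.

/* Returns the next probe interval in seconds, assuming we start at 1 second
 * after a connectivity loss and double until reaching the configured interval. */
static guint
next_probe_interval(guint cur_interval, guint configured_interval)
{
    if (cur_interval == 0)
        return 1; /* first probe right after losing connectivity */
    return MIN(cur_interval * 2, configured_interval); /* double, capped at the full interval */
}

With a configured interval of, say, 300 seconds this yields probes after 1, 2, 4, 8, ... seconds until the regular 300-second cadence is reached again.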
|
|
|
g_return_val_if_fail(NM_IS_CONNECTIVITY(self), FALSE);
|
2017-05-03 18:05:44 +02:00
|
|
|
|
2018-02-20 21:41:14 +01:00
|
|
|
return NM_CONNECTIVITY_GET_PRIVATE(self)->enabled;
|
2017-05-03 18:05:44 +02:00
|
|
|
}
|
|
|
|
|
|
2016-10-02 18:22:50 +02:00
|
|
|
/*****************************************************************************/
|
2013-07-31 09:14:39 -04:00
|
|
|
|
2018-02-20 21:41:14 +01:00
|
|
|
guint
|
|
|
|
|
nm_connectivity_get_interval(NMConnectivity *self)
|
2011-10-21 21:21:30 +02:00
|
|
|
{
|
2018-02-20 21:41:14 +01:00
|
|
|
return nm_connectivity_check_enabled(self) ? NM_CONNECTIVITY_GET_PRIVATE(self)->interval : 0;
|
2011-12-13 17:35:36 -06:00
|
|
|
}
|
2011-10-21 21:21:30 +02:00
|
|
|
|
2017-06-22 16:46:13 +02:00
|
|
|
static gboolean
|
|
|
|
|
host_and_port_from_uri(const char *uri, char **host, char **port)
|
|
|
|
|
{
|
|
|
|
|
const char *p = uri;
|
|
|
|
|
const char *host_begin = NULL;
|
|
|
|
|
size_t host_len = 0;
|
|
|
|
|
const char *port_begin = NULL;
|
|
|
|
|
size_t port_len = 0;
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2017-06-22 16:46:13 +02:00
|
|
|
/* scheme */
|
|
|
|
|
while (*p != ':' && *p != '/') {
|
|
|
|
|
if (!*p++)
|
|
|
|
|
return FALSE;
|
|
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2017-06-22 16:46:13 +02:00
|
|
|
/* :// */
|
|
|
|
|
if (*p++ != ':')
|
|
|
|
|
return FALSE;
|
|
|
|
|
if (*p++ != '/')
|
|
|
|
|
return FALSE;
|
|
|
|
|
if (*p++ != '/')
|
|
|
|
|
return FALSE;
|
|
|
|
|
/* host */
|
|
|
|
|
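/* a leading '[' would start an IPv6 literal, which this simple parser does not handle */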
if (*p == '[')
|
|
|
|
|
return FALSE;
|
|
|
|
|
host_begin = p;
|
|
|
|
|
while (*p && *p != ':' && *p != '/') {
|
|
|
|
|
host_len++;
|
|
|
|
|
p++;
|
|
|
|
|
}
|
|
|
|
|
if (host_len == 0)
|
|
|
|
|
return FALSE;
|
2018-12-01 17:17:28 +01:00
|
|
|
*host = g_strndup(host_begin, host_len);
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2017-06-22 16:46:13 +02:00
|
|
|
/* port */
|
|
|
|
|
if (*p++ == ':') {
|
|
|
|
|
port_begin = p;
|
|
|
|
|
while (*p && *p != '/') {
|
|
|
|
|
port_len++;
|
|
|
|
|
p++;
|
|
|
|
|
}
|
|
|
|
|
if (port_len)
|
2018-12-01 17:17:28 +01:00
|
|
|
*port = g_strndup(port_begin, port_len);
|
2017-06-22 16:46:13 +02:00
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2017-06-22 16:46:13 +02:00
|
|
|
return TRUE;
|
|
|
|
|
}
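A usage sketch for the parser above; example_parse_uri() and the example URI are hypothetical, only the host_and_port_from_uri() call itself comes from this file.

/* Example (hypothetical caller): extract host and port from a connectivity URI. */
static void
example_parse_uri(void)
{
    gs_free char *host = NULL;
    gs_free char *port = NULL;

    if (host_and_port_from_uri("http://example.com:8080/nm-check.txt", &host, &port)) {
        /* host is "example.com", port is "8080"; port stays NULL when the URI has no port */
    }
}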
|
|
|
|
|
|
2011-10-21 21:21:30 +02:00
|
|
|
static void
|
2017-03-20 13:36:00 +00:00
|
|
|
update_config(NMConnectivity *self, NMConfigData *config_data)
|
2011-10-21 21:21:30 +02:00
|
|
|
{
|
|
|
|
|
NMConnectivityPrivate *priv = NM_CONNECTIVITY_GET_PRIVATE(self);
|
2014-07-14 13:40:30 +02:00
|
|
|
guint interval;
|
2017-08-09 15:19:54 +08:00
|
|
|
gboolean enabled;
|
2017-03-20 13:36:00 +00:00
|
|
|
gboolean changed = FALSE;
|
2021-11-09 13:28:54 +01:00
|
|
|
const char *cur_uri = priv->con_config ? priv->con_config->uri : NULL;
|
|
|
|
|
const char *cur_response = priv->con_config ? priv->con_config->response : NULL;
|
|
|
|
|
const char *new_response;
|
|
|
|
|
const char *new_uri;
|
connectivity: ensure uri and response stay alive during connectivity check
The settings of NMConnectivity can change at any time, by reloading the
configuration.
When reloading the configuration, we don't want to interrupt or cancel
the pending requests; they should just complete with the old settings with
which they started. Note that NMDevice is smart enough that, when a
newer request completes earlier, it invalidates all older, still pending
requests.
Anyway, that means we cannot rely on the values to stay alive. Fix that
by adding a new ref-counted struct for these parameters.
Fixes: 2cec94bacce4a09c0e5ffa241b8d50fd4702dddc
2018-12-01 18:30:41 +01:00
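A minimal sketch of the ref-counted snapshot idea: the field names follow the ConConfig struct used later in this function, but the struct and helper definitions here are illustrative (the real ones live elsewhere in the file). Each pending request keeps a reference on the configuration it started with, so a reload cannot free the uri/response strings out from under it.

typedef struct {
    int   ref_count;
    char *uri;
    char *response;
    char *host;
    char *port;
} ConConfig;

static ConConfig *
_con_config_ref(ConConfig *con_config)
{
    if (con_config)
        con_config->ref_count++;
    return con_config;
}

static void
_con_config_unref(ConConfig *con_config)
{
    if (!con_config || --con_config->ref_count > 0)
        return;
    g_free(con_config->uri);
    g_free(con_config->response);
    g_free(con_config->host);
    g_free(con_config->port);
    g_slice_free(ConConfig, con_config);
}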
|
|
|
gboolean new_uri_valid = priv->uri_valid;
|
|
|
|
|
gboolean new_host_port = FALSE;
|
2021-11-09 13:28:54 +01:00
|
|
|
gs_free char *new_host = NULL;
|
|
|
|
|
gs_free char *new_port = NULL;
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2018-12-01 18:30:41 +01:00
|
|
|
new_uri = nm_config_data_get_connectivity_uri(config_data);
|
|
|
|
|
if (!nm_streq0(new_uri, cur_uri)) {
|
|
|
|
|
new_uri_valid = (new_uri && *new_uri);
|
|
|
|
|
if (new_uri_valid) {
|
|
|
|
|
gs_free char *scheme = g_uri_parse_scheme(new_uri);
|
2018-12-02 00:06:54 +01:00
|
|
|
gboolean is_https = FALSE;
|
2020-09-28 16:03:33 +02:00
|
|
|
|
connectivity: improve warning about URI config change
During a config change notification, we determine a "changed" value,
to know whether things significantly changed.
Also, we want to log a warning about invalid configuration,
only when the config actually changed. Previously, when the URI was
invalid, on every reload (SIGHUP) we would log an error message,
even if the configuration did not change. There is no need to
warn multiple times about the same thing.
Keep track of the original URI in priv->uri. Whenever that changed,
we know the user reconfigured something. But also, now the URI might
be set to an invalid value. That means, we need to remember whether
the URI is valid.
Also, log a warning if we fail to parse the host and port. Already
before, such a URI was considered invalid and we would effectively
not do connectivity checking.
2018-12-01 18:01:20 +01:00
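The idea boils down to comparing the newly loaded value with the previously stored one and warning only on an actual change. A rough sketch, where uri_is_valid() stands in for the scheme/host checks performed below:

/* Only complain about an invalid URI when the configured value actually changed,
 * so a plain reload (SIGHUP) does not repeat the same warning. */
if (!nm_streq0(new_uri, priv->uri)) {
    g_free(priv->uri);
    priv->uri = g_strdup(new_uri);
    if (!uri_is_valid(new_uri)) /* uri_is_valid() is a placeholder for the checks below */
        _LOGW("invalid connectivity URI '%s'", new_uri);
}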
|
|
|
if (!scheme) {
|
2018-12-01 18:30:41 +01:00
|
|
|
_LOGE("invalid URI '%s' for connectivity check.", new_uri);
|
|
|
|
|
new_uri_valid = FALSE;
|
2018-12-01 18:01:20 +01:00
|
|
|
} else if (g_ascii_strcasecmp(scheme, "https") == 0) {
|
2018-12-01 18:30:41 +01:00
|
|
|
_LOGW("use of HTTPS for connectivity checking is not reliable and is discouraged "
|
|
|
|
|
"(URI: %s)",
|
|
|
|
|
new_uri);
|
2018-12-02 00:06:54 +01:00
|
|
|
is_https = TRUE;
|
2018-12-01 18:01:20 +01:00
|
|
|
} else if (g_ascii_strcasecmp(scheme, "http") != 0) {
|
2018-12-01 18:30:41 +01:00
|
|
|
_LOGE("scheme of '%s' uri doesn't use a scheme that is allowed for connectivity "
|
|
|
|
|
"check.",
|
|
|
|
|
new_uri);
|
|
|
|
|
new_uri_valid = FALSE;
|
2018-12-01 18:01:20 +01:00
|
|
|
}
|
2018-12-01 18:30:41 +01:00
|
|
|
if (new_uri_valid) {
|
|
|
|
|
new_host_port = TRUE;
|
|
|
|
|
if (!host_and_port_from_uri(new_uri, &new_host, &new_port)) {
|
|
|
|
|
_LOGE("cannot parse host and port from '%s'", new_uri);
|
|
|
|
|
new_uri_valid = FALSE;
|
2018-12-02 00:06:54 +01:00
|
|
|
} else if (!new_port && is_https)
|
|
|
|
|
new_port = g_strdup("443");
|
2018-12-01 18:01:20 +01:00
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
}
|
|
|
|
|
|
2018-12-01 18:30:41 +01:00
|
|
|
if (new_uri_valid || priv->uri_valid != new_uri_valid)
|
2018-12-01 18:01:20 +01:00
|
|
|
changed = TRUE;
|
2018-12-01 18:30:41 +01:00
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2018-12-01 18:30:41 +01:00
|
|
|
new_response = nm_config_data_get_connectivity_response(config_data);
|
|
|
|
|
if (!nm_streq0(new_response, cur_response))
|
|
|
|
|
changed = TRUE;
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2018-12-01 18:30:41 +01:00
|
|
|
if (!priv->con_config || !nm_streq0(new_uri, priv->con_config->uri)
|
|
|
|
|
|| !nm_streq0(new_response, priv->con_config->response)) {
|
|
|
|
|
if (!new_host_port) {
|
|
|
|
|
new_host = priv->con_config ? g_strdup(priv->con_config->host) : NULL;
|
|
|
|
|
new_port = priv->con_config ? g_strdup(priv->con_config->port) : NULL;
|
|
|
|
|
}
|
|
|
|
|
_con_config_unref(priv->con_config);
|
|
|
|
|
priv->con_config = g_slice_new(ConConfig);
|
|
|
|
|
*priv->con_config = (ConConfig){
|
|
|
|
|
.ref_count = 1,
|
|
|
|
|
.uri = g_strdup(new_uri),
|
|
|
|
|
.response = g_strdup(new_response),
|
|
|
|
|
.host = g_steal_pointer(&new_host),
|
|
|
|
|
.port = g_steal_pointer(&new_port),
|
|
|
|
|
};
|
2011-10-21 21:21:30 +02:00
|
|
|
}
|
2018-12-01 18:30:41 +01:00
|
|
|
priv->uri_valid = new_uri_valid;
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2017-03-20 13:36:00 +00:00
|
|
|
interval = nm_config_data_get_connectivity_interval(config_data);
|
2018-02-20 21:41:14 +01:00
|
|
|
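/* clamp the configured interval to at most one week (in seconds) */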
interval = MIN(interval, (7 * 24 * 3600));
|
2017-03-20 13:36:00 +00:00
|
|
|
if (priv->interval != interval) {
|
|
|
|
|
priv->interval = interval;
|
|
|
|
|
changed = TRUE;
|
|
|
|
|
}
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2018-01-05 17:46:49 +01:00
|
|
|
enabled = FALSE;
|
|
|
|
|
#if WITH_CONCHECK
|
2018-12-01 18:01:20 +01:00
|
|
|
if (priv->uri_valid && priv->interval)
|
2018-01-05 17:46:49 +01:00
|
|
|
enabled = nm_config_data_get_connectivity_enabled(config_data);
|
|
|
|
|
#endif
|
|
|
|
|
|
2017-08-09 15:19:54 +08:00
|
|
|
if (priv->enabled != enabled) {
|
|
|
|
|
priv->enabled = enabled;
|
|
|
|
|
changed = TRUE;
|
|
|
|
|
}
|
|
|
|
|
|
2018-02-20 21:41:14 +01:00
|
|
|
if (changed)
|
|
|
|
|
g_signal_emit(self, signals[CONFIG_CHANGED], 0);
|
2017-03-20 13:36:00 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
static void
|
2021-11-09 13:28:54 +01:00
|
|
|
config_changed_cb(NMConfig *config,
|
|
|
|
|
NMConfigData *config_data,
|
2017-03-20 13:36:00 +00:00
|
|
|
NMConfigChangeFlags changes,
|
2021-11-09 13:28:54 +01:00
|
|
|
NMConfigData *old_data,
|
|
|
|
|
NMConnectivity *self)
|
2017-03-20 13:36:00 +00:00
|
|
|
{
|
|
|
|
|
update_config(self, config_data);
|
|
|
|
|
}
|
2016-04-04 18:23:13 +02:00
|
|
|
|
connectivity: rework async connectivity check requests
An asynchronous request should either be cancellable or not keep
the target object alive. Preferably both.
Otherwise, it is impossible to do a controlled shutdown when terminating
NetworkManager. Currently, when NetworkManager is about to terminate,
it just quits the mainloop and essentially leaks everything. That is a
bug. If we ever want to fix that, every asynchronous request must be
cancellable in a controlled way (or it must not prevent objects from
getting disposed, where disposing the object automatically cancels the
callback).
Rework the asynchronous request for connectivity check to
- return a handle that can be used to cancel the operation.
Cancelling is optional. The caller may choose to ignore the handle
because the asynchronous operation does not keep the target object
alive. That means, it is still possible to shutdown, by everybody
giving up their reference to the target object. In which case the
callback will be invoked during dispose() of the target object.
- also, the callback will always be invoked exactly once, and never
synchronously from within the asynchronous start call. But during
cancel(), the callback is invoked synchronously from within cancel().
Note that it's only allowed to cancel an action at most once, and
never after the callback is invoked (also not from within the callback
itself).
- also, NMConnectivity already supports a fake handler, in case
connectivity check is disabled via configuration. Hence, reuse
the same code paths also when compiling without --enable-concheck.
That means, instead of having #if WITH_CONCHECK at various callers,
move them into NMConnectivity. The downside is, that if you build
without concheck, there is a small overhead compared to before. The
upside is, we reuse the same code paths when compiling with or without
concheck.
- also, the patch synchronizes the connecitivty states. For example,
previously `nmcli networking connectivity check` would schedule
requests in parallel, and return the accumulated result of the individual
requests.
However, the global connectivity state of the manager might have have
been the same as the answer to the explicit connecitivity check,
because while the answer for the manual check is waiting for all
pending checks to complete, the global connectivity state could
already change. That is just wrong. There are not multiple global
connectivity states at the same time, there is just one. A manual
connectivity check should have the meaning of ensure that the global
state is up to date, but it still should return the global
connectivity state -- not the answers for several connectivity checks
issued in parallel.
This is related to commit b799de281bc01073c31dd2c86171b29c8132441c
(libnm: update property in the manager after connectivity check),
which tries to address a similar problem client side.
Similarly, each device has a connectivity state. While there might
be several connectivity checks per device pending, whenever a check
completes, it can update the per-device state (and return that device
state as result), but the immediate answer of the individual check
might not matter. This is especially the case, when a later request
returns earlier and obsoletes all earlier requests. In that case,
earlier requests return with the result of the currend devices
connectivity state.
This patch cleans up the internal API and gives a better defined behavior
to the user (thus, a simple API which simplifies implementation for the
caller). However, the implementation of getting this API right and properly
handling cancel and destruction of the target object is more complicated and
complex. But this is not just for the sake of a nicer API. This fixes
actual issues explained above.
Also, get rid of GAsyncResult to track information about the pending request.
Instead, allocate our own handle structure, which ends up being nicer
because it's strongly typed and has exactly the properties that are
useful to track the request. Also, it gets rid of the awkward
_finish() API by passing the relevant arguments to the callback
directly.
2018-01-05 17:46:49 +01:00
|
|
|
/*****************************************************************************/
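/* The "connectivity: rework async connectivity check requests" message above
 * describes the handle-based, cancellable request API.  The following is a
 * minimal caller-side sketch of that contract.  Only NMConnectivity,
 * NMConnectivityCheckHandle, nm_connectivity_check_cancel() and the
 * NM_CONNECTIVITY_CANCELLED/NM_CONNECTIVITY_DISPOSING states appear in this
 * file; the start function name, its simplified argument list and the exact
 * callback signature are assumptions for illustration. */

typedef struct {
    NMConnectivityCheckHandle *concheck_handle; /* NULL while no check is pending */
} ConcheckCaller;

static void
caller_concheck_cb(NMConnectivity            *connectivity,
                   NMConnectivityCheckHandle *handle,
                   NMConnectivityState        state,
                   gpointer                   user_data)
{
    ConcheckCaller *caller = user_data;

    /* Invoked exactly once per handle: on completion, synchronously from
     * within nm_connectivity_check_cancel(), or with
     * NM_CONNECTIVITY_DISPOSING while the NMConnectivity object disposes.
     * It is never invoked synchronously from the start call. */
    caller->concheck_handle = NULL;

    if (state == NM_CONNECTIVITY_CANCELLED || state == NM_CONNECTIVITY_DISPOSING)
        return;

    /* ... fold @state into the caller's (per-device) connectivity state ... */
}

static void
caller_start_check(ConcheckCaller *caller, NMConnectivity *connectivity)
{
    /* hypothetical start call; the real function takes more arguments */
    if (!caller->concheck_handle)
        caller->concheck_handle =
            nm_connectivity_check_start(connectivity, caller_concheck_cb, caller);
}

static void
caller_stop_check(ConcheckCaller *caller)
{
    /* Cancelling is optional and allowed at most once, never after the
     * callback already ran.  The callback above clears the handle, which
     * keeps that invariant. */
    if (caller->concheck_handle)
        nm_connectivity_check_cancel(caller->concheck_handle);
    nm_assert(!caller->concheck_handle);
}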
|
|
|
|
|
|
2011-10-21 21:21:30 +02:00
|
|
|
static void
|
|
|
|
|
nm_connectivity_init(NMConnectivity *self)
|
|
|
|
|
{
|
|
|
|
|
NMConnectivityPrivate *priv = NM_CONNECTIVITY_GET_PRIVATE(self);
|
2018-07-03 14:09:01 +02:00
|
|
|
#if WITH_CONCHECK
|
|
|
|
|
CURLcode ret;
|
|
|
|
|
#endif
|
2016-04-04 18:23:13 +02:00
|
|
|
|
2018-01-05 17:46:49 +01:00
|
|
|
c_list_init(&priv->handles_lst_head);
|
connectivity: fix crash when removing easy-handle from curl callback
libcurl does not allow removing easy-handles from within a curl
callback.
That was already partly avoided for one handle alone. That is, when
a handle completed inside a libcurl callback, it would only invoke the
callback, but not yet delete it. However, that is not enough, because
from within a callback another handle can be cancelled, leading to
the removal of (the other) handle and a crash:
==24572== at 0x40319AB: free (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==24572== by 0x52DDAE5: Curl_close (url.c:392)
==24572== by 0x52EC02C: curl_easy_cleanup (easy.c:825)
==24572== by 0x5FDCD2: cb_data_free (nm-connectivity.c:215)
==24572== by 0x5FF6DE: nm_connectivity_check_cancel (nm-connectivity.c:585)
==24572== by 0x55F7F9: concheck_handle_complete (nm-device.c:2601)
==24572== by 0x574C12: concheck_cb (nm-device.c:2725)
==24572== by 0x5FD887: cb_data_invoke_callback (nm-connectivity.c:167)
==24572== by 0x5FD959: easy_header_cb (nm-connectivity.c:435)
==24572== by 0x52D73CB: chop_write (sendf.c:612)
==24572== by 0x52D73CB: Curl_client_write (sendf.c:668)
==24572== by 0x52D54ED: Curl_http_readwrite_headers (http.c:3904)
==24572== by 0x52E9EA7: readwrite_data (transfer.c:548)
==24572== by 0x52E9EA7: Curl_readwrite (transfer.c:1161)
==24572== by 0x52F4193: multi_runsingle (multi.c:1915)
==24572== by 0x52F5531: multi_socket (multi.c:2607)
==24572== by 0x52F5804: curl_multi_socket_action (multi.c:2771)
Fix that, by never invoking any callbacks when we are inside a libcurl
callback. Instead, the handle is marked for completion and queued. Later,
we complete all queued handles separately.
While at it, drop the @error argument from NMConnectivityCheckCallback.
It was only used to signal cancellation. Let's instead signal that via
status NM_CONNECTIVITY_CANCELLED.
https://bugzilla.gnome.org/show_bug.cgi?id=797136
https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/1792745
https://bugzilla.opensuse.org/show_bug.cgi?id=1107197
https://github.com/NetworkManager/NetworkManager/pull/207
Fixes: d8a31794c8b9db243076ba0c24dfe6e496b78697
2018-09-17 11:08:15 +02:00
|
|
|
c_list_init(&priv->completed_handles_lst_head);
|
2016-04-04 18:23:13 +02:00
|
|
|
|
2017-06-02 19:11:11 +02:00
|
|
|
priv->config = g_object_ref(nm_config_get());
|
|
|
|
|
g_signal_connect(G_OBJECT(priv->config),
|
|
|
|
|
NM_CONFIG_SIGNAL_CONFIG_CHANGED,
|
|
|
|
|
G_CALLBACK(config_changed_cb),
|
|
|
|
|
self);
|
|
|
|
|
|
2018-01-05 17:46:49 +01:00
|
|
|
#if WITH_CONCHECK
|
2018-07-03 14:09:01 +02:00
|
|
|
ret = curl_global_init(CURL_GLOBAL_ALL);
|
2018-09-24 15:52:07 +02:00
|
|
|
if (ret != CURLE_OK) {
|
2019-02-22 16:27:50 +01:00
|
|
|
_LOGE("unable to init cURL, connectivity check will not work: (%d) %s",
|
|
|
|
|
ret,
|
|
|
|
|
curl_easy_strerror(ret));
|
2018-01-05 17:46:49 +01:00
|
|
|
}
|
|
|
|
|
#endif
|
|
|
|
|
|
|
|
|
|
update_config(self, nm_config_get_data(priv->config));
|
2016-09-29 13:49:01 +02:00
|
|
|
}
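/* The "connectivity: fix crash when removing easy-handle from curl callback"
 * message above explains why a handle that finishes while libcurl is still on
 * the stack is only moved to priv->completed_handles_lst_head and completed
 * later.  A sketch of that defer-and-drain pattern follows; the helper names
 * and the completed_state/completed_log_message fields stashed on the handle
 * are assumptions, while the two list heads and cb_data_complete() are the
 * ones used in dispose() below. */

static void
cb_data_queue_completed_sketch(NMConnectivity            *self,
                               NMConnectivityCheckHandle *cb_data,
                               NMConnectivityState        state,
                               const char                *log_message)
{
    NMConnectivityPrivate *priv = NM_CONNECTIVITY_GET_PRIVATE(self);

    /* Called while inside a libcurl callback: do not touch the easy-handle
     * and do not invoke the caller's callback.  Only stash the result and
     * move the handle from the pending list to the completed list. */
    cb_data->completed_state       = state;       /* assumed field */
    cb_data->completed_log_message = log_message; /* assumed field */
    c_list_unlink(&cb_data->handles_lst);
    c_list_link_tail(&priv->completed_handles_lst_head, &cb_data->handles_lst);
}

static void
complete_queued_sketch(NMConnectivity *self)
{
    NMConnectivityPrivate     *priv = NM_CONNECTIVITY_GET_PRIVATE(self);
    NMConnectivityCheckHandle *cb_data;

    /* Called after curl_multi_socket_action() has returned, i.e. once libcurl
     * is no longer on the stack: now it is safe to invoke callbacks and to
     * clean up the easy-handles. */
    while ((cb_data = c_list_first_entry(&priv->completed_handles_lst_head,
                                         NMConnectivityCheckHandle,
                                         handles_lst)))
        cb_data_complete(cb_data, cb_data->completed_state, cb_data->completed_log_message);
}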
|
2011-10-21 21:21:30 +02:00
|
|
|
|
|
|
|
|
static void
|
2011-12-05 14:48:47 -06:00
|
|
|
dispose(GObject *object)
|
2011-10-21 21:21:30 +02:00
|
|
|
{
|
2021-11-09 13:28:54 +01:00
|
|
|
NMConnectivity *self = NM_CONNECTIVITY(object);
|
|
|
|
|
NMConnectivityPrivate *priv = NM_CONNECTIVITY_GET_PRIVATE(self);
|
2018-01-05 17:46:49 +01:00
|
|
|
NMConnectivityCheckHandle *cb_data;
|
2018-09-17 11:08:15 +02:00
|
|
|
|
|
|
|
|
nm_assert(c_list_is_empty(&priv->completed_handles_lst_head));
|
|
|
|
|
|
|
|
|
|
while (
|
|
|
|
|
(cb_data =
|
|
|
|
|
c_list_first_entry(&priv->handles_lst_head, NMConnectivityCheckHandle, handles_lst)))
|
|
|
|
|
cb_data_complete(cb_data, NM_CONNECTIVITY_DISPOSING, "shutting down");
|
2011-10-21 21:21:30 +02:00
|
|
|
|
connectivity: ensure uri and response stays alive during connectivity check
The settings of NMConnectivity can change any time, by reloading the
configuration.
When reloading the configuration, we don't want to interrupt or cancel
the pending requests; they should just complete with the old settings with
which they started. Note that NMDevice is smart enough that, when a
newer request completes earlier, it invalidates all older, still pending
requests.
Anyway, that means we cannot rely on the values to stay alive. Fix that
by adding a new ref-counted struct for these parameters.
Fixes: 2cec94bacce4a09c0e5ffa241b8d50fd4702dddc
2018-12-01 18:30:41 +01:00
|
|
|
nm_clear_pointer(&priv->con_config, _con_config_unref);
|
2013-07-29 16:36:00 -04:00
|
|
|
|
2018-01-05 17:46:49 +01:00
|
|
|
#if WITH_CONCHECK
|
|
|
|
|
curl_global_cleanup();
|
|
|
|
|
#endif
|
|
|
|
|
|
2017-03-20 13:36:00 +00:00
|
|
|
if (priv->config) {
|
|
|
|
|
g_signal_handlers_disconnect_by_func(priv->config, config_changed_cb, self);
|
|
|
|
|
g_clear_object(&priv->config);
|
|
|
|
|
}
|
|
|
|
|
|
2016-02-01 13:15:40 +01:00
|
|
|
G_OBJECT_CLASS(nm_connectivity_parent_class)->dispose(object);
|
2011-10-21 21:21:30 +02:00
|
|
|
}
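/* The "connectivity: ensure uri and response stays alive during connectivity
 * check" message above is the reason dispose() drops priv->con_config through
 * _con_config_unref(): every in-flight check holds its own reference to an
 * immutable snapshot of the settings, so a configuration reload (or this
 * dispose) cannot free the uri/response strings under a pending request.
 * A sketch of such a ref-counted snapshot follows; the struct name and the
 * field list are assumptions, only _con_config_unref() itself is referenced
 * by this file.  A reload merely swaps priv->con_config to a new snapshot;
 * requests started earlier keep (and eventually unref) the old one. */

typedef struct {
    int   ref_count;
    char *uri;
    char *response;
} ConConfigSketch;

static ConConfigSketch *
_con_config_sketch_ref(ConConfigSketch *con_config)
{
    nm_assert(con_config && con_config->ref_count > 0);
    con_config->ref_count++;
    return con_config;
}

static void
_con_config_sketch_unref(ConConfigSketch *con_config)
{
    nm_assert(con_config && con_config->ref_count > 0);
    if (--con_config->ref_count > 0)
        return;
    g_free(con_config->uri);
    g_free(con_config->response);
    g_free(con_config);
}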
|
|
|
|
|
|
|
|
|
|
static void
|
|
|
|
|
nm_connectivity_class_init(NMConnectivityClass *klass)
|
|
|
|
|
{
|
|
|
|
|
GObjectClass *object_class = G_OBJECT_CLASS(klass);
|
2020-09-28 16:03:33 +02:00
|
|
|
|
connectivity: schedule connectivity timers per-device and probe for short outages
It might happen that connectivity is lost only for a moment and
returns soon after. Based on that assumption, when we lose connectivity
we want to have a probe interval where we check for returning
connectivity more frequently.
For that, we handle tracking of the timeouts per-device.
The interval shall start at 1 second and double until
the full interval is reached. Actually, due to the implementation, it's unlikely
that we already perform the second check 1 second later. That is because commonly
the first check returns before the one-second timeout is reached and bumps the
interval to 2 seconds right away.
Also, we go to extra lengths so that manual connectivity checks
delay the periodic checks. By being smarter about that, we can reduce
the number of connectivity checks while still keeping the promise to
check at least within the requested interval.
The complexity of bookkeeping the timeouts is remarkable. But I think
it is worth the effort and we should try hard to
- have a connectivity state that is as accurate as possible. Clearly,
connectivity checking means that we are probing, so being more intelligent
about timeout and backoff timers can result in a better connectivity
state. The connectivity state is important because we use it for
the default-route penalty, and the GUI indicates bad connectivity.
- be intelligent about avoiding redundant connectivity checks. While
we want to check often to get an accurate connectivity state, we
also want to minimize the number of HTTP requests, in case the
connectivity is established and supposedly stable.
Also, perform connectivity checks in every state of the device.
Even if a device is disconnected, it still might have connectivity,
for example if the user externally adds an IP address on an unmanaged
device.
https://bugzilla.gnome.org/show_bug.cgi?id=792240
2018-02-20 21:41:14 +01:00
|
|
|
signals[CONFIG_CHANGED] = g_signal_new(NM_CONNECTIVITY_CONFIG_CHANGED,
|
2017-03-20 13:36:00 +00:00
|
|
|
G_OBJECT_CLASS_TYPE(object_class),
|
|
|
|
|
G_SIGNAL_RUN_FIRST,
|
|
|
|
|
0,
|
|
|
|
|
NULL,
|
|
|
|
|
NULL,
|
|
|
|
|
NULL,
|
|
|
|
|
G_TYPE_NONE,
|
|
|
|
|
0);
|
2020-09-28 16:03:33 +02:00
|
|
|
|
2017-03-20 13:36:00 +00:00
|
|
|
object_class->dispose = dispose;
|
2011-10-21 21:21:30 +02:00
|
|
|
}
|
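/* The "connectivity: schedule connectivity timers per-device and probe for
 * short outages" message further up describes the per-device probe backoff:
 * after losing connectivity, re-check after 1 second and double the delay on
 * every expiry until the configured full interval is reached.  A standalone
 * sketch of that computation (the function name and the use of plain guint
 * seconds are illustrative, not taken from this file): */
static guint
probe_next_interval_sec_sketch(guint cur_interval_sec, guint full_interval_sec)
{
    if (full_interval_sec == 0)
        return 0; /* periodic checking disabled */
    if (cur_interval_sec == 0)
        return 1; /* first probe after one second */
    if (cur_interval_sec >= full_interval_sec / 2)
        return full_interval_sec; /* clamp to the configured interval */
    return cur_interval_sec * 2; /* otherwise keep doubling: 1, 2, 4, 8, ... */
}
/* For example, with a configured interval of 300 seconds the probe delays are
 * 1, 2, 4, ..., 128, 256 and then 300 seconds. */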