mirror of
https://gitlab.freedesktop.org/NetworkManager/NetworkManager.git
synced 2026-02-11 16:30:39 +01:00
Previously, the test would kick off 15 processes in parallel, but the first job in the queue would block more processes from being started. That is, async_start() would only start 15 processes, and since none of them were reaped before async_wait() was called, no more than 15 jobs were running during the start phase. That is not a real issue, because the start phase is non-blocking and queues all the jobs quickly; it's not really expected that many processes already complete during that time. Still, this was a bit ugly.

The bigger problem is that async_wait() would always block on the first job in the queue before starting more processes. That means that if the first job takes unusually long, it blocks other processes from getting reaped and new processes from being started.

Instead, don't block on a single job, but poll the jobs in turn for a short amount of time. Whichever process exits first gets completed, and more jobs get started.

In fact, in the current setup it's hard to notice any difference, because all nmcli invocations take about the same time and are relatively fast. That this approach parallelizes better becomes visible when the runtime of the jobs varies more strongly (and some invocations take notably longer). As we later want to run nmcli under valgrind, this will probably make a difference.

An alternative would be not to poll()/wait() for child processes, but to somehow get notified. For example, we could use a GMainContext and watch the child processes. But that is probably more complicated to do, so let's keep the naive approach with polling.
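The start/wait scheme described above can be sketched in Python. This is a hypothetical standalone helper (run_jobs is not the name used in the test; async_start()/async_wait() are the functions the commit actually touches): it tops up to the parallelism limit without blocking, then polls every running child in turn and reaps whichever exits first, instead of blocking on the queue head.

```python
import collections
import subprocess
import sys
import time

def run_jobs(commands, max_parallel=15, poll_interval=0.02):
    # Commands not yet started, and (cmd, Popen) pairs in flight.
    pending = collections.deque(commands)
    running = []
    results = {}
    while pending or running:
        # Start phase: fill up to max_parallel slots without blocking.
        while pending and len(running) < max_parallel:
            cmd = pending.popleft()
            running.append((cmd, subprocess.Popen(cmd)))
        # Wait phase: poll each child in turn; whichever exited is
        # reaped immediately, freeing a slot for the next job.
        reaped = False
        still_running = []
        for cmd, proc in running:
            rc = proc.poll()  # non-blocking exit-status check
            if rc is None:
                still_running.append((cmd, proc))
            else:
                results[tuple(cmd)] = rc
                reaped = True
        running = still_running
        # Only nap when nothing finished; never block on one child.
        if running and not reaped:
            time.sleep(poll_interval)
    return results

if __name__ == "__main__":
    # A slow first job no longer delays reaping the fast ones.
    cmds = [[sys.executable, "-c", "import time; time.sleep(%g)" % d]
            for d in (0.2, 0.01, 0.05)]
    res = run_jobs(cmds, max_parallel=2)
    print(sorted(res.values()))  # exit codes of all three children
```

The key difference from a blocking os.waitpid() on the queue head is the non-blocking proc.poll(): a long-running first job cannot stall completion of jobs queued behind it.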