mirror of
https://gitlab.freedesktop.org/mesa/mesa.git
synced 2025-12-25 06:30:10 +01:00
The previous algorithm had a race because of the way we were using `__sync_fetch_and_add` for everything. In particular, the concept of "returning" over-allocated states in the `next > end` case was completely bogus. If too many threads were hitting the state pool at the same time, it was possible to have the following sequence:

1. A: Get an offset (`next == end`)
2. B: Get an offset (`next > end`)
3. A: Resize the pool (now `next < end` by a lot)
4. C: Get an offset (`next < end`)
5. B: Return the over-allocated offset
6. D: Get an offset

in which case D will get the same offset as C.

The solution to this race is to get rid of the concept of "returning" over-allocated states. Instead, the thread that gets a new block simply sets the next and end offsets directly, and threads that over-allocate don't return anything and just futex-wait. Since you can only ever hit the over-allocate case if someone else hit the `next == end` case and hasn't resized yet, you're guaranteed that the end value will get updated and the futex won't block forever.
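The scheme described above can be sketched in C11 atomics. This is a hypothetical, simplified model (not the actual anv_allocator.c code): `next` and `end` are packed into one 64-bit word so a single `atomic_fetch_add` bumps `next` while snapshotting `end`, the thread that observes `next == end` grows the pool and publishes fresh `next`/`end` directly, and over-allocating threads simply retry (the real implementation futex-waits instead of spinning, and maps real GPU memory instead of the fixed 4096-byte growth used here).

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical simplified state pool. Low 32 bits of u64 hold "next",
 * high 32 bits hold "end", so one fetch_add advances next atomically
 * while also reading end from the same snapshot. */
struct state_pool {
    _Atomic uint64_t u64;
};

#define STATE(next, end) (((uint64_t)(end) << 32) | (uint32_t)(next))

static uint32_t
pool_grow(struct state_pool *pool, uint32_t old_next, uint32_t size)
{
    /* Winner of the next == end race: grab a new block and publish new
     * next/end values directly.  Nothing is ever "returned".  (Toy
     * growth policy: extend by a fixed 4096 bytes.) */
    uint32_t block = old_next;
    atomic_store(&pool->u64, STATE(block + size, block + 4096));
    /* Real code would futex-wake the waiters here. */
    return block;
}

static uint32_t
pool_alloc(struct state_pool *pool, uint32_t size)
{
    for (;;) {
        uint64_t v = atomic_fetch_add(&pool->u64, size);
        uint32_t next = (uint32_t)v;
        uint32_t end  = (uint32_t)(v >> 32);

        if (next + size <= end)
            return next;                         /* fast path: it fit */
        else if (next == end)
            return pool_grow(pool, next, size);  /* we won the grow race */
        /* Over-allocated (next > end): in the real allocator this thread
         * futex-waits until the growing thread publishes new next/end,
         * then retries the fetch_add.  Here we just retry. */
    }
}
```

The key difference from the buggy version is that the over-allocating path has no side effect to undo: the grower's single atomic store is the only thing that repairs the pool state, so no stale "returned" offset can ever be handed out twice.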
| File |
|---|
| .gitignore |
| anv_allocator.c |
| anv_aub.c |
| anv_aub.h |
| anv_batch_chain.c |
| anv_cmd_buffer.c |
| anv_compiler.cpp |
| anv_device.c |
| anv_entrypoints_gen.py |
| anv_formats.c |
| anv_gem.c |
| anv_image.c |
| anv_intel.c |
| anv_meta.c |
| anv_pipeline.c |
| anv_private.h |
| anv_query.c |
| anv_util.c |
| anv_x11.c |
| gen7_pack.h |
| gen8_pack.h |
| gen75_pack.h |
| glsl_scraper.py |
| Makefile.am |