Commit graph

22 commits

Jordan Justen
de65d4dcaf anv: Fix build without VALGRIND
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
2016-01-06 15:54:51 -08:00
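A sketch of the usual shape of such a build fix, assuming Mesa's HAVE_VALGRIND convention (the VG wrapper and example function are illustrative): valgrind client requests compile away entirely when the headers are absent.

    #include <stddef.h>

    #ifdef HAVE_VALGRIND
    #include <valgrind.h>
    #include <memcheck.h>
    #define VG(x) x
    #else
    #define VG(x)   /* no valgrind headers: annotations expand to nothing */
    #endif

    static void example_annotate(void *map, size_t size)
    {
       /* Builds the same whether or not valgrind is available. */
       VG(VALGRIND_MAKE_MEM_UNDEFINED(map, size));
    }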
Jason Ekstrand
e6fc170afb anv/allocator: Rework state streams again
If we're going to have valgrind verify state streams then we need to ensure
that once we choose a pointer into a block we always use that pointer until
the block is freed.  I was trying to do this with the "current_map" thing.
However, that breaks down because you have to use the map from the block
pool to get to the stream_block to get at current_map.  Instead, this
commit changes things to track the stream_block by pointer instead of by
offset into the block pool.
2015-12-30 11:40:38 -08:00
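A hedged sketch of the data-structure change described above (names are illustrative, not the actual anv layout): keep the block pointer itself rather than an offset that has to be re-resolved through the pool's map.

    #include <stdint.h>

    struct stream_block {
       uint32_t offset;             /* this block's offset in the pool */
       struct stream_block *next;   /* list of blocks owned by stream  */
    };

    struct state_stream {
       struct block_pool *pool;
       /* Previously: a block offset, re-resolved through pool->map on
        * each use.  After the pool grows, that can yield a different
        * pointer for the same block, which breaks valgrind's
        * assumption that an allocation keeps one address.  Now:
        * remember the pointer we first chose and use it until the
        * block is freed. */
       struct stream_block *block;
       uint32_t next;               /* next free byte within block     */
       uint32_t end;
    };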
Jason Ekstrand
a0b2829f20 anv/stream_alloc: Properly manage valgrind NOACCESS and UNDEFINED status
When I first did the valgrindifying for stream allocators, I misunderstood
some things about valgrind's expectations for NOACCESS and UNDEFINED.
First off, valgrind expects things to be marked NOACCESS before you
allocate out of them.  Since our blocks came from a pool backed by a
mmapped memfd, they came in as UNDEFINED; we needed to mark them as
NOACCESS.  Also, I didn't realize that VALGRIND_MEMPOOL_CHANGE only updated
the mempool allocation state and didn't actually change definedness; we had
to add a VALGRIND_MAKE_MEM_UNDEFINED to get rid of the NOACCESS on the
newly allocated portion.
2015-12-30 10:36:19 -08:00
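The client requests involved, in the order the message describes them (the macros are real memcheck requests; the stream, block, and sizes are illustrative):

    #include <stddef.h>
    #include <valgrind.h>
    #include <memcheck.h>

    static void stream_alloc_annotations(void *stream, void *block,
                                         size_t block_size,
                                         void *state, size_t state_size)
    {
       /* A fresh block from the mmapped memfd arrives UNDEFINED, but
        * valgrind expects mempool backing memory to begin NOACCESS. */
       VALGRIND_MAKE_MEM_NOACCESS(block, block_size);

       /* Growing the stream's current allocation to cover a new state
        * only updates valgrind's mempool bookkeeping... */
       VALGRIND_MEMPOOL_CHANGE(stream, block, block,
                               (char *)state + state_size - (char *)block);

       /* ...definedness is untouched, so the newly exposed bytes must
        * be flipped from NOACCESS to UNDEFINED by hand. */
       VALGRIND_MAKE_MEM_UNDEFINED(state, state_size);
    }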
Kristian Høgsberg Kristensen
f1f78a371e vk: gem handles are uint32_t
No functional difference, but let's be consistent with the kernel API.
2015-12-04 12:53:27 -08:00
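For reference, GEM handles are __u32 throughout the kernel UAPI, e.g. struct drm_gem_close in drm.h, so the driver-side type matches that width (fields illustrative):

    #include <stdint.h>

    /* Kernel UAPI:  struct drm_gem_close { __u32 handle; __u32 pad; }; */
    struct anv_bo {
       uint32_t gem_handle;   /* same width as the kernel's __u32 */
       uint64_t size;         /* other fields elided              */
    };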
Kristian Høgsberg Kristensen
bbb6875f35 vk: Map uncached, coherent memory as write-combine
This gives us the required characteristics for the memory type.
2015-12-04 09:51:47 -08:00
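A sketch of how a write-combine mapping is requested through the i915 mmap ioctl (struct and flag names follow the kernel UAPI; the helper is illustrative and error handling is elided):

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <drm/i915_drm.h>

    /* Ask i915 for a write-combine CPU mapping -- the characteristics
     * wanted for an uncached, coherent memory type. */
    static void *map_bo_wc(int fd, uint32_t gem_handle, uint64_t size)
    {
       struct drm_i915_gem_mmap arg = {
          .handle = gem_handle,
          .size = size,
          .flags = I915_MMAP_WC,   /* request write-combine */
       };
       if (ioctl(fd, DRM_IOCTL_I915_GEM_MMAP, &arg) != 0)
          return NULL;
       return (void *)(uintptr_t)arg.addr_ptr;
    }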
Kristian Høgsberg
b431cf59a3 vk: Set I915_CACHING_NONE for userptr BOs when !llc
Regular objects are created I915_CACHING_CACHED on LLC platforms and
I915_CACHING_NONE on non-LLC platforms. However, userptr objects are
always created as I915_CACHING_CACHED, which on non-LLC means
snooped. That can be useful but comes with a bit of overhead.  Since
we're explicitly clflushing and don't want the overhead, we need to turn
it off.
2015-12-04 09:51:47 -08:00
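The ioctl in question, sketched (the kernel interface is real; the helper name is made up):

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <drm/i915_drm.h>

    /* Userptr BOs come back I915_CACHING_CACHED, which means snooped
     * on non-LLC parts.  Since the driver clflushes explicitly, the
     * snooping overhead buys nothing; turn it off. */
    static int bo_set_uncached(int fd, uint32_t gem_handle)
    {
       struct drm_i915_gem_caching arg = {
          .handle = gem_handle,
          .caching = I915_CACHING_NONE,
       };
       return ioctl(fd, DRM_IOCTL_I915_GEM_SET_CACHING, &arg);
    }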
Jason Ekstrand
10f97718c3 anv/allocator: Add a sanity assertion in state stream finish.
We assert that the block offset we got while walking the list of blocks is
actually a multiple of the block size.  If something goes wrong and the GPU
decides to stomp on the surface state buffer we can end up getting
corruptions in our list of blocks.  This assertion turns such corruptions
into a crash with a meaningful message rather than an infinite loop.
2015-10-02 16:24:42 -07:00
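The check reduces to one invariant per block while walking the stream's list at finish time; a sketch with hypothetical types, sentinel, and helpers:

    #include <assert.h>
    #include <stdint.h>

    #define END_OF_LIST UINT32_MAX                     /* hypothetical sentinel */

    struct block_pool   { uint32_t block_size; };      /* fields illustrative   */
    struct state_stream { struct block_pool *pool; uint32_t current_block; };

    uint32_t next_block_offset(struct state_stream *s, uint32_t off); /* hyp. */
    void     block_pool_free(struct block_pool *p, uint32_t off);     /* hyp. */

    static void state_stream_finish(struct state_stream *stream)
    {
       uint32_t offset = stream->current_block;
       while (offset != END_OF_LIST) {
          /* If the GPU stomps the surface-state buffer, this list is
           * garbage; the assert turns a would-be infinite loop into a
           * crash with a meaningful message. */
          assert(offset % stream->pool->block_size == 0);
          uint32_t next = next_block_offset(stream, offset);
          block_pool_free(stream->pool, offset);
          offset = next;
       }
    }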
Jason Ekstrand
e1a7c721d3 anv/allocator: Don't ever call mremap
This has always been a bit sketchy and neither Kristian nor I have ever
really liked it.
2015-09-24 08:42:14 -07:00
Jason Ekstrand
429665823d anv/allocator: Do a better job of centering bi-directional block pools
2015-09-24 08:41:47 -07:00
Jason Ekstrand
5f57ff7e18 anv/allocator: Make the block pool double-ended
This allows us to allocate from either side of the block pool in a
consistent way.  If you use the previous block_pool_alloc function, you
will get offsets from the start of the pool as normal.  If you use the new
block_pool_alloc_back function, you will get a negative index that
corresponds to something in the "back" of the pool.
2015-09-17 17:44:20 -07:00
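A sketch of the resulting interface (signatures and fields are illustrative): offsets are relative to the pool's center, front allocations stay non-negative, back allocations are negative, and both resolve against the same map.

    #include <stdint.h>

    struct block_pool {
       char    *map;            /* mapping of the whole backing file */
       uint32_t center_offset;  /* where offset 0 lives in the map   */
    };

    int32_t block_pool_alloc(struct block_pool *pool);       /* returns >= 0 */
    int32_t block_pool_alloc_back(struct block_pool *pool);  /* returns <  0 */

    /* Either kind of offset resolves through the pool's center: */
    static void *block_pool_map(struct block_pool *pool, int32_t offset)
    {
       return pool->map + pool->center_offset + offset;
    }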
Jason Ekstrand
55daed947d vk/allocator: Split block_pool_alloc into two functions
2015-09-17 17:44:20 -07:00
Jason Ekstrand
c55fa89251 anv/allocator: Use a signed 32-bit offset for the free list
This has the unfortunate side-effect of making it so that we can't have a
block pool bigger than 1GB.  However, that's unlikely to happen and, for
the sake of bi-directional block pools, we need negative offsets.
2015-09-17 17:44:20 -07:00
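A sketch of the shape such an entry takes (layout illustrative): the signed offset and a transaction counter share one 64-bit word so the list head can be swapped with a single atomic compare-and-swap.

    #include <stdint.h>

    /* A signed offset admits entries behind the pool's center (needed
     * once the pool grows in both directions), at the cost of half the
     * range an unsigned offset would cover. */
    union free_list_entry {
       struct {
          int32_t  offset;   /* may be negative: "back" of the pool */
          uint32_t count;    /* counter bumped on every pop (anti-ABA) */
       };
       uint64_t u64;         /* what compare-and-swap operates on   */
    };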
Jason Ekstrand
8c6bc1e85d anv/allocator: Create 2GB memfd up-front for the block pool
2015-09-17 17:44:20 -07:00
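Reserving the maximum size up front pairs with the previous commit: growing the pool then only ever maps more of a file that already exists, so nothing has to move. A sketch (memfd_create and ftruncate are real; the constant and helper name are illustrative):

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <unistd.h>

    #define BLOCK_POOL_MEMFD_SIZE (1ull << 31)   /* 2 GB reserved up front */

    static int block_pool_create_fd(void)
    {
       int fd = memfd_create("block pool", MFD_CLOEXEC);
       if (fd == -1)
          return -1;
       /* Size the fd to the largest the pool can ever become; growth
        * later is just mmapping more of this same file. */
       if (ftruncate(fd, BLOCK_POOL_MEMFD_SIZE) == -1)
          return -1;
       return fd;
    }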
Jason Ekstrand
74bf7aa07c anv/allocator: Take the device mutex when growing a block pool
We don't have any locking issues yet because we use the pool size itself as
a mutex in block_pool_alloc to guarantee that only one thread is resizing
at a time.  However, we are about to add support for growing the block pool
at both ends.  This introduces two potential races:

 1) You could have two block_pool_alloc() calls that both try to grow the
    block pool, one from each end.

 2) The relocation handling code will now have to think about not only the
    bo that we use for the block pool but also the offset from the start of
    that bo to the center of the block pool.  It's possible that the block
    pool growing code could race with the relocation handling code and get
    a bo and offset out of sync.

Grabbing the device mutex solves both of these problems.  Thanks to (2), we
can't really do anything more granular.
2015-09-17 17:44:20 -07:00
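In code terms the fix is small (a sketch with hypothetical types and helpers): every grow, from either end, funnels through the one device-level lock.

    #include <pthread.h>
    #include <stdint.h>

    struct anv_device { pthread_mutex_t mutex; };           /* illustrative */
    struct block_pool { struct anv_device *device; };       /* illustrative */

    uint32_t do_grow_locked(struct block_pool *p, uint32_t old); /* hypothetical */

    static uint32_t block_pool_grow(struct block_pool *pool, uint32_t old_size)
    {
       pthread_mutex_lock(&pool->device->mutex);
       /* (1) front- and back-growers can no longer race each other;
        * (2) relocation handling, which also holds this mutex, never
        *     sees the pool's bo paired with a stale center offset. */
       uint32_t new_size = do_grow_locked(pool, old_size);
       pthread_mutex_unlock(&pool->device->mutex);
       return new_size;
    }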
Jason Ekstrand
6a7ca4ef2c Merge remote-tracking branch 'mesa-public/master' into vulkan
2015-08-17 11:25:03 -07:00
Jason Ekstrand
facf587dea vk/allocator: Solve a data race in anv_block_pool
The anv_block_pool data structure suffered from the exact same race as the
state pool.  Namely, that the uniqueness of the blocks handed out depends
on the next_block value increasing monotonically.  However, this invariant
did not hold thanks to our block "return" concept.
2015-08-03 01:19:34 -07:00
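The invariant being protected, sketched (pool fields illustrative): uniqueness of handed-out blocks rests entirely on next_block increasing monotonically.

    #include <stdint.h>

    struct block_pool { uint32_t next_block, block_size; };  /* illustrative */

    static uint32_t block_pool_take(struct block_pool *pool)
    {
       /* Each caller gets a distinct offset only because next_block
        * never moves backwards.  A "return a block" path that rewinds
        * it reintroduces the race: two threads can then be handed the
        * same offset. */
       return __sync_fetch_and_add(&pool->next_block, pool->block_size);
    }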
Jason Ekstrand
56ce219493 vk/allocator: Make block_pool_grow take and return a size
It takes the old size as an argument and returns the new size as the return
value.  On error, it returns a size of 0.
2015-08-03 01:06:45 -07:00
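i.e. roughly this shape (a sketch matching the description, not the exact code):

    #include <stdint.h>

    struct block_pool;

    /* Takes the old size as an argument and returns the new size; a
     * return of 0 means the grow failed. */
    uint32_t block_pool_grow(struct block_pool *pool, uint32_t old_size);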
Jason Ekstrand
fd64598462 vk/allocator: Fix a data race in the state pool
The previous algorithm had a race because of the way we were using
__sync_fetch_and_add for everything.  In particular, the concept of
"returning" over-allocated states in the "next > end" case was completely
bogus.  If too many threads were hitting the state pool at the same time,
it was possible to have the following sequence:

A: Get an offset (next == end)
B: Get an offset (next > end)
A: Resize the pool (now next < end by a lot)
C: Get an offset (next < end)
B: Return the over-allocated offset
D: Get an offset

in which case D will get the same offset as C.  The solution to this race
is to get rid of the concept of "returning" over-allocated states.
Instead, the thread that gets a new block simply sets the next and end
offsets directly and threads that over-allocate don't return anything and
just futex-wait.  Since you can only ever hit the over-allocate case if
someone else hit the "next == end" case and hasn't resized yet, you're
guaranteed that the end value will get updated and the futex won't block
forever.
2015-08-03 00:38:48 -07:00
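A sketch of the fixed algorithm (the futex plumbing is the real Linux interface; the pool layout and grow helper are illustrative). The key trick is packing next and end into one 64-bit word, so the fetch-and-add that bumps next also snapshots end atomically:

    #include <limits.h>
    #include <linux/futex.h>
    #include <stdint.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    union pool_state {
       struct {
          uint32_t next;   /* low word on little-endian, so the      */
          uint32_t end;    /* fetch-and-add below bumps only 'next'  */
       };
       uint64_t u64;
    };

    struct state_pool {
       union pool_state state;
       uint32_t state_size;
    };

    void grow_and_publish(struct state_pool *p, union pool_state st); /* hyp. */

    static long sys_futex(void *addr, int op, uint32_t val)
    {
       return syscall(SYS_futex, addr, op, val, NULL, NULL, 0);
    }

    static uint32_t state_pool_alloc(struct state_pool *pool)
    {
       while (1) {
          union pool_state st;
          st.u64 = __sync_fetch_and_add(&pool->state.u64, pool->state_size);
          if (st.next < st.end) {
             return st.next;              /* fits in the current block */
          } else if (st.next == st.end) {
             /* This thread won the boundary: grow the pool, publish
              * fresh next/end directly (nothing is ever "returned"),
              * and wake everyone parked on 'end'. */
             grow_and_publish(pool, st);
             sys_futex(&pool->state.end, FUTEX_WAKE, INT_MAX);
             return st.next;
          } else {
             /* Over-allocated: a grower exists but hasn't published
              * yet, so 'end' is guaranteed to change; sleep on it,
              * then retry from scratch. */
             sys_futex(&pool->state.end, FUTEX_WAIT, st.end);
          }
       }
    }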
Jason Ekstrand
481122f4ac vk/allocator: Make a few things more consistent
2015-08-03 00:35:19 -07:00
Jason Ekstrand
e65953146c vk/allocator: Use memory pools rather than (MALLOC|FREE)LIKE
We have pools, so we should be using them.  Also, I think this will help
keep valgrind from getting confused when we end up fighting with system
allocations such as those from malloc/free and mmap/munmap.
2015-07-31 10:38:28 -07:00
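The two annotation styles side by side (both sets of client requests are real memcheck macros; the stream pointer and lifecycle are illustrative):

    #include <stddef.h>
    #include <valgrind.h>
    #include <memcheck.h>

    /* The stream itself serves as the pool handle. */
    static void pool_style_annotations(void *stream, void *state, size_t size)
    {
       /* Old style treated each state as a lone heap block:
        *   VALGRIND_MALLOCLIKE_BLOCK(state, size, 0, 0);
        *   VALGRIND_FREELIKE_BLOCK(state, 0);
        * Pool style keeps these allocations grouped and distinct from
        * the real malloc/free and mmap/munmap traffic the driver also
        * generates, so valgrind has less to get confused about. */
       VALGRIND_CREATE_MEMPOOL(stream, 0, 0);
       VALGRIND_MEMPOOL_ALLOC(stream, state, size);
       VALGRIND_MEMPOOL_FREE(stream, state);
       VALGRIND_DESTROY_MEMPOOL(stream);
    }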
Jason Ekstrand
1920ef9675 vk/allocator: Add an anv_state_pool_finish function
Currently this is a no-op but it gives us a place to put finalization
things in the future.
2015-07-31 10:38:28 -07:00
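i.e. the function is an intentional placeholder (sketch):

    struct anv_state_pool;   /* fields irrelevant to the sketch */

    void anv_state_pool_finish(struct anv_state_pool *pool)
    {
       /* Deliberately empty: init/finish now pair up from day one, so
        * future cleanup has an obvious home. */
       (void)pool;
    }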
Chad Versace
2c2233e328 vk: Prefix most filenames with anv
Jason started the task by creating anv_cmd_buffer.c and anv_cmd_emit.c.
This patch finishes the task by renaming all other files except
gen*_pack.h and glsl_scraper.py.
2015-07-17 20:25:38 -07:00
Renamed from src/vulkan/allocator.c