Commit graph

118 commits

Jason Ekstrand
b29cf7daf3 anv: Make anv_vma_alloc/free a lot dumber
All they do now is take a size, align, and flags and figure out which
heap to allocate in.  All of the actual code to deal with the BO is in
anv_allocator.c.  We leave anv_vma_alloc/free in anv_device.c because
they deal with API-exposed heaps, so it still makes sense for them to
live there.
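
A hedged sketch of the resulting shape (the types, names, and heap
layout below are illustrative assumptions, not the actual driver code):

  #include <stdint.h>
  #include <stdbool.h>

  struct vma_heap { uint64_t next_addr; };
  struct device   { struct vma_heap general_heap, client_visible_heap; };

  /* All that's left here is heap selection; the BO itself is handled
   * entirely in anv_allocator.c. */
  static uint64_t
  vma_alloc_sketch(struct device *dev, uint64_t size, uint64_t align,
                   bool client_visible)
  {
     struct vma_heap *heap = client_visible ? &dev->client_visible_heap
                                            : &dev->general_heap;
     uint64_t addr = (heap->next_addr + align - 1) & ~(align - 1);
     heap->next_addr = addr + size;
     return addr;
  }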

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/merge_requests/3519>
2020-01-25 02:18:33 +00:00
Jason Ekstrand
cb6ea77045 anv: Take an anv_device in vk_errorf
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/merge_requests/3461>
2020-01-20 22:08:52 +00:00
Jason Ekstrand
70e8064e13 anv: Add an anv_physical_device field to anv_device
Having to always pull the physical device from the instance has been
annoying for almost as long as the driver has existed.  It also won't
work in a world where we ever have more than one physical device.  This
commit adds a new field called "physical" to anv_device and switches
every location where we use device->instance->physicalDevice to use the
new field instead.

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/merge_requests/3461>
2020-01-20 22:08:52 +00:00
Eric Engestrom
c327245257 anv: drop unused #include
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2019-12-13 20:42:40 +00:00
Jason Ekstrand
bce1c3c668 anv: Re-capture all batch and state buffers
When we moved from allocating BOs directly to using the BO cache, we
lost the EXEC_OBJECT_CAPTURE flag on all our state buffers.

Fixes: 3119b96bdf "anv: Allocate block pool BOs from the cache"
Fixes: ee77938733 "anv: Allocate batch and fence buffers from..."
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2019-12-07 04:03:35 +00:00
Jason Ekstrand
a8e59b3708 anv: Add allocator support for client-visible addresses
When a BO is flagged as having a client visible address, we put it in
its own heap.  We also support the client explicitly specifying an
address in said heap.  If an address collision happens, we return false
from anv_vma_alloc which turns into a VK_ERROR_OUT_OF_DEVICE_MEMORY.
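
A rough sketch of that collision check (simplified, hypothetical types;
the real bookkeeping lives in the allocator):

  #include <stdint.h>
  #include <stdbool.h>

  struct vma_range { uint64_t addr, size; bool in_use; };
  struct heap      { struct vma_range ranges[64]; };

  /* Returns false on an address collision; the caller turns that into
   * VK_ERROR_OUT_OF_DEVICE_MEMORY. */
  static bool
  vma_alloc_explicit_sketch(struct heap *heap, uint64_t addr, uint64_t size)
  {
     for (unsigned i = 0; i < 64; i++) {
        const struct vma_range *r = &heap->ranges[i];
        if (r->in_use && addr < r->addr + r->size && r->addr < addr + size)
           return false;   /* overlaps an existing allocation */
     }
     /* ... record [addr, addr + size) as in use ... */
     return true;
  }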

Reviewed-by: Ivan Briano <ivan.briano@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-12-05 10:59:10 -06:00
Jason Ekstrand
03450e9cfc anv: Add an explicit_address parameter to anv_device_alloc_bo
We already have a mechanism for specifying that we want a fixed address
provided by the driver internals.  We're about to let the client start
specifying addresses in some very special scenarios as well so we want
to pass this through to the allocation function.

Reviewed-by: Ivan Briano <ivan.briano@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-12-05 10:59:10 -06:00
Jason Ekstrand
0bba88081b anv: Drop bo_flags from anv_bo_pool
In ee77938733, we started using the BO cache for anv_bo_pool and
stopped using the bo_flags parameter.  However, we never dropped it from
the struct or the init function.

Reviewed-by: Ivan Briano <ivan.briano@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-12-05 10:58:14 -06:00
Jason Ekstrand
0ca0ad1252 anv: Zero released anv_bo structs
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-10-31 13:46:09 +00:00
Jason Ekstrand
63d7a38630 anv: Drop anv_bo_init and anv_bo_init_new
BOs are now only ever allocated through the BO cache so there's no need
to have these exposed.

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-10-31 13:46:09 +00:00
Jason Ekstrand
d0ec55d5a3 anv: Allocate scratch BOs from the cache
While we're here, we get rid of the locking and use a lock-free
algorithm.  The chances of contention while spilling are low and this
is actually a bit simpler in some ways.
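
The lock-free pattern is essentially a compare-and-swap install:
allocate speculatively, try to publish, and free ours if another thread
won the race.  A sketch under assumed names (C11 atomics):

  #include <stdatomic.h>
  #include <stddef.h>

  struct bo;                            /* opaque for this sketch */
  extern struct bo *alloc_bo(size_t size);
  extern void free_bo(struct bo *bo);

  static struct bo *
  get_scratch_bo_sketch(_Atomic(struct bo *) *slot, size_t size)
  {
     struct bo *bo = atomic_load(slot);
     if (bo)
        return bo;

     struct bo *fresh = alloc_bo(size);
     struct bo *expected = NULL;
     if (atomic_compare_exchange_strong(slot, &expected, fresh))
        return fresh;                   /* we won the race */

     free_bo(fresh);                    /* another thread beat us to it */
     return expected;                   /* use the winner's BO */
  }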

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-10-31 13:46:09 +00:00
Jason Ekstrand
ee77938733 anv: Allocate batch and fence buffers from the cache
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-10-31 13:46:09 +00:00
Jason Ekstrand
3119b96bdf anv: Allocate block pool BOs from the cache
This commit switches block pools over to being allocated from the BO
cache rather than being allocated manually by the block pool.

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-10-31 13:46:09 +00:00
Jason Ekstrand
5c664dff75 anv: Choose BO flags internally in anv_block_pool
All block pools are allocated with the same flags.  There's no good
reason why it needs to be configurable.

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-10-31 13:46:09 +00:00
Jason Ekstrand
a44f5ee0d8 anv: Rework the internal BO allocation API
This makes a number of changes to the current API:

 1. Everything is renamed to anv_device_* instead of anv_bo_cache_*
    because the BO cache is soon going to be the sole BO allocation path
    and not some special case to make import/export work.

 2. Drop the cache parameter.  It's totally redundant with the device
    and just annoying to keep typing.

 3. Rework flags so that they go the convenient direction for usage in
    ANV rather than whichever awkward way i915 specifies them for
    backwards compatibility.  This also gives us the opportunity to set
    some defaults (see the sketch after this list).

 4. Add flags for mapping and coherency.
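
Point 3 might end up looking something like this (flag names here are
hypothetical, chosen so that zero means "plain internal BO" and each
bit opts in, rather than mirroring i915's encoding):

  enum bo_alloc_flags {
     BO_ALLOC_MAPPED        = (1 << 0),  /* CPU-map at allocation time */
     BO_ALLOC_SNOOPED       = (1 << 1),  /* coherent on non-LLC parts */
     BO_ALLOC_EXTERNAL      = (1 << 2),  /* may be shared across processes */
     BO_ALLOC_32BIT_ADDRESS = (1 << 3),  /* restrict to a 32-bit heap */
  };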

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-10-31 13:46:09 +00:00
Jason Ekstrand
3178e583c8 anv: Rework anv_block_pool_expand_range
The growing algorithms for the softpin case and the userptr version are
almost entirely different.  Having this weird join doesn't make the code
more comprehensible.  This rework does a few things:

 1. Move the comment about 48-bit addresses to anv_device_init where we
    actually unset the EXEC_OBJECT_SUPPORTS_48B_ADDRESS flag.

 2. Separate the paths in anv_block_pool_expand_range so it's easier to
    see what happens in the two different cases.

 3. Use the anv_block_pool::bos array for storing all allocated BOs in
    both paths rather than the cleanup list.  This lets the cleanups
    array be used only for mmaps of the memfd in the userptr case.

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-10-31 13:46:08 +00:00
Jason Ekstrand
bb257e1852 anv: Fix a potential BO handle leak
Fixes: 731c4adcf9 "anv/allocator: Add support for non-userptr"
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-10-31 13:46:08 +00:00
Jason Ekstrand
6f4fa81769 anv: Handle state pool relocations using "wrapper" BOs
Instead of depending on a mutable BO in the state pool for handling
growing state pools, add a concept of "wrapper" BOs which just wrap an
actual BO.  This way, the wrapper can exist once for all of time and we
can put it in relocation lists even if the actual BO it references gets
swapped out.
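
The indirection is a single pointer chase: relocation lists hold the
immortal wrapper, and we unwrap it to the current real BO when it's
actually used.  A sketch with assumed field names:

  #include <stdbool.h>
  #include <stddef.h>

  struct bo {
     bool is_wrapper;
     struct bo *actual;   /* valid when is_wrapper; may be swapped out */
  };

  /* Follow wrapper BOs down to the real BO before use. */
  static struct bo *
  unwrap_bo_sketch(struct bo *bo)
  {
     while (bo->is_wrapper)
        bo = bo->actual;
     return bo;
  }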

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-10-31 13:46:08 +00:00
Jason Ekstrand
b781c85c79 anv: Replace ANV_BO_EXTERNAL with anv_bo::is_external
We're not THAT strapped for space that we can't burn one extra bit for
a boolean.  If we're really worried about it, we can always shrink the
flags field to 16 bits because the kernel only uses 7 currently.

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-10-31 13:46:08 +00:00
Jason Ekstrand
5534358ef6 anv: Inline anv_block_pool_get_bo
It has exactly one caller and we're about to change some of the dynamics
which would make this confusing as a separate function.

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-10-31 13:46:08 +00:00
Jason Ekstrand
c0a4722f29 anv: Declare the bo in the anv_block_pool_foreach_bo loop
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-10-31 13:46:08 +00:00
Jason Ekstrand
bbf389013f anv: Use a util_sparse_array for the GEM handle -> BO map
This lets us do less allocation because the anv_bo's are now embedded in
the sparse array and it also allows lock-free translation from GEM
handle to BO which will be useful in future commits.
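
util_sparse_array gives an O(1), lock-free mapping from a sparse integer
key to an element embedded in the array.  A hedged usage sketch (check
src/util/sparse_array.h for the exact API):

  #include "util/sparse_array.h"

  struct my_bo { uint32_t gem_handle; uint32_t refcount; };

  static struct util_sparse_array bo_map;

  static void
  init_bo_map(void)
  {
     /* element size, then the node size used for the backing pages */
     util_sparse_array_init(&bo_map, sizeof(struct my_bo), 1024);
  }

  static struct my_bo *
  lookup_bo(uint32_t gem_handle)
  {
     /* allocates backing storage on first use; no lock required */
     return util_sparse_array_get(&bo_map, gem_handle);
  }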

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-10-31 13:46:08 +00:00
Jason Ekstrand
821ce7be36 anv: Move refcount to anv_bo
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-10-31 13:46:08 +00:00
Kenneth Graunke
b9e93db208 intel: Increase Gen11 compute shader scratch IDs to 64.
From the MEDIA_VFE_STATE docs:

   "Starting with this configuration, the Maximum Number of Threads must
    be set to (#EU * 8) for GPGPU dispatches.

    Although there are only 7 threads per EU in the configuration, the
    FFTID is calculated as if there are 8 threads per EU, which in turn
    requires a larger amount of Scratch Space to be allocated by the
    driver."

It's pretty clear that we need to increase this for scratch address
calculations, because the FFTID is calculated as if there are 8 threads
per EU.  The quote above seems to indicate that we should increase the
actual thread count programmed in MEDIA_VFE_STATE as well, but we think
the intention is to only bump the scratch space.

Fixes GPU hangs in Bioshock Infinite and Synmark's CSDof on Icelake 8x8.
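
In other words, the scratch buffer must be sized as if each EU had 8
threads.  A sketch of the arithmetic (the parameters are placeholders):

  #include <stdint.h>

  static uint64_t
  gen11_cs_scratch_size_sketch(uint32_t eu_count,
                               uint64_t per_thread_scratch)
  {
     /* FFTIDs are assigned as if there were 8 threads per EU, even
      * though only 7 can run, so allocate eu_count * 8 slots. */
     return (uint64_t)eu_count * 8 * per_thread_scratch;
  }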

Fixes: 5ac804bd9a ("intel: Add a preliminary device for Ice Lake")
Reviewed-by: Matt Turner <mattst88@gmail.com>
2019-09-23 16:59:40 -07:00
Greg V
7b520dc74f anv: add MAP_POPULATE fallback define for portability
FreeBSD does not have MAP_POPULATE
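
Such a fallback define is presumably the usual shape: MAP_POPULATE is
only a prefaulting hint, so defining it to 0 makes it a no-op where the
OS doesn't support it.

  #include <sys/mman.h>

  /* FreeBSD has no MAP_POPULATE; 0 turns the flag into a no-op. */
  #ifndef MAP_POPULATE
  #define MAP_POPULATE 0
  #endif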

Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
2019-08-08 21:44:33 +01:00
Greg V
c0376a1234 util: add anon_file.h for all memfd/temp file usage
Move the Weston os_create_anonymous_file code from egl/wayland into
util, add support for Linux memfd and FreeBSD SHM_ANON, and use that
code in anv/aubinator instead of explicit memfd calls, for portability.
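
Callers then use one portable entry point instead of raw memfd_create()
or shm_open(SHM_ANON, ...) calls; a usage sketch (the prototype below
is quoted from memory, so verify it against src/util/anon_file.h):

  #include <sys/types.h>

  /* From src/util/anon_file.h (as recalled; verify there). */
  int os_create_anonymous_file(off_t size, const char *debug_name);

  static int
  make_pool_fd(void)
  {
     /* one call covers Linux memfd, FreeBSD SHM_ANON, and a fallback */
     return os_create_anonymous_file(65536, "block pool");
  }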

Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
2019-08-07 22:57:55 +00:00
Caio Marcelo de Oliveira Filho
09c4037dda anv: Fix pool allocator when first alloc needs to grow
When using softpin, the padding and offset were not calculated
correctly when the first allocation needed to grow.  We were missing
initializing state.end right after expanding the pool for the first
time.

This is not a problem for non-softpin, since there we don't use
leftover padding, so the ends re-arrange incrementally.

This fixes running dEQP-VK.ssbo.phys.layout.random.16bit.scalar.13 in
SKL -- the test uses a shader larger than the initial size for the
instruction pool.

Fixes: dfc9ab2ccd "anv/allocator: Add padding information."
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-07-12 22:25:37 -07:00
Lionel Landwerlin
f2f6ac1c08 anv: Use corresponding type from the vector allocation
We didn't notice this issue much because the two structs share a
similar layout, except for the additional fields...

We ran into the issue in anv:

==15236== Invalid write of size 8
==15236==    at 0x8CF3939C: anv_state_table_expand_range (anv_allocator.c:211)
==15236==    by 0x8CF394D5: anv_state_table_grow (anv_allocator.c:264)
==15236==    by 0x8CF3967E: anv_state_table_add (anv_allocator.c:312)
==15236==    by 0x8CF3B13C: anv_state_pool_alloc_no_vg (anv_allocator.c:1167)
==15236==    by 0x8CF3B2B0: anv_state_pool_alloc (anv_allocator.c:1190)
==15236==    by 0x8CF60871: alloc_surface_state (anv_image.c:1122)
==15236==    by 0x8CF61FF9: anv_CreateImageView (anv_image.c:1519)
==15236==    by 0x8BCBD2ED: vkCreateImageView (trampoline.c:1358)
==15236==  Address 0x8994ef10 is 0 bytes after a block of size 128 alloc'd
==15236==    at 0x4C2FB0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==15236==    by 0x8D2578E6: u_vector_init (u_vector.c:47)
==15236==    by 0x8CF3929A: anv_state_table_init (anv_allocator.c:168)
==15236==    by 0x8CF3A99A: anv_state_pool_init (anv_allocator.c:921)
==15236==    by 0x8CF56517: anv_CreateDevice (anv_device.c:1909)
==15236==    by 0x8BCB4FBA: terminator_CreateDevice (loader.c:6073)
==15236==    by 0x8DD2CB3D: ??? (in /home/djdeath/.steam/ubuntu12_64/libVkLayer_steam_fossilize.so)
==15236==    by 0x8DF4D241: vkCreateDevice (in /home/djdeath/.steam/ubuntu12_64/steamoverlayvulkanlayer.so)
==15236==    by 0x8BCB35C6: loader_create_device_chain (loader.c:5449)
==15236==    by 0x8BCBC230: vkCreateDevice (trampoline.c:838)

v2: Rename mmap_cleanups to avoid confusion (Caio)

v3: s/fail_mmap_cleanups/fail_cleanups/ (Caio)

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110648
Cc: <mesa-stable@lists.freedesktop.org>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2019-05-09 21:57:26 +01:00
Tapani Pälli
a9555f37d5 anv: use anv_gem_munmap in block pool cleanup
Use anv_gem_munmap to unmap when softpin is in use; this corresponds to
the anv_gem_mmap used in anv_block_pool_expand_range.  This fixes
valgrind errors seen for each pool when softpin is in use:

  ==25581== 262,144 bytes in 1 blocks are definitely lost in loss record 31 of 31
  ==25581==    at 0x50E77E8: anv_gem_mmap (anv_gem.c:96)
  ==25581==    by 0x50EEE2B: anv_block_pool_expand_range (anv_allocator.c:543)
  ==25581==    by 0x50EEB51: anv_block_pool_init (anv_allocator.c:477)
  ==25581==    by 0x50EF7EF: anv_state_pool_init (anv_allocator.c:920)
  ==25581==    by 0x510B8EB: anv_CreateDevice (anv_device.c:2031)

Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-03-07 07:36:28 +02:00
Caio Marcelo de Oliveira Filho
69cc6272fb anv: Implement VK_EXT_external_memory_host
v2: Ignore the import if handleType == 0. (Jason)

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-03-05 12:59:50 -08:00
Jason Ekstrand
9b202239ba anv: Silence some compiler warnings in release builds
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2019-02-14 16:04:45 -06:00
Rafael Antognolli
f2ece26601 anv/allocator: Avoid race condition in anv_block_pool_map.
Accessing bo->map and then pool->center_bo_offset without a lock is
racy.  One way of avoiding such a race condition is to store bo->map +
center_bo_offset into pool->map at the time the block pool grows, which
happens within a lock.
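
Conceptually, the fix composes the two racy reads inside the grow path,
which already holds the lock; a sketch with assumed names:

  struct pool_sketch {
     void    *map;            /* bo_map + center_bo_offset, pre-composed */
     void    *bo_map;
     uint32_t center_bo_offset;
  };

  /* Called with the pool mutex held, so readers only ever see a fully
   * composed pool->map and never mix old/new values of the two fields. */
  static void
  pool_grow_locked_sketch(struct pool_sketch *pool)
  {
     /* ... remap the BO and update center_bo_offset ... */
     pool->map = (char *)pool->bo_map + pool->center_bo_offset;
  }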

v2: Only set pool->map if not using softpin (Jason).
v3: Move things around and only update center_bo_offset if not using
softpin too (Jason).

Cc: Jason Ekstrand <jason@jlekstrand.net>
Reported-by: Ian Romanick <idr@freedesktop.org>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109442
Fixes: fc3f588320
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-24 17:39:40 -08:00
Rafael Antognolli
731c4adcf9 anv/allocator: Add support for non-userptr.
If softpin is supported, create new BOs for the required size and add the
respective BO maps. The other main change of this commit is that
anv_block_pool_map() now returns the map for the BO that the given
offset is part of.  So there's no block_pool->map access anymore (when
softpin is used).

v3:
 - set fd to -1 on softpin case (Jason)

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:08:24 -08:00
Rafael Antognolli
5d61c74f3d anv/allocator: Enable snooping on block pool and anv_bo_pool BOs.
We are not going to use userptr for anv block pool BOs anymore. However,
so far we have been relying on the fact that userptr BOs are snooped on
non-LLC platforms. Let's make sure that the block pool BOs are still
snooped, and we can also remove the clflush'ing that we do on all state
buffers.

And since we plan to remove the flushes, set the anv_bo_pool BOs to
cached (snooped on non-LLC platforms) too. For LLC platforms, they are
all cached by default, so this becomes a no-op.

v5:
 - Add snooping to anv_bo_pool BOs too (Jason).
 - Remove anv_gem_set_domain.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:08:20 -08:00
Rafael Antognolli
dfc9ab2ccd anv/allocator: Add padding information.
It's possible that we still have some space left in the block pool, but
we try to allocate a state larger than that space.  This means such a
state would start somewhere within the range of the old block_pool, and
end after that range, within the range of the new size.

That's fine when we use userptr, since the memory in the block pool is
CPU mapped contiguously.  However, by the end of this series, we will
have the block_pool split into different BOs, with different CPU
mapping ranges that are not necessarily contiguous.  So we must avoid
the case of a given state being part of two different BOs in the block
pool.

This commit solves the issue by detecting that we are growing the
block_pool even though we are not at the end of the range. If that
happens, we don't use the space left at the end of the old size, and
consider it as "padding" that can't be used in the allocation. We update
the size requested from the block pool to take the padding into account,
and return the offset after the padding, which happens to be at the
start of the new address range.

Additionally, we return the amount of padding we used, so the caller
knows that this happens and can return that padding back into a list of
free states, that can be reused later. This way we hopefully don't waste
any space, but also avoid having a state split between two different
BOs.
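
The detection boils down to: if the allocation would straddle the old
end of the pool, skip the leftover bytes as padding and place the state
at the start of the newly grown range.  A sketch:

  #include <stdint.h>

  /* Returns the offset for the new state; *padding_out is the skipped
   * tail of the old range that the caller returns to the free lists. */
  static uint32_t
  alloc_with_padding_sketch(uint32_t offset, uint32_t size,
                            uint32_t old_end, uint32_t *padding_out)
  {
     uint32_t padding = 0;
     if (offset < old_end && offset + size > old_end) {
        /* Would span two BOs: give up the tail of the old range. */
        padding = old_end - offset;
        offset = old_end;
     }
     *padding_out = padding;
     return offset;
  }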

v3:
 - Calculate offset + padding at anv_block_pool_alloc_new (Jason).
v4:
 - Remove extra "leftover".

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:08:19 -08:00
Rafael Antognolli
7ed0898a8d anv/allocator: Rework chunk return to the state pool.
This commit tries to rework the code that splits and returns chunks
back to the state pool, while still keeping the same logic.

The original code would get a chunk larger than we need and split it
into pool->block_size. Then it would return all but the first one, and
would split that first one into alloc_size chunks. Then it would keep
the first one (for the allocation), and return the others back to the
pool.

The new anv_state_pool_return_chunk() function will take a chunk (with
the alloc_size part removed), and a small_size hint. It then splits that
chunk into pool->block_size'd chunks, and if there's some space still
left, split that into small_size chunks. small_size in this case is the
same size as alloc_size.

The idea is to keep the same logic, but structure it in a way that we
can reuse to return other chunks to the pool when we are growing the
buffer.
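
A sketch of that splitting order (struct layout and the push helper are
assumed for illustration):

  #include <stdint.h>

  struct pool_sketch { uint32_t block_size; };
  extern void push_free(struct pool_sketch *pool, uint32_t size,
                        uint32_t offset);

  /* Return 'size' bytes at 'offset' (alloc_size already removed):
   * first as block_size'd chunks, then as small_size leftovers. */
  static void
  return_chunk_sketch(struct pool_sketch *pool, uint32_t offset,
                      uint32_t size, uint32_t small_size)
  {
     while (size >= pool->block_size) {
        push_free(pool, pool->block_size, offset);
        offset += pool->block_size;
        size -= pool->block_size;
     }
     while (small_size > 0 && size >= small_size) {
        push_free(pool, small_size, offset);
        offset += small_size;
        size -= small_size;
     }
  }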

v2:
 - Include Jason's suggestions to the algorithm that returns chunks.
 - Update comments.

v3:
 - Disallow returning 0 blocks (Jason).
 - fix min_size in the loop (Jason).
 - remove temporary variables (Jason)
v4:
 - return_chunk() should never return blocks larger than
 pool->block_size.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:08:17 -08:00
Rafael Antognolli
f874604f45 anv/allocator: Add support for a list of BOs in block pool.
So far we use only one BO (the last one created) in the block pool.
When we stop using the userptr API, we will need multiple BOs.  So add
code now to store multiple BOs in the block pool.

This has several implications, the main one being that we can't use
pool->map as before. For that reason we update the getter to find which
BO a given offset is part of, and return the respective map.

v3:
 - Simplify anv_block_pool_map (Jason).
 - Use fixed size array for anv_bo's (Jason)
v4:
 - Respect the order (item, container) in anv_block_pool_foreach_bo
 (Jason).

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:08:04 -08:00
Rafael Antognolli
e3dc56d731 anv: Update usage of block_pool->bo.
Change block_pool->bo to be a pointer, and update its usage everywhere.
This makes it simpler to switch it later to a list of BOs.

v3:
 - Use a static "bos" field in the struct, instead of malloc'ing it.
 This will be later changed to a fixed length array of BOs.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:08:02 -08:00
Rafael Antognolli
fc3f588320 anv/allocator: Remove pool->map.
After switching to using anv_state_table, there are very few places
left still using pool->map directly.  We want to avoid that because it
won't always be the right map once we split the pool into multiple BOs.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:08:00 -08:00
Rafael Antognolli
54e21e145e anv/allocator: Rename anv_free_list2 to anv_free_list.
Now that the original anv_free_list is removed, we can use its name.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:07:58 -08:00
Rafael Antognolli
234c9d8a40 anv/allocator: Remove anv_free_list.
The next commit renames anv_free_list2 -> anv_free_list, now that the
old one is gone.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:07:56 -08:00
Rafael Antognolli
e2179aceaf anv/allocator: Use anv_state_table on back_alloc too.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:07:52 -08:00
Rafael Antognolli
d18267fb48 anv/allocator: Use anv_state_table on anv_state_pool_alloc.
Use anv_state_pool_return_blocks() to return blocks to the pool, instead
of manually pushing them.

v3:
 - return blocks from the end of the chunk (Jason).

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:07:50 -08:00
Rafael Antognolli
6a1dcfe73d anv/allocator: Add helper to push states back to the state table.
The use of anv_state_table_add() combined with anv_state_table_push(),
especially when adding a bunch of states to the table, is very verbose.
So we add this helper that makes things easier to digest.

We also already add the anv_state_table member in this commit, so things
can compile properly, even though it's not used.

v2: assert that the states are always aligned to their size (Jason)
v3: Add "table" member to anv_state_pool in this commit.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:07:47 -08:00
Rafael Antognolli
e8b6e0a5ba anv/allocator: Add getter for anv_block_pool.
We will need anv_block_pool_map() to find the map relative to some BO
that is not at the start of the block pool.
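
Once the pool spans several BOs, the getter walks them to find the one
containing the offset and returns a pointer into its map; a sketch with
assumed fields:

  #include <stdint.h>
  #include <stddef.h>

  struct pool_bo     { uint32_t offset, size; void *map; };
  struct pool_sketch { struct pool_bo bos[8]; unsigned nbos; };

  static void *
  block_pool_map_sketch(struct pool_sketch *pool, uint32_t offset)
  {
     for (unsigned i = 0; i < pool->nbos; i++) {
        struct pool_bo *bo = &pool->bos[i];
        if (offset >= bo->offset && offset < bo->offset + bo->size)
           return (char *)bo->map + (offset - bo->offset);
     }
     return NULL;   /* offset not backed by any BO */
  }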

v2: just return a pointer instead of a struct (Jason)
v4: Update comment (Jason)

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:07:43 -08:00
Rafael Antognolli
6a2d5ae305 anv/allocator: Add anv_state_table.
Add a structure to hold anv_states. This table will initially be used to
recycle anv_states, instead of relying on a linked list implemented in
GPU memory. Later it could be used so that all anv_states just point to
the content of this struct, instead of making copies of anv_states
everywhere.

One has to call anv_state_table_add(), which returns an index for the
state in the table, then get a pointer to the entry at that index, and
finally fill in the rest of the struct.
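
The add-then-fill pattern, sketched with stand-in declarations (the
real signatures live in the driver and may differ):

  #include <stdint.h>

  typedef int VkResult;                     /* stand-in */
  #define VK_SUCCESS 0
  struct anv_state { uint32_t offset, alloc_size; };
  struct anv_state_table;
  VkResult anv_state_table_add(struct anv_state_table *t, uint32_t *idx,
                               uint32_t count);
  struct anv_state *anv_state_table_get(struct anv_state_table *t,
                                        uint32_t idx);

  static VkResult
  alloc_state_sketch(struct anv_state_table *table, uint32_t offset,
                     uint32_t size)
  {
     uint32_t idx;
     VkResult result = anv_state_table_add(table, &idx, 1);
     if (result != VK_SUCCESS)
        return result;

     struct anv_state *state = anv_state_table_get(table, idx);
     state->offset = offset;
     state->alloc_size = size;
     return VK_SUCCESS;
  }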

TODO:
   1) There's a lot of common code between this table's backing store
   memory and the anv_block_pool buffer, due to how we grow it.  I think
   it's possible to refactor this and reuse code in both places.

   2) Add unit tests.

v3:
 - Rename state table memfd (Jason)
 - Return VK_ERROR_OUT_OF_HOST_MEMORY on more places (Jason)
 - anv_state_table_grow returns VkResult (Jason)
 - Rename variables to be more informative (Jason)
 - Return errors on state table grow.
 - Rename anv_state_table_push/pop to anv_free_list_push2/pop2
   This will be renamed again to remove the trailing "2" later.

v4:
 - Remove exit(-1) from anv_state_table (Jason).
 - Use uint32_t "next" field in anv_free_entry (Jason).

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:07:34 -08:00
Caio Marcelo de Oliveira Filho
09c3ff01df src/intel: use new hash table and set creation helpers
Replace calls to create hash tables and sets that use
_mesa_hash_pointer/_mesa_key_pointer_equal with the helpers
_mesa_pointer_hash_table_create() and _mesa_pointer_set_create().
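
The mechanical change at each call site looks like:

  /* before */
  ht = _mesa_hash_table_create(mem_ctx, _mesa_hash_pointer,
                               _mesa_key_pointer_equal);
  s = _mesa_set_create(mem_ctx, _mesa_hash_pointer,
                       _mesa_key_pointer_equal);

  /* after */
  ht = _mesa_pointer_hash_table_create(mem_ctx);
  s = _mesa_pointer_set_create(mem_ctx);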

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Acked-by: Eric Engestrom <eric@engestrom.ch>
2019-01-14 10:49:33 -08:00
Eric Engestrom
4f5a526789 anv: drop unneeded KHR suffix
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-08 18:47:56 +00:00
Dave Airlie
29a7631986 anv: add missing unlock in error path.
Not going to matter, but be consistent.

Found by coverity

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Fixes: caf41c78c (anv/allocator: Support softpin in the BO cache)
2018-10-11 09:50:27 +10:00
Jason Ekstrand
7a89a0d9ed anv: Use separate MOCS settings for external BOs
On Broadwell and above, we have to use different MOCS settings to allow
the kernel to take over and disable caching when needed for external
buffers.  On Broadwell, this is especially important because the kernel
can't disable eLLC so we have to do it in userspace.  We very badly
don't want to do that on everything so we need separate MOCS for
external and internal BOs.

In order to do this, we add an anv-specific BO flag for "external" and
use that to distinguish between buffers which may be shared with other
processes and/or display and those which are entirely internal.  That,
together with an anv_mocs_for_bo helper lets us choose the right MOCS
settings for each BO use.
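
The helper boils down to a per-BO choice between two device-level MOCS
values; a sketch with assumed names:

  #include <stdint.h>
  #include <stdbool.h>

  struct bo_sketch     { bool is_external; };
  struct device_sketch { uint32_t internal_mocs, external_mocs; };

  /* External (shareable/scanout) BOs get MOCS the kernel can override;
   * everything else keeps the fast fully-cached settings. */
  static uint32_t
  mocs_for_bo_sketch(const struct device_sketch *dev,
                     const struct bo_sketch *bo)
  {
     return bo->is_external ? dev->external_mocs : dev->internal_mocs;
  }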

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99507
Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2018-10-03 09:03:03 -05:00