mirror of
https://gitlab.freedesktop.org/mesa/mesa.git
synced 2026-05-03 05:38:16 +02:00
gfxstream: mega-change to support guest Linux WSI with gfxstream
This is a mega-change to support Linux guest WSI with gfxstream. We tried to do a branch where every commit was buildable and runnable, but that quickly proved unworkable, so we squashed the branch into a mega-change.

Zink provides the GL implementation for Linux guests, so we only needed to implement the proper Vulkan Wayland/X11 WSI entrypoints. The overall strategy is to use Mesa's WSI functions. The Vulkan WSI layer was also considered:

https://gitlab.freedesktop.org/mesa/vulkan-wsi-layer

but it is less maintained than Mesa. Mesa's common layers communicate with drivers through base objects embedded in the driver and a common dispatch layer:

https://gitlab.freedesktop.org/mesa/mesa/-/blob/main/docs/vulkan/dispatch.rst
https://gitlab.freedesktop.org/mesa/mesa/-/blob/main/docs/vulkan/base-objs.rst

Our objects are defined in gfxstream_vk_private.h. Currently, the Mesa-derived Vulkan objects just serve as shims around gfxstream Vulkan's internal handle mapping. Long-term, we can use the Mesa-derived objects exclusively inside gfxstream guest Vulkan. The typical flow inside a Vulkan entrypoint is:

- VK_FROM_HANDLE(vk-object) to convert the handle to a gfxstream_vk_obj object
- Call ResourceTracker::func(gfxstream_vk_obj->internal) or VkEncoder::func(gfxstream_vk_obj->internal)
- Return the result

A good follow-up cleanup would be to delete the older gfxstream objects. For example, we now have both struct gfxstream_vk_device and info_VkDevice in ResourceTracker.

Most of this logic was auto-generated and lives in func_table.cpp. Some Vulkan functions were too difficult to auto-generate or required special logic; these live in gfxstream_vk_device.cpp. For example, anything that needs to set up the HostConnection requires special handling.

Android Blueprint support is added to the parts of Mesa needed to build the Vulkan runtime.
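The entrypoint flow above (unwrap the Mesa-derived shim, forward the internal handle) can be sketched as a small Python model. This is illustrative only: GfxstreamVkCommandPool, FakeEncoder, and gfxstream_vk_ResetCommandPool are hypothetical stand-ins for the real C++ objects and generated code.

```python
# Illustrative model of the entrypoint flow: a Mesa-derived shim object wraps
# gfxstream's internal handle, and the generated entrypoint unwraps it before
# calling into the encoder.
class GfxstreamVkCommandPool:
    """Stands in for struct gfxstream_vk_command_pool (hypothetical model)."""
    def __init__(self, internal_object):
        self.internal_object = internal_object  # gfxstream's own handle

class FakeEncoder:
    """Stands in for VkEncoder; records the internal handle it receives."""
    def __init__(self):
        self.calls = []
    def vkResetCommandPool(self, internal_handle, flags):
        self.calls.append(("vkResetCommandPool", internal_handle, flags))
        return "VK_SUCCESS"

def gfxstream_vk_ResetCommandPool(enc, pool_shim, flags):
    # Equivalent of VK_FROM_HANDLE(gfxstream_vk_command_pool, ...):
    # unwrap the shim, then forward the internal handle to the encoder.
    return enc.vkResetCommandPool(pool_shim.internal_object, flags)

enc = FakeEncoder()
pool = GfxstreamVkCommandPool(internal_object=0xABCD)
result = gfxstream_vk_ResetCommandPool(enc, pool, 0)
```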
One thing to call out: the guest/vulkan_enc and guest/vulkan files must now be built into the same shared library, where previously separate libvulkan_enc.so and libvulkan_ranchu.so libraries were sufficient [otherwise, some weak-pointer logic wouldn't work]. A side effect of this is that libOpenglSystem must also be a static lib, and so should libandroid_aemu. That conceptually makes sense, and the Meson build had been doing this all along. We can probably transition everything besides libGLESv1_emulation.so, libGLESv2_emulation.so and libvulkan_ranchu.so to be static.

This requires changes in the end2end tests, because each HostConnection is now separate and internal to its constituent library, so lifetimes need to be managed separately: for example, the HostConnection instance created by the end2end tests would not be visible inside libvulkan_ranchu.so anymore. Probably the best solution would be to improve the testing facade so that a HostConnection represents one virtio-gpu context, while some other entity represents a virtio-gpu device (client-server would work).

vk.xml was modified, but the change was sent to Khronos:

https://gitlab.khronos.org/vulkan/vulkan/-/merge_requests/6325

Fuchsia builds still need to be migrated, but they already have Fuchsia Mesa with all the build rules, so that shouldn't be too bad; we just need to copy them over to the gfxstream/Mesa hybrid.

The new command for building Linux guests is:

meson amd64-build/ -Dvulkan-drivers="gfxstream" -Dgallium-drivers="" -Dvk-no-nir=true -Dopengl=false

Big shout-out to Aaron Ruby, who did most of the gnarly codegen needed to get the function table logic to work.

* Run Weston/vkcube on Linux and automotive platform
* launch_cvd --gpu_mode=gfxstream vkcube
* launch_cvd --gpu_mode=gfxstream_guest_angle
* vkcube + 3D Mark Slingshot extreme work with guest ANGLE and GL-VK interop
* GfxstreamEnd2EndTests
* Some select dEQP tests

Aaron Ruby (46):
      gfxstream: function table: remove entry points that are hand-written.
      gfxstream: function table: more changes
      gfxstream: function table: scope internal_arrays to encoder
      gfxstream: function table: autogenerate compoundType params
      gfxstream: add handwritten EnumeratePhysicalDeviceGroup entrypoint.
      gfxstream: function table: handle nested handle arrays
      gfxstream: function table: adding some handwritten implementations
      gfxstream: revert some unnecessary changes
      gfxstream: use vk_object_zalloc/free instead of vk_zalloc/free.
      gfxstream: revert most gfxstream objects to use vk_object_base
      gfxstream: function table: handwritten commmand-buffers/pools
      gfxstream: codegen functionality to handle special param
      gfxstream: function table: random fixes
      gfxstream: add vk_command_buffer_ops handlers
      gfxstream: func_table.py: Codegen support for nested compound type
      gfxstream: remove handwritten/add autogen entry points
      gfxstream: add gfxstream_vk_device.cpp
      gfxstream: query device and instance extensions early
      gfxstream: func_table: explicit allocation for nested arrays/compound types
      gfxstream: goldfish_vulkan: fix commandBuffer allocation.
      gfxstream: meson: Raise api_version in ICD config to 1.1.
      gfxstream: function table: add more handwritten entries
      gfxstream: goldfish_vulkan: update VkDescriptorSetAllocateInfo logic
      gfxstream: function table: NULL check on internal_object dereference
      gfxstream: function table: Remove POSTPROCESSES handling from functable
      gfxstream: mesa: Add 'gfxstream' as a -Dvulkan-drivers
      gfxstream: ResourceTracker: add some allowedExtensions
      gfxstream: gfxstream_vk_device: add wsi_common_entrypoints
      gfxstream: Move instance handling into gfxstream_vk_device.cpp
      gfxstream: ResourceTracker: Enable Linux WSI-related extensions
      gfxstream: wsi: add wsi_device initialization
      gfxstream: gfxstream_vk_device: use Mesa common physical device management
      gfxstream: ResourceTracker: translate mesa objects in user buffer
      gfxstream: exclude VkSampler and VkDescriptorSet objects from translation
      gfxstream: Add guest-side external memory support with colorBuffers.
      gfxstream: function table: Modify semaphoreList inputs to no-op semaphores
      gfxstream: function table: Allow VK_NULL_HANDLE for free/destroy APIs.
      gfxstream: cereal: Add VK_EXT_depth_clip_enable as supported feature.
      gfxstream: vulkan_enc: un-namespace vk_util.h and vk_struct_id.h
      gfxstream: gfxstream_vk_device.cpp: Support VK_KHR_surface and VK_*_surface
      gfxstream: vulkan_enc: Add support for Mesa-only extensions.
      gfxstream: ResourceTracker: Use DEVICE_TYPE_VIRTUAL_GPU always
      gfxstream: platform: add dma-buf export support with dedicatedBuffer.
      gfxstream: ResourceTracker: add VK_EXT_depth_clip_enable allowed extension
      gfxstream: ResourceTracker: external memory via QNX_screen_buffer extension
      gfxstream: Add VK_QNX_external_memory_screen_buffer to VulkanDispatch

Gurchetan Singh (18):
      gfxstream: mesa: write Android.bp files
      gfxstream: generate gfxstream_vk_entrypoints.{c, h}
      gfxstream: vulkan_enc: add gfxstream_vk_private.h (objects)
      gfxstream: function table: modify function table to use gfxstream_vk_*
      gfxstream: compiles
      gfxstream: build system improvements
      gfxstream: ResourceTracker: don't crash without VkBindImageMemorySwapchainInfoKHR
      gfxstream: vk.xml: make some vkAcquireImageANDROID params optional
      gfxstream_vk_device: filter out swapchain maintenance guest side
      gfxstream: end2end: fixes for End2End tests
      gfxstream: func_table: custom vkEnumerateInstanceLayerProperties
      gfxstream: add VK_EXT_DEBUG_UTILS_EXTENSION_NAME into Mesa list
      gfxstream: clang-format guest code
      gfxstream: libandroid AEMU static
      gfxstream: vkEnumerateInstanceVersion
      gfxstream: vkCreateComputePipeLines
      gfxstream: make end2end tests happy
      gfxstream: delete prior vk.xml, vk_icd_gen.py

Reviewed-by: Aaron Ruby <aruby@blackberry.com>
Acked-by: Yonggang Luo <luoyonggang@gmail.com>
Acked-by: Adam Jackson <ajax@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27246>
This commit is contained in:
parent
2354b8ce20
commit
7b50e62179
43 changed files with 3339 additions and 1335 deletions
@@ -2,6 +2,10 @@ from .common.codegen import CodeGen, VulkanWrapperGenerator
from .common.vulkantypes import \
    VulkanAPI, makeVulkanTypeSimple, iterateVulkanType
from .common.vulkantypes import EXCLUDED_APIS
from .common.vulkantypes import HANDLE_TYPES

import copy
import re

RESOURCE_TRACKER_ENTRIES = [
    "vkEnumerateInstanceExtensionProperties",
@@ -91,13 +95,76 @@ SUCCESS_VAL = {
    "VkResult" : ["VK_SUCCESS"],
}

POSTPROCESSES = {
    "vkResetCommandPool" : """if (vkResetCommandPool_VkResult_return == VK_SUCCESS) {
        ResourceTracker::get()->resetCommandPoolStagingInfo(commandPool);
    }""",
    "vkAllocateCommandBuffers" : """if (vkAllocateCommandBuffers_VkResult_return == VK_SUCCESS) {
        ResourceTracker::get()->addToCommandPool(pAllocateInfo->commandPool, pAllocateInfo->commandBufferCount, pCommandBuffers);
    }""",
}

HANDWRITTEN_ENTRY_POINTS = [
    # Instance/device/physical-device special-handling, dispatch tables, etc..
    "vkCreateInstance",
    "vkDestroyInstance",
    "vkGetInstanceProcAddr",
    "vkEnumerateInstanceVersion",
    "vkEnumerateInstanceLayerProperties",
    "vkEnumerateInstanceExtensionProperties",
    "vkEnumerateDeviceExtensionProperties",
    "vkGetDeviceProcAddr",
    "vkEnumeratePhysicalDevices",
    "vkEnumeratePhysicalDeviceGroups",
    "vkCreateDevice",
    "vkDestroyDevice",
    "vkCreateComputePipelines",
    # Manual alloc/free + vk_*_init/free() call w/ special params
    "vkGetDeviceQueue",
    "vkGetDeviceQueue2",
    # Command pool/buffer handling
    "vkCreateCommandPool",
    "vkDestroyCommandPool",
    "vkAllocateCommandBuffers",
    "vkResetCommandPool",
    "vkFreeCommandBuffers",
    "vkResetCommandPool",
    # Special cases to handle struct translations in the pNext chain
    # TODO: Make a codegen module (use deepcopy as reference) to make this more robust
    "vkCmdBeginRenderPass2KHR",
    "vkCmdBeginRenderPass",
    "vkAllocateMemory",
]

# TODO: handles with no equivalent gfxstream objects (yet).
# Might need some special handling.
HANDLES_DONT_TRANSLATE = {
    "VkSurfaceKHR",
    ## The following objects have no need for mesa counterparts
    # Allows removal of handwritten create/destroy (for array).
    "VkDescriptorSet",
    # Bug in translation
    "VkSampler",
    "VkSamplerYcbcrConversion",
}

# Handles whose gfxstream object have non-base-object vk_ structs
# Optionally includes array of pairs of extraParams: {index, extraParam}
# -1 means drop parameter of paramName specified by extraParam
HANDLES_MESA_VK = {
    # Handwritten handlers (added here for completeness)
    "VkInstance" : None,
    "VkPhysicalDevice" : None,
    "VkDevice" : None,
    "VkQueue" : None,
    "VkCommandPool" : None,
    "VkCommandBuffer" : None,
    # Auto-generated creation/destroy
    "VkDeviceMemory" : None,
    "VkQueryPool" : None,
    "VkBuffer" : [[-1, "pMemoryRequirements"]],
    "VkBufferView" : None,
    "VkImage" : [[-1, "pMemoryRequirements"]],
    "VkImageView": [[1, "false /* driver_internal */"]],
    "VkSampler" : None,
}

# Types that have a corresponding method for transforming
# an input list to its internal counterpart
TYPES_TRANSFORM_LIST_METHOD = {
    "VkSemaphore",
    "VkSemaphoreSubmitInfo",
}

def is_cmdbuf_dispatch(api):
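The extraParams convention in HANDLES_MESA_VK (a pair [-1, name] drops that named parameter from the generated vk_*_create() call; [index, literal] inserts the literal at that position) can be restated as a small standalone sketch. apply_extra_params is a hypothetical helper written for illustration; the parameter lists below are assumed examples, not the generator's real output.

```python
# Sketch of the HANDLES_MESA_VK extraParams convention:
#   [-1, name]       -> drop the parameter with that name
#   [index, literal] -> insert the literal at that position
def apply_extra_params(create_params, extra_params):
    params = list(create_params)
    if extra_params:
        for index, value in extra_params:
            if index == -1:
                params.remove(value)  # drop parameter by name
            else:
                params.insert(index, value)
    return params

# VkBuffer: drop pMemoryRequirements from the vk_buffer_create() call
buffer_params = apply_extra_params(
    ["device", "pCreateInfo", "pMemoryRequirements", "sizeof(struct gfxstream_vk_buffer)"],
    [[-1, "pMemoryRequirements"]])

# VkImageView: insert "false /* driver_internal */" as the second argument
view_params = apply_extra_params(
    ["device", "pCreateInfo", "sizeof(struct gfxstream_vk_image_view)"],
    [[1, "false /* driver_internal */"]])
```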
@@ -106,6 +173,59 @@ def is_cmdbuf_dispatch(api):
def is_queue_dispatch(api):
    return "VkQueue" == api.parameters[0].typeName

def getCreateParam(api):
    for param in api.parameters:
        if param.isCreatedBy(api):
            return param
    return None

def getDestroyParam(api):
    for param in api.parameters:
        if param.isDestroyedBy(api):
            return param
    return None

# i.e. VkQueryPool --> vk_query_pool
def typeNameToMesaType(typeName):
    vkTypeNameRegex = "(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])"
    words = re.split(vkTypeNameRegex, typeName)
    outputType = "vk"
    for word in words[1:]:
        outputType += "_"
        outputType += word.lower()
    return outputType

def typeNameToBaseName(typeName):
    return typeNameToMesaType(typeName)[len("vk_"):]

def paramNameToObjectName(paramName):
    return "gfxstream_%s" % paramName

def typeNameToVkObjectType(typeName):
    return "VK_OBJECT_TYPE_%s" % typeNameToBaseName(typeName).upper()

def typeNameToObjectType(typeName):
    return "gfxstream_vk_%s" % typeNameToBaseName(typeName)

def transformListFuncName(typeName):
    return "transform%sList" % (typeName)

def hasMesaVkObject(typeName):
    return typeName in HANDLES_MESA_VK

def isAllocatorParam(param):
    ALLOCATOR_TYPE_NAME = "VkAllocationCallbacks"
    return (param.pointerIndirectionLevels == 1
            and param.isConst
            and param.typeName == ALLOCATOR_TYPE_NAME)

def isArrayParam(param):
    return (1 == param.pointerIndirectionLevels
            and param.isConst
            and "len" in param.attribs)

INTERNAL_OBJECT_NAME = "internal_object"

class VulkanFuncTable(VulkanWrapperGenerator):
    def __init__(self, module, typeInfo):
        VulkanWrapperGenerator.__init__(self, module, typeInfo)
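The name-mangling helpers above hinge on the camel-case-splitting regex in typeNameToMesaType. The sketch below uses the same regex verbatim to show what it produces; the wrapper name type_name_to_mesa_type is mine, not the generator's.

```python
import re

# Same regex as typeNameToMesaType above: split before an upper-case letter
# that follows a lower-case one, or before the last capital of an acronym run.
VK_TYPE_NAME_REGEX = "(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])"

def type_name_to_mesa_type(type_name):
    words = re.split(VK_TYPE_NAME_REGEX, type_name)
    return "vk" + "".join("_" + w.lower() for w in words[1:])

print(type_name_to_mesa_type("VkQueryPool"))              # vk_query_pool
print(type_name_to_mesa_type("VkSamplerYcbcrConversion")) # vk_sampler_ycbcr_conversion
```

Chaining the other helpers follows the same pattern, e.g. typeNameToVkObjectType("VkQueryPool") yields "VK_OBJECT_TYPE_QUERY_POOL".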
@@ -119,11 +239,6 @@ class VulkanFuncTable(VulkanWrapperGenerator):

    def onBegin(self,):
        cgen = self.cgen
        cgen.line("static void sOnInvalidDynamicallyCheckedCall(const char* apiname, const char* neededFeature)")
        cgen.beginBlock()
        cgen.stmt("ALOGE(\"invalid call to %s: %s not supported\", apiname, neededFeature)")
        cgen.stmt("abort()")
        cgen.endBlock()
        self.module.appendImpl(cgen.swapCode())
        pass

@ -144,264 +259,365 @@ class VulkanFuncTable(VulkanWrapperGenerator):
|
|||
api = typeInfo.apis[name]
|
||||
self.entries.append(api)
|
||||
self.entryFeatures.append(self.feature)
|
||||
self.loopVars = ["i", "j", "k", "l", "m", "n"]
|
||||
self.loopVarIndex = 0
|
||||
|
||||
def genEncoderOrResourceTrackerCall(cgen, api, declareResources=True):
|
||||
cgen.stmt("AEMU_SCOPED_TRACE(\"%s\")" % api.name)
|
||||
def getNextLoopVar():
|
||||
if self.loopVarIndex >= len(self.loopVars):
|
||||
raise
|
||||
loopVar = self.loopVars[self.loopVarIndex]
|
||||
self.loopVarIndex += 1
|
||||
return loopVar
|
||||
|
||||
if is_cmdbuf_dispatch(api):
|
||||
cgen.stmt("auto vkEnc = ResourceTracker::getCommandBufferEncoder(commandBuffer)")
|
||||
elif is_queue_dispatch(api):
|
||||
cgen.stmt("auto vkEnc = ResourceTracker::getQueueEncoder(queue)")
|
||||
def isCompoundType(typeName):
|
||||
return typeInfo.isCompoundType(typeName)
|
||||
|
||||
def handleTranslationRequired(typeName):
|
||||
return typeName in HANDLE_TYPES and typeName not in HANDLES_DONT_TRANSLATE
|
||||
|
||||
def translationRequired(typeName):
|
||||
if isCompoundType(typeName):
|
||||
struct = typeInfo.structs[typeName]
|
||||
for member in struct.members:
|
||||
if translationRequired(member.typeName):
|
||||
return True
|
||||
return False
|
||||
else:
|
||||
cgen.stmt("auto vkEnc = ResourceTracker::getThreadLocalEncoder()")
|
||||
return handleTranslationRequired(typeName)
|
||||
|
||||
def genDestroyGfxstreamObjects():
|
||||
destroyParam = getDestroyParam(api)
|
||||
if not destroyParam:
|
||||
return
|
||||
if not translationRequired(destroyParam.typeName):
|
||||
return
|
||||
objectName = paramNameToObjectName(destroyParam.paramName)
|
||||
allocatorParam = "NULL"
|
||||
for p in api.parameters:
|
||||
if isAllocatorParam(p):
|
||||
allocatorParam = p.paramName
|
||||
if not hasMesaVkObject(destroyParam.typeName):
|
||||
deviceParam = api.parameters[0]
|
||||
if "VkDevice" != deviceParam.typeName:
|
||||
print("ERROR: Unhandled non-VkDevice parameters[0]: %s (for API: %s)" %(deviceParam.typeName, api.name))
|
||||
raise
|
||||
# call vk_object_free() directly
|
||||
mesaObjectDestroy = "(void *)%s" % objectName
|
||||
cgen.funcCall(
|
||||
None,
|
||||
"vk_object_free",
|
||||
["&%s->vk" % paramNameToObjectName(deviceParam.paramName), allocatorParam, mesaObjectDestroy]
|
||||
)
|
||||
else:
|
||||
baseName = typeNameToBaseName(destroyParam.typeName)
|
||||
# objectName for destroy always at the back
|
||||
mesaObjectPrimary = "&%s->vk" % paramNameToObjectName(api.parameters[0].paramName)
|
||||
mesaObjectDestroy = "&%s->vk" % objectName
|
||||
cgen.funcCall(
|
||||
None,
|
||||
"vk_%s_destroy" % (baseName),
|
||||
[mesaObjectPrimary, allocatorParam, mesaObjectDestroy]
|
||||
)
|
||||
|
||||
def genMesaObjectAlloc(allocCallLhs):
|
||||
deviceParam = api.parameters[0]
|
||||
if "VkDevice" != deviceParam.typeName:
|
||||
print("ERROR: Unhandled non-VkDevice parameters[0]: %s (for API: %s)" %(deviceParam.typeName, api.name))
|
||||
raise
|
||||
allocatorParam = "NULL"
|
||||
for p in api.parameters:
|
||||
if isAllocatorParam(p):
|
||||
allocatorParam = p.paramName
|
||||
createParam = getCreateParam(api)
|
||||
objectType = typeNameToObjectType(createParam.typeName)
|
||||
# Call vk_object_zalloc directly
|
||||
cgen.funcCall(
|
||||
allocCallLhs,
|
||||
"(%s *)vk_object_zalloc" % objectType,
|
||||
["&%s->vk" % paramNameToObjectName(deviceParam.paramName), allocatorParam, ("sizeof(%s)" % objectType), typeNameToVkObjectType(createParam.typeName)]
|
||||
)
|
||||
|
||||
def genMesaObjectCreate(createCallLhs):
|
||||
def dropParam(params, drop):
|
||||
for p in params:
|
||||
if p == drop:
|
||||
params.remove(p)
|
||||
return params
|
||||
createParam = getCreateParam(api)
|
||||
objectType = "struct %s" % typeNameToObjectType(createParam.typeName)
|
||||
modParams = copy.deepcopy(api.parameters)
|
||||
# Mod params for the vk_%s_create() call i.e. vk_buffer_create()
|
||||
for p in modParams:
|
||||
if p.paramName == createParam.paramName:
|
||||
modParams.remove(p)
|
||||
elif handleTranslationRequired(p.typeName):
|
||||
# Cast handle to the mesa type
|
||||
p.paramName = ("(%s*)%s" % (typeNameToMesaType(p.typeName), paramNameToObjectName(p.paramName)))
|
||||
mesaCreateParams = [p.paramName for p in modParams] + ["sizeof(%s)" % objectType]
|
||||
# Some special handling
|
||||
extraParams = HANDLES_MESA_VK[createParam.typeName]
|
||||
if extraParams:
|
||||
for pair in extraParams:
|
||||
if -1 == pair[0]:
|
||||
mesaCreateParams = dropParam(mesaCreateParams, pair[1])
|
||||
else:
|
||||
mesaCreateParams.insert(pair[0], pair[1])
|
||||
cgen.funcCall(
|
||||
createCallLhs,
|
||||
"(%s *)vk_%s_create" % (objectType, typeNameToBaseName(createParam.typeName)),
|
||||
mesaCreateParams
|
||||
)
|
||||
|
||||
# Alloc/create gfxstream_vk_* object
|
||||
def genCreateGfxstreamObjects():
|
||||
createParam = getCreateParam(api)
|
||||
if not createParam:
|
||||
return False
|
||||
if not handleTranslationRequired(createParam.typeName):
|
||||
return False
|
||||
objectType = "struct %s" % typeNameToObjectType(createParam.typeName)
|
||||
callLhs = "%s *%s" % (objectType, paramNameToObjectName(createParam.paramName))
|
||||
if hasMesaVkObject(createParam.typeName):
|
||||
genMesaObjectCreate(callLhs)
|
||||
else:
|
||||
genMesaObjectAlloc(callLhs)
|
||||
|
||||
retVar = api.getRetVarExpr()
|
||||
if retVar:
|
||||
retTypeName = api.getRetTypeExpr()
|
||||
# ex: vkCreateBuffer_VkResult_return = gfxstream_buffer ? VK_SUCCESS : VK_ERROR_OUT_OF_HOST_MEMORY;
|
||||
cgen.stmt("%s = %s ? %s : %s" %
|
||||
(retVar, paramNameToObjectName(createParam.paramName), SUCCESS_VAL[retTypeName][0], "VK_ERROR_OUT_OF_HOST_MEMORY"))
|
||||
return True
|
||||
|
||||
def genVkFromHandle(param, fromName):
|
||||
objectName = paramNameToObjectName(param.paramName)
|
||||
cgen.stmt("VK_FROM_HANDLE(%s, %s, %s)" %
|
||||
(typeNameToObjectType(param.typeName), objectName, fromName))
|
||||
return objectName
|
||||
|
||||
def genGetGfxstreamHandles():
|
||||
createParam = getCreateParam(api)
|
||||
for param in api.parameters:
|
||||
if not handleTranslationRequired(param.typeName):
|
||||
continue
|
||||
elif isArrayParam(param):
|
||||
continue
|
||||
elif param != createParam:
|
||||
if param.pointerIndirectionLevels > 0:
|
||||
print("ERROR: Unhandled pointerIndirectionLevels > 1 for API %s (param %s)" % (api.name, param.paramName))
|
||||
raise
|
||||
genVkFromHandle(param, param.paramName)
|
||||
|
||||
def internalNestedParamName(param):
|
||||
parentName = ""
|
||||
if param.parent:
|
||||
parentName = "_%s" % param.parent.typeName
|
||||
return "internal%s_%s" % (parentName, param.paramName)
|
||||
|
||||
def genInternalArrayDeclarations(param, countParamName, nestLevel=0):
|
||||
internalArray = None
|
||||
if 0 == nestLevel:
|
||||
internalArray = "internal_%s" % param.paramName
|
||||
cgen.stmt("std::vector<%s> %s(%s)" % (param.typeName, internalArray, countParamName))
|
||||
elif 1 == nestLevel or 2 == nestLevel:
|
||||
internalArray = internalNestedParamName(param)
|
||||
if isArrayParam(param):
|
||||
cgen.stmt("std::vector<std::vector<%s>> %s" % (param.typeName, internalArray))
|
||||
else:
|
||||
cgen.stmt("std::vector<%s> %s" % (param.typeName, internalArray))
|
||||
else:
|
||||
print("ERROR: nestLevel > 2 not verified.")
|
||||
raise
|
||||
if isCompoundType(param.typeName):
|
||||
for member in typeInfo.structs[param.typeName].members:
|
||||
if translationRequired(member.typeName):
|
||||
if handleTranslationRequired(member.typeName) and not isArrayParam(member):
|
||||
# No declarations for non-array handleType
|
||||
continue
|
||||
genInternalArrayDeclarations(member, countParamName, nestLevel + 1)
|
||||
return internalArray
|
||||
|
||||
def genInternalCompoundType(param, outName, inName, currLoopVar):
|
||||
nextLoopVar = None
|
||||
cgen.stmt("%s = %s" % (outName, inName))
|
||||
for member in typeInfo.structs[param.typeName].members:
|
||||
if not translationRequired(member.typeName):
|
||||
continue
|
||||
cgen.line("/* %s::%s */" % (param.typeName, member.paramName))
|
||||
nestedOutName = ("%s[%s]" % (internalNestedParamName(member), currLoopVar))
|
||||
if isArrayParam(member):
|
||||
countParamName = "%s.%s" % (outName, member.attribs["len"])
|
||||
inArrayName = "%s.%s" % (outName, member.paramName)
|
||||
cgen.stmt("%s.push_back(std::vector<%s>())" % (internalNestedParamName(member), member.typeName))
|
||||
if member.typeName in TYPES_TRANSFORM_LIST_METHOD:
|
||||
# Use the corresponding transformList call
|
||||
cgen.funcCall(nestedOutName, transformListFuncName(member.typeName), [inArrayName, countParamName])
|
||||
cgen.stmt("%s = %s.data()" % (inArrayName, nestedOutName))
|
||||
cgen.stmt("%s = %s.size()" % (countParamName, nestedOutName))
|
||||
else:
|
||||
# Standard translation
|
||||
cgen.stmt("%s.reserve(%s)" % (nestedOutName, countParamName))
|
||||
cgen.stmt("memset(&%s[0], 0, sizeof(%s) * %s)" % (nestedOutName, member.typeName, countParamName))
|
||||
if not nextLoopVar:
|
||||
nextLoopVar = getNextLoopVar()
|
||||
internalArray = genInternalArray(member, countParamName, nestedOutName, inArrayName, nextLoopVar)
|
||||
cgen.stmt("%s = %s" %(inArrayName, internalArray))
|
||||
elif isCompoundType(member.typeName):
|
||||
memberFullName = "%s.%s" % (outName, member.paramName)
|
||||
if 1 == member.pointerIndirectionLevels:
|
||||
cgen.beginIf(memberFullName)
|
||||
inParamName = "%s[0]" % memberFullName
|
||||
genInternalCompoundType(member, nestedOutName, inParamName, currLoopVar)
|
||||
cgen.stmt("%s.%s = &%s" % (outName, member.paramName, nestedOutName))
|
||||
else:
|
||||
cgen.beginBlock()
|
||||
genInternalCompoundType(member, nestedOutName, memberFullName, currLoopVar)
|
||||
cgen.stmt("%s.%s = %s" % (outName, member.paramName, nestedOutName))
|
||||
cgen.endBlock()
|
||||
else:
|
||||
# Replace member with internal object
|
||||
replaceName = "%s.%s" % (outName, member.paramName)
|
||||
if member.isOptional:
|
||||
cgen.beginIf(replaceName)
|
||||
gfxstreamObject = genVkFromHandle(member, replaceName)
|
||||
cgen.stmt("%s = %s->%s" % (replaceName, gfxstreamObject, INTERNAL_OBJECT_NAME))
|
||||
if member.isOptional:
|
||||
cgen.endIf()
|
||||
|
||||
def genInternalArray(param, countParamName, outArrayName, inArrayName, loopVar):
|
||||
cgen.beginFor("uint32_t %s = 0" % loopVar, "%s < %s" % (loopVar, countParamName), "++%s" % loopVar)
|
||||
if param.isOptional:
|
||||
cgen.beginIf(inArrayName)
|
||||
if isCompoundType(param.typeName):
|
||||
genInternalCompoundType(param, ("%s[%s]" % (outArrayName, loopVar)), "%s[%s]" % (inArrayName, loopVar), loopVar)
|
||||
else:
|
||||
gfxstreamObject = genVkFromHandle(param, "%s[%s]" % (inArrayName, loopVar))
|
||||
cgen.stmt("%s[%s] = %s->%s" % (outArrayName, loopVar, gfxstreamObject, INTERNAL_OBJECT_NAME))
|
||||
if param.isOptional:
|
||||
cgen.endIf()
|
||||
cgen.endFor()
|
||||
return "%s.data()" % outArrayName
|
||||
|
||||
# Translate params into params needed for gfxstream-internal
|
||||
# encoder/resource-tracker calls
|
||||
def getEncoderOrResourceTrackerParams():
|
||||
createParam = getCreateParam(api)
|
||||
outParams = copy.deepcopy(api.parameters)
|
||||
nextLoopVar = getNextLoopVar()
|
||||
for param in outParams:
|
||||
if not translationRequired(param.typeName):
|
||||
continue
|
||||
elif isArrayParam(param) or isCompoundType(param.typeName):
|
||||
if param.possiblyOutput():
|
||||
print("ERROR: Unhandled CompoundType / Array output for API %s (param %s)" % (api.name, param.paramName))
|
||||
raise
|
||||
if 1 != param.pointerIndirectionLevels or not param.isConst:
|
||||
print("ERROR: Compound type / array input is not 'const <type>*' (API: %s, paramName: %s)" % (api.name, param.paramName))
|
||||
raise
|
||||
countParamName = "1"
|
||||
if "len" in param.attribs:
|
||||
countParamName = param.attribs["len"]
|
||||
internalArrayName = genInternalArrayDeclarations(param, countParamName)
|
||||
param.paramName = genInternalArray(param, countParamName, internalArrayName, param.paramName, nextLoopVar)
|
||||
elif 0 == param.pointerIndirectionLevels:
|
||||
if param.isOptional:
|
||||
param.paramName = ("%s ? %s->%s : VK_NULL_HANDLE" % (paramNameToObjectName(param.paramName), paramNameToObjectName(param.paramName), INTERNAL_OBJECT_NAME))
|
||||
else:
|
||||
param.paramName = ("%s->%s" % (paramNameToObjectName(param.paramName), INTERNAL_OBJECT_NAME))
|
||||
elif createParam and param.paramName == createParam.paramName:
|
||||
param.paramName = ("&%s->%s" % (paramNameToObjectName(param.paramName), INTERNAL_OBJECT_NAME))
|
||||
else:
|
||||
print("ERROR: Unknown handling for param: %s (API: %s)" % (param, api.name))
|
||||
raise
|
||||
return outParams
|
||||
|
||||
def genEncoderOrResourceTrackerCall(declareResources=True):
|
||||
if is_cmdbuf_dispatch(api):
|
||||
cgen.stmt("auto vkEnc = gfxstream::vk::ResourceTracker::getCommandBufferEncoder(%s->%s)" % (paramNameToObjectName(api.parameters[0].paramName), INTERNAL_OBJECT_NAME))
|
||||
elif is_queue_dispatch(api):
|
||||
cgen.stmt("auto vkEnc = gfxstream::vk::ResourceTracker::getQueueEncoder(%s->%s)" % (paramNameToObjectName(api.parameters[0].paramName), INTERNAL_OBJECT_NAME))
|
||||
else:
|
||||
cgen.stmt("auto vkEnc = gfxstream::vk::ResourceTracker::getThreadLocalEncoder()")
|
||||
callLhs = None
|
||||
retTypeName = api.getRetTypeExpr()
|
||||
if retTypeName != "void":
|
||||
retVar = api.getRetVarExpr()
|
||||
cgen.stmt("%s %s = (%s)0" % (retTypeName, retVar, retTypeName))
|
||||
callLhs = retVar
|
||||
callLhs = api.getRetVarExpr()
|
||||
|
||||
# Get parameter list modded for gfxstream-internal call
|
||||
parameters = getEncoderOrResourceTrackerParams()
|
||||
if name in RESOURCE_TRACKER_ENTRIES:
|
||||
if declareResources:
|
||||
cgen.stmt("auto resources = ResourceTracker::get()")
|
||||
cgen.stmt("auto resources = gfxstream::vk::ResourceTracker::get()")
|
||||
cgen.funcCall(
|
||||
callLhs, "resources->" + "on_" + api.name,
|
||||
["vkEnc"] + SUCCESS_VAL.get(retTypeName, []) + \
|
||||
[p.paramName for p in api.parameters])
|
||||
[p.paramName for p in parameters])
|
||||
else:
|
||||
cgen.funcCall(
|
||||
callLhs, "vkEnc->" + api.name, [p.paramName for p in api.parameters] + ["true /* do lock */"])
|
||||
callLhs, "vkEnc->" + api.name, [p.paramName for p in parameters] + ["true /* do lock */"])
|
||||
|
||||
if name in POSTPROCESSES:
|
||||
cgen.line(POSTPROCESSES[name])
|
||||
def genReturnExpression():
|
||||
retTypeName = api.getRetTypeExpr()
|
||||
# Set the createParam output, if applicable
|
||||
createParam = getCreateParam(api)
|
||||
if createParam and handleTranslationRequired(createParam.typeName):
|
||||
if 1 != createParam.pointerIndirectionLevels:
|
||||
print("ERROR: Unhandled pointerIndirectionLevels != 1 in return for API %s (createParam %s)" % api.name, createParam.paramName)
|
||||
raise
|
||||
# ex: *pBuffer = gfxstream_vk_buffer_to_handle(gfxstream_buffer)
|
||||
cgen.funcCall(
|
||||
"*%s" % createParam.paramName,
|
||||
"%s_to_handle" % typeNameToObjectType(createParam.typeName),
|
||||
[paramNameToObjectName(createParam.paramName)]
|
||||
)
|
||||
|
||||
if retTypeName != "void":
|
||||
cgen.stmt("return %s" % retVar)
|
||||
cgen.stmt("return %s" % api.getRetVarExpr())
|
||||
|
||||
|
||||
api_entry = api.withModifiedName("entry_" + api.name)
|
||||
|
||||
cgen.line("static " + self.cgen.makeFuncProto(api_entry))
|
||||
cgen.beginBlock()
|
||||
genEncoderOrResourceTrackerCall(cgen, api)
|
||||
cgen.endBlock()
|
||||
|
||||
if self.isDeviceDispatch(api) and self.feature != "VK_VERSION_1_0":
|
||||
api_entry_dyn_check = api.withModifiedName("dynCheck_entry_" + api.name)
|
||||
cgen.line("static " + self.cgen.makeFuncProto(api_entry_dyn_check))
cgen.beginBlock()
if self.feature == "VK_VERSION_1_3":
    cgen.stmt("auto resources = ResourceTracker::get()")
    if "VkCommandBuffer" == api.parameters[0].typeName:
        cgen.stmt("VkDevice device = resources->getDevice(commandBuffer)")
    cgen.beginIf("resources->getApiVersionFromDevice(device) < VK_API_VERSION_1_3")
    cgen.stmt("sOnInvalidDynamicallyCheckedCall(\"%s\", \"%s\")" % (api.name, self.feature))
    cgen.endIf()
elif self.feature == "VK_VERSION_1_2":
    cgen.stmt("auto resources = ResourceTracker::get()")
    if "VkCommandBuffer" == api.parameters[0].typeName:
        cgen.stmt("VkDevice device = resources->getDevice(commandBuffer)")
    cgen.beginIf("resources->getApiVersionFromDevice(device) < VK_API_VERSION_1_2")
    cgen.stmt("sOnInvalidDynamicallyCheckedCall(\"%s\", \"%s\")" % (api.name, self.feature))
    cgen.endIf()
elif self.feature == "VK_VERSION_1_1":
    cgen.stmt("auto resources = ResourceTracker::get()")
    if "VkCommandBuffer" == api.parameters[0].typeName:
        cgen.stmt("VkDevice device = resources->getDevice(commandBuffer)")
    cgen.beginIf("resources->getApiVersionFromDevice(device) < VK_API_VERSION_1_1")
    cgen.stmt("sOnInvalidDynamicallyCheckedCall(\"%s\", \"%s\")" % (api.name, self.feature))
    cgen.endIf()
elif self.feature != "VK_VERSION_1_0":
    cgen.stmt("auto resources = ResourceTracker::get()")
    if "VkCommandBuffer" == api.parameters[0].typeName:
        cgen.stmt("VkDevice device = resources->getDevice(commandBuffer)")
    cgen.beginIf("!resources->hasDeviceExtension(device, \"%s\")" % self.feature)
    cgen.stmt("sOnInvalidDynamicallyCheckedCall(\"%s\", \"%s\")" % (api.name, self.feature))
    cgen.endIf()

def genGfxstreamEntry(declareResources=True):
    cgen.stmt("AEMU_SCOPED_TRACE(\"%s\")" % api.name)
    # declare returnVar
    retTypeName = api.getRetTypeExpr()
    retVar = api.getRetVarExpr()
    if retVar:
        cgen.stmt("%s %s = (%s)0" % (retTypeName, retVar, retTypeName))
    # Check non-null destroy param for free/destroy calls
    destroyParam = getDestroyParam(api)
    if destroyParam:
        cgen.beginIf("VK_NULL_HANDLE == %s" % destroyParam.paramName)
        if api.getRetTypeExpr() != "void":
            cgen.stmt("return %s" % api.getRetVarExpr())
        else:
            cgen.stmt("return")
        cgen.endIf()
    # Translate handles
    genGetGfxstreamHandles()
    # Translation/creation of objects
    createdObject = genCreateGfxstreamObjects()
    # Make encoder/resource-tracker call
    if retVar and createdObject:
        cgen.beginIf("%s == %s" % (SUCCESS_VAL[retTypeName][0], retVar))
    else:
        print("About to generate a frivolous api!: dynCheck entry: %s" % api.name)
        raise
    genEncoderOrResourceTrackerCall(cgen, api, declareResources=False)
    cgen.beginBlock()
    genEncoderOrResourceTrackerCall()
    cgen.endBlock()
    # Destroy gfxstream objects
    genDestroyGfxstreamObjects()
    # Set output / return variables
    genReturnExpression()

api_entry = api.withModifiedName("gfxstream_vk_" + api.name[2:])
if api.name not in HANDWRITTEN_ENTRY_POINTS:
    cgen.line(self.cgen.makeFuncProto(api_entry))
    cgen.beginBlock()
    genGfxstreamEntry()
    cgen.endBlock()
    self.module.appendImpl(cgen.swapCode())

self.module.appendImpl(cgen.swapCode())
def onEnd(self,):
    getProcAddressDecl = "void* goldfish_vulkan_get_proc_address(const char* name)"
    self.module.appendHeader(getProcAddressDecl + ";\n")
    self.module.appendImpl(getProcAddressDecl)
    self.cgen.beginBlock()

    prevFeature = None
    for e, f in zip(self.entries, self.entryFeatures):
        featureEndif = prevFeature is not None and (f != prevFeature)
        featureif = not featureEndif and (f != prevFeature)

        if featureEndif:
            self.cgen.leftline("#endif")
            self.cgen.leftline("#ifdef %s" % f)

        if featureif:
            self.cgen.leftline("#ifdef %s" % f)

        self.cgen.beginIf("!strcmp(name, \"%s\")" % e.name)
        if e.name in EXCLUDED_APIS:
            self.cgen.stmt("return nullptr")
        elif f == "VK_VERSION_1_3":
            self.cgen.stmt("return nullptr")
        elif f == "VK_VERSION_1_2":
            self.cgen.stmt("return nullptr")
        elif f == "VK_VERSION_1_1":
            self.cgen.stmt("return nullptr")
        elif f != "VK_VERSION_1_0":
            self.cgen.stmt("return nullptr")
        else:
            self.cgen.stmt("return (void*)%s" % ("entry_" + e.name))
        self.cgen.endIf()
        prevFeature = f

    self.cgen.leftline("#endif")

    self.cgen.stmt("return nullptr")
    self.cgen.endBlock()
    self.module.appendImpl(self.cgen.swapCode())
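The `featureEndif` / `featureif` bookkeeping above emits one `#ifdef`/`#endif` pair per run of consecutive entries that share a feature. A standalone sketch of that grouping logic (simplified and illustrative; `emit_guarded` is not a function in the generator):

```python
def emit_guarded(entries):
    """entries: (api_name, feature) pairs in registry order.

    Mirrors the loop above: open a new #ifdef whenever the feature changes,
    closing the previous guard first, and close the last guard at the end."""
    lines = []
    prevFeature = None
    for name, feature in entries:
        featureEndif = prevFeature is not None and (feature != prevFeature)
        featureif = not featureEndif and (feature != prevFeature)
        if featureEndif:
            lines.append("#endif")
            lines.append("#ifdef %s" % feature)
        if featureif:
            lines.append("#ifdef %s" % feature)
        lines.append('if (!strcmp(name, "%s")) return (void*)entry_%s;' % (name, name))
        prevFeature = feature
    lines.append("#endif")
    return lines

out = emit_guarded([("vkCreateDevice", "VK_VERSION_1_0"),
                    ("vkDestroyDevice", "VK_VERSION_1_0"),
                    ("vkCmdSetCullMode", "VK_VERSION_1_3")])
```

Because every run of same-feature entries gets exactly one guard, the emitted `#ifdef` and `#endif` lines always balance.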

    getInstanceProcAddressDecl = "void* goldfish_vulkan_get_instance_proc_address(VkInstance instance, const char* name)"
    self.module.appendHeader(getInstanceProcAddressDecl + ";\n")
    self.module.appendImpl(getInstanceProcAddressDecl)
    self.cgen.beginBlock()

    self.cgen.stmt("auto resources = ResourceTracker::get()")
    self.cgen.stmt("bool has1_1OrHigher = resources->getApiVersionFromInstance(instance) >= VK_API_VERSION_1_1")
    self.cgen.stmt("bool has1_2OrHigher = resources->getApiVersionFromInstance(instance) >= VK_API_VERSION_1_2")
    self.cgen.stmt("bool has1_3OrHigher = resources->getApiVersionFromInstance(instance) >= VK_API_VERSION_1_3")

    prevFeature = None
    for e, f in zip(self.entries, self.entryFeatures):
        featureEndif = prevFeature is not None and (f != prevFeature)
        featureif = not featureEndif and (f != prevFeature)

        if featureEndif:
            self.cgen.leftline("#endif")
            self.cgen.leftline("#ifdef %s" % f)

        if featureif:
            self.cgen.leftline("#ifdef %s" % f)

        self.cgen.beginIf("!strcmp(name, \"%s\")" % e.name)

        entryPointExpr = "(void*)%s" % ("entry_" + e.name)

        if e.name in EXCLUDED_APIS:
            self.cgen.stmt("return nullptr")
        elif f == "VK_VERSION_1_3":
            if self.isDeviceDispatch(e):
                self.cgen.stmt("return (void*)dynCheck_entry_%s" % e.name)
            else:
                self.cgen.stmt("return has1_3OrHigher ? %s : nullptr" % entryPointExpr)
        elif f == "VK_VERSION_1_2":
            if self.isDeviceDispatch(e):
                self.cgen.stmt("return (void*)dynCheck_entry_%s" % e.name)
            else:
                self.cgen.stmt("return has1_2OrHigher ? %s : nullptr" % entryPointExpr)
        elif f == "VK_VERSION_1_1":
            if self.isDeviceDispatch(e):
                self.cgen.stmt("return (void*)dynCheck_entry_%s" % e.name)
            else:
                self.cgen.stmt("return has1_1OrHigher ? %s : nullptr" % entryPointExpr)
        elif f != "VK_VERSION_1_0":
            entryNeedsInstanceExtensionCheck = self.cmdToFeatureType[e.name] == "instance"

            entryPrefix = "dynCheck_" if self.isDeviceDispatch(e) else ""
            entryPointExpr = "(void*)%sentry_%s" % (entryPrefix, e.name)

            if entryNeedsInstanceExtensionCheck:
                self.cgen.stmt("bool hasExt = resources->hasInstanceExtension(instance, \"%s\")" % f)
                self.cgen.stmt("return hasExt ? %s : nullptr" % entryPointExpr)
            else:
                # TODO(b/236246382): We need to check the device extension support here.
                self.cgen.stmt("// TODO(b/236246382): Check support for device extension")
                self.cgen.stmt("return %s" % entryPointExpr)
        else:
            self.cgen.stmt("return %s" % entryPointExpr)
        self.cgen.endIf()
        prevFeature = f

    self.cgen.leftline("#endif")

    self.cgen.stmt("return nullptr")
    self.cgen.endBlock()
    self.module.appendImpl(self.cgen.swapCode())

    getDeviceProcAddressDecl = "void* goldfish_vulkan_get_device_proc_address(VkDevice device, const char* name)"
    self.module.appendHeader(getDeviceProcAddressDecl + ";\n")
    self.module.appendImpl(getDeviceProcAddressDecl)
    self.cgen.beginBlock()

    self.cgen.stmt("auto resources = ResourceTracker::get()")
    self.cgen.stmt("bool has1_1OrHigher = resources->getApiVersionFromDevice(device) >= VK_API_VERSION_1_1")
    self.cgen.stmt("bool has1_2OrHigher = resources->getApiVersionFromDevice(device) >= VK_API_VERSION_1_2")
    self.cgen.stmt("bool has1_3OrHigher = resources->getApiVersionFromDevice(device) >= VK_API_VERSION_1_3")
    prevFeature = None
    for e, f in zip(self.entries, self.entryFeatures):
        featureEndif = prevFeature is not None and (f != prevFeature)
        featureif = not featureEndif and (f != prevFeature)

        if featureEndif:
            self.cgen.leftline("#endif")
            self.cgen.leftline("#ifdef %s" % f)

        if featureif:
            self.cgen.leftline("#ifdef %s" % f)

        self.cgen.beginIf("!strcmp(name, \"%s\")" % e.name)

        entryPointExpr = "(void*)%s" % ("entry_" + e.name)

        if e.name in EXCLUDED_APIS:
            self.cgen.stmt("return nullptr")
        elif f == "VK_VERSION_1_3":
            self.cgen.stmt("return has1_3OrHigher ? %s : nullptr" % entryPointExpr)
        elif f == "VK_VERSION_1_2":
            self.cgen.stmt("return has1_2OrHigher ? %s : nullptr" % entryPointExpr)
        elif f == "VK_VERSION_1_1":
            self.cgen.stmt("return has1_1OrHigher ? %s : nullptr" % entryPointExpr)
        elif f != "VK_VERSION_1_0":
            self.cgen.stmt("bool hasExt = resources->hasDeviceExtension(device, \"%s\")" % f)
            self.cgen.stmt("return hasExt ? %s : nullptr" % entryPointExpr)
        else:
            self.cgen.stmt("return %s" % entryPointExpr)
        self.cgen.endIf()
        prevFeature = f

    self.cgen.leftline("#endif")

    self.cgen.stmt("return nullptr")
    self.cgen.endBlock()

    self.module.appendImpl(self.cgen.swapCode())
    pass

def isDeviceDispatch(self, api):
    # TODO(230793667): improve the heuristic and just use "cmdToFeatureType"
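Taken together, the generated `goldfish_vulkan_get_device_proc_address` resolves a name with a fixed precedence: excluded APIs yield `nullptr`, core-promoted entries are gated on the device's API version, extension entries on `hasDeviceExtension`, and 1.0 entries resolve unconditionally. A Python model of that decision tree (the function and its arguments are illustrative, not part of the generator):

```python
CORE_MINOR = {"VK_VERSION_1_1": 1, "VK_VERSION_1_2": 2, "VK_VERSION_1_3": 3}

def resolve_device_entry(name, feature, excluded, device_minor, device_exts):
    """Return the entry-point symbol name, or None for the generated 'return nullptr'."""
    if name in excluded:
        return None
    if feature in CORE_MINOR:
        # Core-promoted command: gated on the device's negotiated API version.
        return "entry_" + name if device_minor >= CORE_MINOR[feature] else None
    if feature != "VK_VERSION_1_0":
        # Extension command: gated on the device reporting the extension.
        return "entry_" + name if feature in device_exts else None
    return "entry_" + name  # 1.0 command: always resolvable.
```

The instance-level lookup follows the same shape, except that device-dispatch entries resolve to `dynCheck_entry_*` trampolines so the version check can happen per-device at call time.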
@@ -77,6 +77,7 @@ SUPPORTED_FEATURES = [
    "VK_KHR_create_renderpass2",
    "VK_KHR_imageless_framebuffer",
    "VK_KHR_descriptor_update_template",
    "VK_EXT_depth_clip_enable",
    # see aosp/2736079 + b/268351352
    "VK_EXT_swapchain_maintenance1",
+   "VK_KHR_maintenance5",

@@ -128,6 +129,8 @@ SUPPORTED_FEATURES = [
    "VK_EXT_graphics_pipeline_library",
    # Used by guest ANGLE
    "VK_EXT_vertex_attribute_divisor",
+   # QNX
+   "VK_QNX_external_memory_screen_buffer",
]

HOST_MODULES = ["goldfish_vk_extension_structs", "goldfish_vk_marshaling",

@@ -151,6 +154,7 @@ SUPPORTED_MODULES = {
    "VK_KHR_external_semaphore_win32" : ["goldfish_vk_dispatch"],
    "VK_KHR_external_memory_win32" : ["goldfish_vk_dispatch"],
    "VK_KHR_external_memory_fd": ["goldfish_vk_dispatch"],
+   "VK_QNX_external_memory_screen_buffer": ["goldfish_vk_dispatch"],
    "VK_ANDROID_external_memory_android_hardware_buffer": ["func_table"],
    "VK_KHR_android_surface": ["func_table"],
    "VK_EXT_swapchain_maintenance1" : HOST_MODULES,

@@ -343,6 +347,8 @@ class IOStream;
#include "VkEncoder.h"
#include "../OpenglSystemCommon/HostConnection.h"
#include "ResourceTracker.h"
+#include "gfxstream_vk_entrypoints.h"
+#include "gfxstream_vk_private.h"

#include "goldfish_vk_private_defs.h"

@@ -603,7 +609,8 @@ class BumpPool;
        suppressVulkanHeaders=True,
        extraHeader=createVkExtensionStructureTypePreamble('VK_GOOGLE_GFXSTREAM'))

-       self.addGuestEncoderModule("func_table", extraImpl=functableImplInclude)
+       self.addGuestEncoderModule("func_table", extraImpl=functableImplInclude, implOnly = True,
+                                  useNamespace = False)

        self.addCppModule("common", "goldfish_vk_extension_structs",
                          extraHeader=extensionStructsInclude)

@@ -695,13 +702,13 @@ class BumpPool;

    def addGuestEncoderModule(
            self, basename, extraHeader="", extraImpl="", useNamespace=True, headerOnly=False,
-           suppressFeatureGuards=False, moduleName=None, suppressVulkanHeaders=False):
+           suppressFeatureGuards=False, moduleName=None, suppressVulkanHeaders=False, implOnly=False):
        if not os.path.exists(self.guest_abs_encoder_destination):
            print("Path [%s] not found (guest encoder path), skipping" % self.guest_abs_encoder_destination)
            return
        self.addCppModule(self.guest_encoder_tag, basename, extraHeader=extraHeader,
                          extraImpl=extraImpl, customAbsDir=self.guest_abs_encoder_destination,
-                         useNamespace=useNamespace, headerOnly=headerOnly,
+                         useNamespace=useNamespace, implOnly=implOnly, headerOnly=headerOnly,
                          suppressFeatureGuards=suppressFeatureGuards, moduleName=moduleName,
                          suppressVulkanHeaders=suppressVulkanHeaders)
@@ -54,8 +54,7 @@ class FuchsiaVirtGpuDevice : public VirtGpuDevice {
    struct VirtGpuCaps getCaps(void) override;

    VirtGpuBlobPtr createBlob(const struct VirtGpuCreateBlob& blobCreate) override;
-   VirtGpuBlobPtr createPipeBlob(uint32_t size) override;
-   VirtGpuBlobPtr createPipeTexture2D(uint32_t width, uint32_t height, uint32_t format) override;
+   VirtGpuBlobPtr createVirglBlob(uint32_t width, uint32_t height, uint32_t format) override;
    VirtGpuBlobPtr importBlob(const struct VirtGpuExternalHandle& handle) override;

    int execBuffer(struct VirtGpuExecBuffer& execbuffer, VirtGpuBlobPtr blob) override;

@@ -27,18 +27,13 @@ int64_t FuchsiaVirtGpuDevice::getDeviceHandle(void) {
    return 0;
}

-VirtGpuBlobPtr FuchsiaVirtGpuDevice::createPipeBlob(uint32_t size) {
-    ALOGE("%s: unimplemented", __func__);
-    return nullptr;
-}
-
VirtGpuBlobPtr FuchsiaVirtGpuDevice::createBlob(const struct VirtGpuCreateBlob& blobCreate) {
    ALOGE("%s: unimplemented", __func__);
    return nullptr;
}

-VirtGpuBlobPtr FuchsiaVirtGpuDevice::createPipeTexture2D(uint32_t width, uint32_t height,
-                                                         uint32_t format) {
+VirtGpuBlobPtr FuchsiaVirtGpuDevice::createVirglBlob(uint32_t width, uint32_t height,
+                                                     uint32_t virglFormat) {
    ALOGE("%s: unimplemented", __func__);
    return nullptr;
}
@@ -21,6 +21,18 @@
#include "virtgpu_gfxstream_protocol.h"

+// See virgl_hw.h and p_defines.h
+#define VIRGL_FORMAT_R8_UNORM 64
+#define VIRGL_FORMAT_B8G8R8A8_UNORM 1
+#define VIRGL_FORMAT_B5G6R5_UNORM 7
+#define VIRGL_FORMAT_R8G8B8_UNORM 66
+#define VIRGL_FORMAT_R8G8B8A8_UNORM 67
+
+#define VIRGL_BIND_RENDER_TARGET (1 << 1)
+#define VIRGL_BIND_CUSTOM (1 << 17)
+#define PIPE_BUFFER 0
+#define PIPE_TEXTURE_2D 2

enum VirtGpuParamId : uint32_t {
    kParam3D = 0,
    kParamCapsetFix = 1,

@@ -157,8 +169,7 @@ class VirtGpuDevice {
    virtual struct VirtGpuCaps getCaps(void) = 0;

    virtual VirtGpuBlobPtr createBlob(const struct VirtGpuCreateBlob& blobCreate) = 0;
-   virtual VirtGpuBlobPtr createPipeBlob(uint32_t size) = 0;
-   virtual VirtGpuBlobPtr createPipeTexture2D(uint32_t width, uint32_t height, uint32_t format) = 0;
+   virtual VirtGpuBlobPtr createVirglBlob(uint32_t width, uint32_t height, uint32_t virglFormat) = 0;
    virtual VirtGpuBlobPtr importBlob(const struct VirtGpuExternalHandle& handle) = 0;

    virtual int execBuffer(struct VirtGpuExecBuffer& execbuffer, VirtGpuBlobPtr blob) = 0;
@@ -35,16 +35,10 @@ int LinuxSyncHelper::wait(int syncFd, int timeoutMilliseconds) {
#endif
}

-int LinuxSyncHelper::dup(int syncFd) {
-    return ::dup(syncFd);
-}
+int LinuxSyncHelper::dup(int syncFd) { return ::dup(syncFd); }

-int LinuxSyncHelper::close(int syncFd) {
-    return ::close(syncFd);
-}
+int LinuxSyncHelper::close(int syncFd) { return ::close(syncFd); }

-SyncHelper* createPlatformSyncHelper() {
-    return new LinuxSyncHelper();
-}
+SyncHelper* createPlatformSyncHelper() { return new LinuxSyncHelper(); }

}  // namespace gfxstream

@@ -19,8 +19,9 @@
#include "VirtGpu.h"

class LinuxVirtGpuBlob : public std::enable_shared_from_this<LinuxVirtGpuBlob>, public VirtGpuBlob {
   public:
-   LinuxVirtGpuBlob(int64_t deviceHandle, uint32_t blobHandle, uint32_t resourceHandle, uint64_t size);
+   LinuxVirtGpuBlob(int64_t deviceHandle, uint32_t blobHandle, uint32_t resourceHandle,
+                    uint64_t size);
    ~LinuxVirtGpuBlob();

    uint32_t getResourceHandle(void) override;

@@ -33,7 +34,7 @@ class LinuxVirtGpuBlob : public std::enable_shared_from_this<LinuxVirtGpuBlob>,
    int transferFromHost(uint32_t offset, uint32_t size) override;
    int transferToHost(uint32_t offset, uint32_t size) override;

   private:
    // Not owned. Really should use a ScopedFD for this, but doesn't matter since we have a
    // singleton device implementation anyways.
    int64_t mDeviceHandle;

@@ -44,20 +45,20 @@ class LinuxVirtGpuBlob : public std::enable_shared_from_this<LinuxVirtGpuBlob>,
};

class LinuxVirtGpuBlobMapping : public VirtGpuBlobMapping {
   public:
    LinuxVirtGpuBlobMapping(VirtGpuBlobPtr blob, uint8_t* ptr, uint64_t size);
    ~LinuxVirtGpuBlobMapping(void);

    uint8_t* asRawPtr(void) override;

   private:
    VirtGpuBlobPtr mBlob;
    uint8_t* mPtr;
    uint64_t mSize;
};

class LinuxVirtGpuDevice : public VirtGpuDevice {
   public:
    LinuxVirtGpuDevice(enum VirtGpuCapset capset, int fd = -1);
    virtual ~LinuxVirtGpuDevice();

@@ -66,14 +67,12 @@ class LinuxVirtGpuDevice : public VirtGpuDevice {
    virtual struct VirtGpuCaps getCaps(void);

    VirtGpuBlobPtr createBlob(const struct VirtGpuCreateBlob& blobCreate) override;
-   VirtGpuBlobPtr createPipeBlob(uint32_t size) override;
-   VirtGpuBlobPtr createPipeTexture2D(uint32_t width, uint32_t height, uint32_t format) override;
+   VirtGpuBlobPtr createVirglBlob(uint32_t width, uint32_t height, uint32_t virglFormat) override;

    virtual VirtGpuBlobPtr importBlob(const struct VirtGpuExternalHandle& handle);

    virtual int execBuffer(struct VirtGpuExecBuffer& execbuffer, VirtGpuBlobPtr blob);

   private:
    int64_t mDeviceHandle;
    struct VirtGpuCaps mCaps;
};
@@ -14,20 +14,20 @@
 * limitations under the License.
 */

-#include <cutils/log.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <xf86drm.h>

#include <cerrno>
#include <cstring>

+#include <cutils/log.h>

#include "LinuxVirtGpu.h"
#include "virtgpu_drm.h"

-LinuxVirtGpuBlob::LinuxVirtGpuBlob(int64_t deviceHandle, uint32_t blobHandle, uint32_t resourceHandle,
-                                   uint64_t size)
+LinuxVirtGpuBlob::LinuxVirtGpuBlob(int64_t deviceHandle, uint32_t blobHandle,
+                                   uint32_t resourceHandle, uint64_t size)
    : mDeviceHandle(deviceHandle),
      mBlobHandle(blobHandle),
      mResourceHandle(resourceHandle),

@@ -45,13 +45,9 @@ LinuxVirtGpuBlob::~LinuxVirtGpuBlob(void) {
    }
}

-uint32_t LinuxVirtGpuBlob::getBlobHandle(void) {
-    return mBlobHandle;
-}
+uint32_t LinuxVirtGpuBlob::getBlobHandle(void) { return mBlobHandle; }

-uint32_t LinuxVirtGpuBlob::getResourceHandle(void) {
-    return mResourceHandle;
-}
+uint32_t LinuxVirtGpuBlob::getResourceHandle(void) { return mResourceHandle; }

VirtGpuBlobMappingPtr LinuxVirtGpuBlob::createMapping(void) {
    int ret;

@@ -66,7 +62,7 @@ VirtGpuBlobMappingPtr LinuxVirtGpuBlob::createMapping(void) {
    }

    uint8_t* ptr = static_cast<uint8_t*>(
        mmap64(nullptr, mSize, PROT_WRITE | PROT_READ, MAP_SHARED, mDeviceHandle, map.offset));

    if (ptr == MAP_FAILED) {
        ALOGE("mmap64 failed with (%s)", strerror(errno));

@@ -79,7 +75,8 @@ VirtGpuBlobMappingPtr LinuxVirtGpuBlob::createMapping(void) {
int LinuxVirtGpuBlob::exportBlob(struct VirtGpuExternalHandle& handle) {
    int ret, fd;

-   ret = drmPrimeHandleToFD(mDeviceHandle, mBlobHandle, DRM_CLOEXEC | DRM_RDWR, &fd);
+   uint32_t flags = DRM_CLOEXEC;
+   ret = drmPrimeHandleToFD(mDeviceHandle, mBlobHandle, flags, &fd);
    if (ret) {
        ALOGE("drmPrimeHandleToFD failed with %s", strerror(errno));
        return ret;

@@ -150,4 +147,4 @@ int LinuxVirtGpuBlob::transferFromHost(uint32_t offset, uint32_t size) {
    }

    return 0;
}
@@ -21,10 +21,6 @@
LinuxVirtGpuBlobMapping::LinuxVirtGpuBlobMapping(VirtGpuBlobPtr blob, uint8_t* ptr, uint64_t size)
    : mBlob(blob), mPtr(ptr), mSize(size) {}

-LinuxVirtGpuBlobMapping::~LinuxVirtGpuBlobMapping(void) {
-    munmap(mPtr, mSize);
-}
+LinuxVirtGpuBlobMapping::~LinuxVirtGpuBlobMapping(void) { munmap(mPtr, mSize); }

-uint8_t* LinuxVirtGpuBlobMapping::asRawPtr(void) {
-    return mPtr;
-}
+uint8_t* LinuxVirtGpuBlobMapping::asRawPtr(void) { return mPtr; }
@@ -34,10 +34,14 @@
#define PARAM(x) \
    (struct VirtGpuParam) { x, #x, 0 }

-// See virgl_hw.h and p_defines.h
-#define VIRGL_FORMAT_R8_UNORM 64
-#define VIRGL_BIND_CUSTOM (1 << 17)
-#define PIPE_BUFFER 0
+#if defined(PAGE_SIZE) && defined(VIRTIO_GPU)
+constexpr size_t kPageSize = PAGE_SIZE;
+#else
+#include <unistd.h>
+static const size_t kPageSize = getpagesize();
+#endif
+
+static inline uint32_t align_up(uint32_t n, uint32_t a) { return ((n + a - 1) / a) * a; }

LinuxVirtGpuDevice::LinuxVirtGpuDevice(enum VirtGpuCapset capset, int fd) : VirtGpuDevice(capset) {
    struct VirtGpuParam params[] = {

@@ -52,7 +56,7 @@ LinuxVirtGpuDevice::LinuxVirtGpuDevice(enum VirtGpuCapset capset, int fd) : Virt
    struct drm_virtgpu_get_caps get_caps = {0};
    struct drm_virtgpu_context_init init = {0};
    struct drm_virtgpu_context_set_param ctx_set_params[3] = {{0}};
-   const char *processName = nullptr;
+   const char* processName = nullptr;

    memset(&mCaps, 0, sizeof(struct VirtGpuCaps));

@@ -142,31 +146,51 @@ LinuxVirtGpuDevice::LinuxVirtGpuDevice(enum VirtGpuCapset capset, int fd) : Virt
    ret = drmIoctl(mDeviceHandle, DRM_IOCTL_VIRTGPU_CONTEXT_INIT, &init);
    if (ret) {
        ALOGE("DRM_IOCTL_VIRTGPU_CONTEXT_INIT failed with %s, continuing without context...",
              strerror(errno));
    }
}

-LinuxVirtGpuDevice::~LinuxVirtGpuDevice() {
-    close(mDeviceHandle);
-}
+LinuxVirtGpuDevice::~LinuxVirtGpuDevice() { close(mDeviceHandle); }

struct VirtGpuCaps LinuxVirtGpuDevice::getCaps(void) { return mCaps; }

-int64_t LinuxVirtGpuDevice::getDeviceHandle(void) {
-    return mDeviceHandle;
-}
+int64_t LinuxVirtGpuDevice::getDeviceHandle(void) { return mDeviceHandle; }

+VirtGpuBlobPtr LinuxVirtGpuDevice::createVirglBlob(uint32_t width, uint32_t height,
+                                                   uint32_t virglFormat) {
+    uint32_t target = 0;
+    uint32_t bind = 0;
+    uint32_t bpp = 0;
+
+    switch (virglFormat) {
+        case VIRGL_FORMAT_R8G8B8A8_UNORM:
+        case VIRGL_FORMAT_B8G8R8A8_UNORM:
+            target = PIPE_TEXTURE_2D;
+            bind = VIRGL_BIND_RENDER_TARGET;
+            bpp = 4;
+            break;
+        case VIRGL_FORMAT_R8_UNORM:
+            target = PIPE_BUFFER;
+            bind = VIRGL_BIND_CUSTOM;
+            bpp = 1;
+            break;
+        default:
+            ALOGE("Unknown virgl format");
+            return nullptr;
+    }

-VirtGpuBlobPtr LinuxVirtGpuDevice::createPipeBlob(uint32_t size) {
    drm_virtgpu_resource_create create = {
-       .target = PIPE_BUFFER,
-       .format = VIRGL_FORMAT_R8_UNORM,
-       .bind = VIRGL_BIND_CUSTOM,
-       .width = size,
-       .height = 1U,
-       .depth = 1U,
-       .array_size = 0U,
-       .size = size,
-       .stride = size,
+       .target = target,
+       .format = virglFormat,
+       .bind = bind,
+       .width = width,
+       .height = height,
+       .depth = 1U,
+       .array_size = 1U,
+       .last_level = 0,
+       .nr_samples = 0,
+       .size = width * height * bpp,
+       .stride = width * bpp,
    };

    int ret = drmIoctl(mDeviceHandle, DRM_IOCTL_VIRTGPU_RESOURCE_CREATE, &create);

@@ -176,7 +200,7 @@ VirtGpuBlobPtr LinuxVirtGpuDevice::createPipeBlob(uint32_t size) {
    }

    return std::make_shared<LinuxVirtGpuBlob>(mDeviceHandle, create.bo_handle, create.res_handle,
-                                             static_cast<uint64_t>(size));
+                                             static_cast<uint64_t>(create.size));
}

VirtGpuBlobPtr LinuxVirtGpuDevice::createBlob(const struct VirtGpuCreateBlob& blobCreate) {

@@ -195,12 +219,7 @@ VirtGpuBlobPtr LinuxVirtGpuDevice::createBlob(const struct VirtGpuCreateBlob& bl
    }

    return std::make_shared<LinuxVirtGpuBlob>(mDeviceHandle, create.bo_handle, create.res_handle,
                                              blobCreate.size);
-}
-
-VirtGpuBlobPtr LinuxVirtGpuDevice::createPipeTexture2D(uint32_t, uint32_t, uint32_t) {
-    ALOGE("Unimplemented LinuxVirtGpuDevice::createPipeTexture2D().");
-    return nullptr;
}

VirtGpuBlobPtr LinuxVirtGpuDevice::importBlob(const struct VirtGpuExternalHandle& handle) {

@@ -223,7 +242,7 @@ VirtGpuBlobPtr LinuxVirtGpuDevice::importBlob(const struct VirtGpuExternalHandle
    }

    return std::make_shared<LinuxVirtGpuBlob>(mDeviceHandle, blobHandle, info.res_handle,
                                              static_cast<uint64_t>(info.size));
}

int LinuxVirtGpuDevice::execBuffer(struct VirtGpuExecBuffer& execbuffer, VirtGpuBlobPtr blob) {

src/gfxstream/guest/vulkan/gfxstream_vk_android.cpp (new file, 68 lines)
@@ -0,0 +1,68 @@
/*
 * Copyright 2023 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

#include <errno.h>
#include <hardware/hardware.h>
#include <hardware/hwvulkan.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <vulkan/vk_icd.h>

#include "gfxstream_vk_entrypoints.h"
#include "util/macros.h"

static int gfxstream_vk_hal_open(const struct hw_module_t* mod, const char* id,
                                 struct hw_device_t** dev);
static int gfxstream_vk_hal_close(struct hw_device_t* dev);

static_assert(HWVULKAN_DISPATCH_MAGIC == ICD_LOADER_MAGIC, "");

hw_module_methods_t gfxstream_vk_hal_ops = {
    .open = gfxstream_vk_hal_open,
};

PUBLIC struct hwvulkan_module_t HAL_MODULE_INFO_SYM = {
    .common =
        {
            .tag = HARDWARE_MODULE_TAG,
            .module_api_version = HWVULKAN_MODULE_API_VERSION_0_1,
            .hal_api_version = HARDWARE_MAKE_API_VERSION(1, 0),
            .id = HWVULKAN_HARDWARE_MODULE_ID,
            .name = "gfxstream Vulkan HAL",
            .author = "Android Open Source Project",
            .methods = &(gfxstream_vk_hal_ops),
        },
};

static int gfxstream_vk_hal_open(const struct hw_module_t* mod, const char* id,
                                 struct hw_device_t** dev) {
    assert(mod == &HAL_MODULE_INFO_SYM.common);
    assert(strcmp(id, HWVULKAN_DEVICE_0) == 0);

    hwvulkan_device_t* hal_dev = (hwvulkan_device_t*)calloc(1, sizeof(*hal_dev));
    if (!hal_dev) return -1;

    *hal_dev = (hwvulkan_device_t){
        .common =
            {
                .tag = HARDWARE_DEVICE_TAG,
                .version = HWVULKAN_DEVICE_API_VERSION_0_1,
                .module = &HAL_MODULE_INFO_SYM.common,
                .close = gfxstream_vk_hal_close,
            },
        .EnumerateInstanceExtensionProperties = gfxstream_vk_EnumerateInstanceExtensionProperties,
        .CreateInstance = gfxstream_vk_CreateInstance,
        .GetInstanceProcAddr = gfxstream_vk_GetInstanceProcAddr,
    };

    *dev = &hal_dev->common;
    return 0;
}

static int gfxstream_vk_hal_close(struct hw_device_t* dev) {
    /* hwvulkan.h claims that hw_device_t::close() is never called. */
    return -1;
}
src/gfxstream/guest/vulkan/gfxstream_vk_cmd.cpp (new file, 188 lines)

@@ -0,0 +1,188 @@
// Copyright (C) 2023 The Android Open Source Project
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "ResourceTracker.h"
#include "VkEncoder.h"
#include "gfxstream_vk_private.h"

VkResult gfxstream_vk_CreateCommandPool(VkDevice device, const VkCommandPoolCreateInfo* pCreateInfo,
                                        const VkAllocationCallbacks* pAllocator,
                                        VkCommandPool* pCommandPool) {
    AEMU_SCOPED_TRACE("vkCreateCommandPool");
    VK_FROM_HANDLE(gfxstream_vk_device, gfxstream_device, device);
    VkResult result = (VkResult)0;
    struct gfxstream_vk_command_pool* gfxstream_pCommandPool =
        (gfxstream_vk_command_pool*)vk_zalloc2(&gfxstream_device->vk.alloc, pAllocator,
                                               sizeof(gfxstream_vk_command_pool), 8,
                                               VK_SYSTEM_ALLOCATION_SCOPE_OBJECT);
    result = gfxstream_pCommandPool ? VK_SUCCESS : VK_ERROR_OUT_OF_HOST_MEMORY;
    if (VK_SUCCESS == result) {
        result = vk_command_pool_init(&gfxstream_device->vk, &gfxstream_pCommandPool->vk,
                                      pCreateInfo, pAllocator);
    }
    if (VK_SUCCESS == result) {
        auto vkEnc = gfxstream::vk::ResourceTracker::getThreadLocalEncoder();
        result = vkEnc->vkCreateCommandPool(gfxstream_device->internal_object, pCreateInfo,
                                            pAllocator, &gfxstream_pCommandPool->internal_object,
                                            true /* do lock */);
    }
    *pCommandPool = gfxstream_vk_command_pool_to_handle(gfxstream_pCommandPool);
    return result;
}

void gfxstream_vk_DestroyCommandPool(VkDevice device, VkCommandPool commandPool,
                                     const VkAllocationCallbacks* pAllocator) {
    AEMU_SCOPED_TRACE("vkDestroyCommandPool");
    if (VK_NULL_HANDLE == commandPool) {
        return;
    }
    VK_FROM_HANDLE(gfxstream_vk_device, gfxstream_device, device);
    VK_FROM_HANDLE(gfxstream_vk_command_pool, gfxstream_commandPool, commandPool);
    {
        auto vkEnc = gfxstream::vk::ResourceTracker::getThreadLocalEncoder();
        vkEnc->vkDestroyCommandPool(gfxstream_device->internal_object,
                                    gfxstream_commandPool->internal_object, pAllocator,
                                    true /* do lock */);
    }
    vk_command_pool_finish(&gfxstream_commandPool->vk);
    vk_free(&gfxstream_commandPool->vk.alloc, gfxstream_commandPool);
}

VkResult gfxstream_vk_ResetCommandPool(VkDevice device, VkCommandPool commandPool,
                                       VkCommandPoolResetFlags flags) {
    AEMU_SCOPED_TRACE("vkResetCommandPool");
    VK_FROM_HANDLE(gfxstream_vk_device, gfxstream_device, device);
    VK_FROM_HANDLE(gfxstream_vk_command_pool, gfxstream_commandPool, commandPool);
    VkResult vkResetCommandPool_VkResult_return = (VkResult)0;
    {
        auto vkEnc = gfxstream::vk::ResourceTracker::getThreadLocalEncoder();
        vkResetCommandPool_VkResult_return = vkEnc->vkResetCommandPool(
            gfxstream_device->internal_object, gfxstream_commandPool->internal_object, flags,
            true /* do lock */);
        if (vkResetCommandPool_VkResult_return == VK_SUCCESS) {
            gfxstream::vk::ResourceTracker::get()->resetCommandPoolStagingInfo(
                gfxstream_commandPool->internal_object);
        }
    }
    return vkResetCommandPool_VkResult_return;
}

static VkResult vk_command_buffer_createOp(struct vk_command_pool*, struct vk_command_buffer**);
static void vk_command_buffer_resetOp(struct vk_command_buffer*, VkCommandBufferResetFlags);
static void vk_command_buffer_destroyOp(struct vk_command_buffer*);

static vk_command_buffer_ops gfxstream_vk_commandBufferOps = {
    .create = vk_command_buffer_createOp,
    .reset = vk_command_buffer_resetOp,
    .destroy = vk_command_buffer_destroyOp};

VkResult vk_command_buffer_createOp(struct vk_command_pool* commandPool,
                                    struct vk_command_buffer** pCommandBuffer) {
    VkResult result = VK_SUCCESS;
    struct gfxstream_vk_command_buffer* gfxstream_commandBuffer =
        (struct gfxstream_vk_command_buffer*)vk_zalloc(&commandPool->alloc,
                                                       sizeof(struct gfxstream_vk_command_buffer),
                                                       8, VK_SYSTEM_ALLOCATION_SCOPE_OBJECT);
    if (gfxstream_commandBuffer) {
        result =
            vk_command_buffer_init(commandPool, &gfxstream_commandBuffer->vk,
                                   &gfxstream_vk_commandBufferOps, VK_COMMAND_BUFFER_LEVEL_PRIMARY);
        if (VK_SUCCESS == result) {
            *pCommandBuffer = &gfxstream_commandBuffer->vk;
        }
    } else {
        result = VK_ERROR_OUT_OF_HOST_MEMORY;
    }
    return result;
}

void vk_command_buffer_resetOp(struct vk_command_buffer* commandBuffer,
                               VkCommandBufferResetFlags flags) {
    (void)flags;
    vk_command_buffer_reset(commandBuffer);
}

void vk_command_buffer_destroyOp(struct vk_command_buffer* commandBuffer) {
    vk_command_buffer_finish(commandBuffer);
    vk_free(&commandBuffer->pool->alloc, commandBuffer);
}

VkResult gfxstream_vk_AllocateCommandBuffers(VkDevice device,
                                             const VkCommandBufferAllocateInfo* pAllocateInfo,
                                             VkCommandBuffer* pCommandBuffers) {
    AEMU_SCOPED_TRACE("vkAllocateCommandBuffers");
    VK_FROM_HANDLE(gfxstream_vk_device, gfxstream_device, device);
    VK_FROM_HANDLE(gfxstream_vk_command_pool, gfxstream_commandPool, pAllocateInfo->commandPool);
    VkResult result = (VkResult)0;
    std::vector<gfxstream_vk_command_buffer*> gfxstream_commandBuffers(
        pAllocateInfo->commandBufferCount);
    for (uint32_t i = 0; i < pAllocateInfo->commandBufferCount; i++) {
        result = vk_command_buffer_createOp(&gfxstream_commandPool->vk,
                                            (vk_command_buffer**)&gfxstream_commandBuffers[i]);
|
||||
if (VK_SUCCESS == result) {
|
||||
gfxstream_commandBuffers[i]->vk.level = pAllocateInfo->level;
|
||||
} else {
|
||||
break;
|
||||
}
|
||||
}
|
||||
if (VK_SUCCESS == result) {
|
||||
// Create gfxstream-internal commandBuffer array
|
||||
std::vector<VkCommandBuffer> internal_objects(pAllocateInfo->commandBufferCount);
|
||||
auto vkEnc = gfxstream::vk::ResourceTracker::getThreadLocalEncoder();
|
||||
auto resources = gfxstream::vk::ResourceTracker::get();
|
||||
VkCommandBufferAllocateInfo internal_allocateInfo;
|
||||
internal_allocateInfo = *pAllocateInfo;
|
||||
internal_allocateInfo.commandPool = gfxstream_commandPool->internal_object;
|
||||
result = resources->on_vkAllocateCommandBuffers(
|
||||
vkEnc, VK_SUCCESS, gfxstream_device->internal_object, &internal_allocateInfo,
|
||||
internal_objects.data());
|
||||
if (result == VK_SUCCESS) {
|
||||
gfxstream::vk::ResourceTracker::get()->addToCommandPool(
|
||||
gfxstream_commandPool->internal_object, pAllocateInfo->commandBufferCount,
|
||||
internal_objects.data());
|
||||
for (uint32_t i = 0; i < (uint32_t)internal_objects.size(); i++) {
|
||||
gfxstream_commandBuffers[i]->internal_object = internal_objects[i];
|
||||
// TODO: Also vk_command_buffer_init() on every mesa command buffer?
|
||||
pCommandBuffers[i] =
|
||||
gfxstream_vk_command_buffer_to_handle(gfxstream_commandBuffers[i]);
|
||||
}
|
||||
}
|
||||
}
|
||||
return result;
|
||||
}
|
||||
|
||||
void gfxstream_vk_FreeCommandBuffers(VkDevice device, VkCommandPool commandPool,
|
||||
uint32_t commandBufferCount,
|
||||
const VkCommandBuffer* pCommandBuffers) {
|
||||
AEMU_SCOPED_TRACE("vkFreeCommandBuffers");
|
||||
VK_FROM_HANDLE(gfxstream_vk_device, gfxstream_device, device);
|
||||
VK_FROM_HANDLE(gfxstream_vk_command_pool, gfxstream_commandPool, commandPool);
|
||||
{
|
||||
// Set up internal commandBuffer array for gfxstream-internal call
|
||||
std::vector<VkCommandBuffer> internal_objects(commandBufferCount);
|
||||
for (uint32_t i = 0; i < commandBufferCount; i++) {
|
||||
VK_FROM_HANDLE(gfxstream_vk_command_buffer, gfxstream_commandBuffer,
|
||||
pCommandBuffers[i]);
|
||||
internal_objects[i] = gfxstream_commandBuffer->internal_object;
|
||||
}
|
||||
auto vkEnc = gfxstream::vk::ResourceTracker::getThreadLocalEncoder();
|
||||
vkEnc->vkFreeCommandBuffers(gfxstream_device->internal_object,
|
||||
gfxstream_commandPool->internal_object, commandBufferCount,
|
||||
internal_objects.data(), true /* do lock */);
|
||||
}
|
||||
for (uint32_t i = 0; i < commandBufferCount; i++) {
|
||||
VK_FROM_HANDLE(gfxstream_vk_command_buffer, gfxstream_commandBuffer, pCommandBuffers[i]);
|
||||
vk_command_buffer_destroyOp(&gfxstream_commandBuffer->vk);
|
||||
}
|
||||
}
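Every entrypoint above follows the same shim pattern: convert the dispatchable handle to the Mesa-derived object with VK_FROM_HANDLE, then forward to the gfxstream-internal handle it embeds. A minimal standalone sketch of that pattern, using hypothetical mock types (not the real gfxstream structs):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-in for the gfxstream-internal handle target.
struct MockInternalDevice {
    int resetCount = 0;
};

// Mesa-derived shim object embedding the gfxstream-internal handle,
// like gfxstream_vk_device::internal_object.
struct ShimDevice {
    MockInternalDevice* internal_object;
};

// Mimics VK_FROM_HANDLE: the public handle is just a pointer to the shim.
static ShimDevice* fromHandle(void* handle) { return static_cast<ShimDevice*>(handle); }

// Entrypoint shim: unwrap the handle, forward to the internal object, return.
static int shim_ResetDevice(void* deviceHandle) {
    ShimDevice* dev = fromHandle(deviceHandle);
    dev->internal_object->resetCount++;  // stands in for the encoder call
    return 0;                            // VK_SUCCESS
}
```

The real code adds a third step, a ResourceTracker or VkEncoder call on the unwrapped handle, but the unwrap-forward-return shape is the same.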
src/gfxstream/guest/vulkan/gfxstream_vk_device.cpp (new file, 840 lines)
@@ -0,0 +1,840 @@
// Copyright (C) 2023 The Android Open Source Project
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//      http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include <errno.h>
#include <string.h>

#include "../vulkan_enc/vk_util.h"
#include "HostConnection.h"
#include "ProcessPipe.h"
#include "ResourceTracker.h"
#include "VkEncoder.h"
#include "gfxstream_vk_entrypoints.h"
#include "gfxstream_vk_private.h"
#include "vk_alloc.h"
#include "vk_device.h"
#include "vk_instance.h"
#include "vk_sync_dummy.h"

static HostConnection* getConnection(void) {
    auto hostCon = HostConnection::get();
    return hostCon;
}

static gfxstream::vk::VkEncoder* getVkEncoder(HostConnection* con) { return con->vkEncoder(); }

gfxstream::vk::ResourceTracker::ThreadingCallbacks threadingCallbacks = {
    .hostConnectionGetFunc = getConnection,
    .vkEncoderGetFunc = getVkEncoder,
};

VkResult SetupInstance(void) {
    uint32_t noRenderControlEnc = 0;
    HostConnection* hostCon = HostConnection::getOrCreate(kCapsetGfxStreamVulkan);
    if (!hostCon) {
        ALOGE("vulkan: Failed to get host connection\n");
        return VK_ERROR_DEVICE_LOST;
    }

    gfxstream::vk::ResourceTracker::get()->setupCaps(noRenderControlEnc);
    // Legacy goldfish path: can be deleted once goldfish is no longer used guest-side.
    if (!noRenderControlEnc) {
        // Implicitly sets up the sequence number
        ExtendedRCEncoderContext* rcEnc = hostCon->rcEncoder();
        if (!rcEnc) {
            ALOGE("vulkan: Failed to get renderControl encoder context\n");
            return VK_ERROR_DEVICE_LOST;
        }

        gfxstream::vk::ResourceTracker::get()->setupFeatures(rcEnc->featureInfo_const());
    }

    gfxstream::vk::ResourceTracker::get()->setThreadingCallbacks(threadingCallbacks);
    gfxstream::vk::ResourceTracker::get()->setSeqnoPtr(getSeqnoPtrForProcess());
    gfxstream::vk::VkEncoder* vkEnc = hostCon->vkEncoder();
    if (!vkEnc) {
        ALOGE("vulkan: Failed to get Vulkan encoder\n");
        return VK_ERROR_DEVICE_LOST;
    }

    return VK_SUCCESS;
}

#define VK_HOST_CONNECTION(ret)                                                    \
    HostConnection* hostCon = HostConnection::getOrCreate(kCapsetGfxStreamVulkan); \
    gfxstream::vk::VkEncoder* vkEnc = hostCon->vkEncoder();                        \
    if (!vkEnc) {                                                                  \
        ALOGE("vulkan: Failed to get Vulkan encoder\n");                           \
        return ret;                                                                \
    }

static bool instance_extension_table_initialized = false;
static struct vk_instance_extension_table gfxstream_vk_instance_extensions_supported = {0};

// Provided by Mesa components only; never encoded/decoded through gfxstream
static const char* const kMesaOnlyInstanceExtension[] = {
    VK_KHR_SURFACE_EXTENSION_NAME,
#if defined(LINUX_GUEST_BUILD)
    VK_KHR_WAYLAND_SURFACE_EXTENSION_NAME,
#endif
    VK_EXT_DEBUG_UTILS_EXTENSION_NAME,
};

static const char* const kMesaOnlyDeviceExtensions[] = {
    VK_KHR_SWAPCHAIN_EXTENSION_NAME,
};

static bool isMesaOnlyInstanceExtension(const char* name) {
    for (auto mesaExt : kMesaOnlyInstanceExtension) {
        if (!strncmp(mesaExt, name, VK_MAX_EXTENSION_NAME_SIZE)) return true;
    }
    return false;
}

static bool isMesaOnlyDeviceExtension(const char* name) {
    for (auto mesaExt : kMesaOnlyDeviceExtensions) {
        if (!strncmp(mesaExt, name, VK_MAX_EXTENSION_NAME_SIZE)) return true;
    }
    return false;
}

// Filtered extension names for encoding
static std::vector<const char*> filteredInstanceExtensionNames(uint32_t count,
                                                               const char* const* extNames) {
    std::vector<const char*> retList;
    for (uint32_t i = 0; i < count; ++i) {
        auto extName = extNames[i];
        if (!isMesaOnlyInstanceExtension(extName)) {
            retList.push_back(extName);
        }
    }
    return retList;
}

static std::vector<const char*> filteredDeviceExtensionNames(uint32_t count,
                                                             const char* const* extNames) {
    std::vector<const char*> retList;
    for (uint32_t i = 0; i < count; ++i) {
        auto extName = extNames[i];
        if (!isMesaOnlyDeviceExtension(extName)) {
            retList.push_back(extName);
        }
    }
    return retList;
}

static void get_device_extensions(VkPhysicalDevice physDevInternal,
                                  struct vk_device_extension_table* deviceExts) {
    VkResult result = (VkResult)0;
    auto vkEnc = gfxstream::vk::ResourceTracker::getThreadLocalEncoder();
    auto resources = gfxstream::vk::ResourceTracker::get();
    uint32_t numDeviceExts = 0;
    result = resources->on_vkEnumerateDeviceExtensionProperties(vkEnc, VK_SUCCESS, physDevInternal,
                                                                NULL, &numDeviceExts, NULL);
    if (VK_SUCCESS == result) {
        std::vector<VkExtensionProperties> extProps(numDeviceExts);
        result = resources->on_vkEnumerateDeviceExtensionProperties(
            vkEnc, VK_SUCCESS, physDevInternal, NULL, &numDeviceExts, extProps.data());
        if (VK_SUCCESS == result) {
            // device extensions from gfxstream
            for (uint32_t i = 0; i < numDeviceExts; i++) {
                for (uint32_t j = 0; j < VK_DEVICE_EXTENSION_COUNT; j++) {
                    if (0 == strncmp(extProps[i].extensionName,
                                     vk_device_extensions[j].extensionName,
                                     VK_MAX_EXTENSION_NAME_SIZE)) {
                        deviceExts->extensions[j] = true;
                        break;
                    }
                }
            }
            // device extensions from Mesa (no break here: every Mesa-only entry must be marked)
            for (uint32_t j = 0; j < VK_DEVICE_EXTENSION_COUNT; j++) {
                if (isMesaOnlyDeviceExtension(vk_device_extensions[j].extensionName)) {
                    deviceExts->extensions[j] = true;
                }
            }
        }
    }
}
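The filtering helpers above reduce to one rule: an extension name reaches the host encoder only if it is not on the Mesa-only list. A self-contained sketch of that rule, with an illustrative two-entry list standing in for the real tables (names and helpers here are hypothetical, not the ones in this file):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Illustrative Mesa-only list; the real tables are kMesaOnlyInstanceExtension
// and kMesaOnlyDeviceExtensions above.
static const char* const kMesaOnly[] = {"VK_KHR_surface", "VK_KHR_swapchain"};

static bool isMesaOnly(const char* name) {
    for (const char* mesaExt : kMesaOnly) {
        if (0 == strcmp(mesaExt, name)) return true;
    }
    return false;
}

// Keep only the extensions that should be encoded to the host.
static std::vector<const char*> filterForEncoder(uint32_t count, const char* const* names) {
    std::vector<const char*> out;
    for (uint32_t i = 0; i < count; ++i) {
        if (!isMesaOnly(names[i])) out.push_back(names[i]);
    }
    return out;
}
```

The result is what gfxstream_vk_CreateInstance/CreateDevice temporarily substitute into pCreateInfo before the encoder call, restoring the caller's list afterwards.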

static VkResult gfxstream_vk_physical_device_init(
    struct gfxstream_vk_physical_device* physical_device, struct gfxstream_vk_instance* instance,
    VkPhysicalDevice internal_object) {
    struct vk_device_extension_table supported_extensions = {0};
    get_device_extensions(internal_object, &supported_extensions);

    struct vk_physical_device_dispatch_table dispatch_table;
    memset(&dispatch_table, 0, sizeof(struct vk_physical_device_dispatch_table));
    vk_physical_device_dispatch_table_from_entrypoints(
        &dispatch_table, &gfxstream_vk_physical_device_entrypoints, false);
    vk_physical_device_dispatch_table_from_entrypoints(&dispatch_table,
                                                       &wsi_physical_device_entrypoints, false);

    // Initialize the mesa object
    VkResult result = vk_physical_device_init(&physical_device->vk, &instance->vk,
                                              &supported_extensions, NULL, NULL, &dispatch_table);

    if (VK_SUCCESS == result) {
        // Set the gfxstream-internal object
        physical_device->internal_object = internal_object;
        physical_device->instance = instance;
        // Note: Must use dummy_sync for correct sync object path in WSI operations
        physical_device->sync_types[0] = &vk_sync_dummy_type;
        physical_device->sync_types[1] = NULL;
        physical_device->vk.supported_sync_types = physical_device->sync_types;

        result = gfxstream_vk_wsi_init(physical_device);
    }

    return result;
}

static void gfxstream_vk_physical_device_finish(
    struct gfxstream_vk_physical_device* physical_device) {
    gfxstream_vk_wsi_finish(physical_device);

    vk_physical_device_finish(&physical_device->vk);
}

static void gfxstream_vk_destroy_physical_device(struct vk_physical_device* physical_device) {
    gfxstream_vk_physical_device_finish((struct gfxstream_vk_physical_device*)physical_device);
    vk_free(&physical_device->instance->alloc, physical_device);
}

static VkResult gfxstream_vk_enumerate_devices(struct vk_instance* vk_instance) {
    VkResult result = VK_SUCCESS;
    gfxstream_vk_instance* gfxstream_instance = (gfxstream_vk_instance*)vk_instance;
    uint32_t deviceCount = 0;
    auto vkEnc = gfxstream::vk::ResourceTracker::getThreadLocalEncoder();
    auto resources = gfxstream::vk::ResourceTracker::get();
    result = resources->on_vkEnumeratePhysicalDevices(
        vkEnc, VK_SUCCESS, gfxstream_instance->internal_object, &deviceCount, NULL);
    if (VK_SUCCESS != result) return result;
    std::vector<VkPhysicalDevice> internal_list(deviceCount);
    result = resources->on_vkEnumeratePhysicalDevices(
        vkEnc, VK_SUCCESS, gfxstream_instance->internal_object, &deviceCount, internal_list.data());

    if (VK_SUCCESS == result) {
        for (uint32_t i = 0; i < deviceCount; i++) {
            struct gfxstream_vk_physical_device* gfxstream_physicalDevice =
                (struct gfxstream_vk_physical_device*)vk_zalloc(
                    &gfxstream_instance->vk.alloc, sizeof(struct gfxstream_vk_physical_device), 8,
                    VK_SYSTEM_ALLOCATION_SCOPE_INSTANCE);
            if (!gfxstream_physicalDevice) {
                result = VK_ERROR_OUT_OF_HOST_MEMORY;
                break;
            }
            result = gfxstream_vk_physical_device_init(gfxstream_physicalDevice, gfxstream_instance,
                                                       internal_list[i]);
            if (VK_SUCCESS == result) {
                list_addtail(&gfxstream_physicalDevice->vk.link,
                             &gfxstream_instance->vk.physical_devices.list);
            } else {
                vk_free(&gfxstream_instance->vk.alloc, gfxstream_physicalDevice);
                break;
            }
        }
    }

    return result;
}

static struct vk_instance_extension_table* get_instance_extensions() {
    struct vk_instance_extension_table* const retTablePtr =
        &gfxstream_vk_instance_extensions_supported;
    if (!instance_extension_table_initialized) {
        VkResult result = SetupInstance();
        if (VK_SUCCESS == result) {
            VK_HOST_CONNECTION(retTablePtr)
            auto resources = gfxstream::vk::ResourceTracker::get();
            uint32_t numInstanceExts = 0;
            result = resources->on_vkEnumerateInstanceExtensionProperties(vkEnc, VK_SUCCESS, NULL,
                                                                          &numInstanceExts, NULL);
            if (VK_SUCCESS == result) {
                std::vector<VkExtensionProperties> extProps(numInstanceExts);
                result = resources->on_vkEnumerateInstanceExtensionProperties(
                    vkEnc, VK_SUCCESS, NULL, &numInstanceExts, extProps.data());
                if (VK_SUCCESS == result) {
                    // instance extensions from gfxstream
                    for (uint32_t i = 0; i < numInstanceExts; i++) {
                        for (uint32_t j = 0; j < VK_INSTANCE_EXTENSION_COUNT; j++) {
                            if (0 == strncmp(extProps[i].extensionName,
                                             vk_instance_extensions[j].extensionName,
                                             VK_MAX_EXTENSION_NAME_SIZE)) {
                                gfxstream_vk_instance_extensions_supported.extensions[j] = true;
                                break;
                            }
                        }
                    }
                    // instance extensions from Mesa
                    for (uint32_t j = 0; j < VK_INSTANCE_EXTENSION_COUNT; j++) {
                        if (isMesaOnlyInstanceExtension(vk_instance_extensions[j].extensionName)) {
                            gfxstream_vk_instance_extensions_supported.extensions[j] = true;
                        }
                    }
                    instance_extension_table_initialized = true;
                }
            }
        }
    }
    return retTablePtr;
}

VkResult gfxstream_vk_CreateInstance(const VkInstanceCreateInfo* pCreateInfo,
                                     const VkAllocationCallbacks* pAllocator,
                                     VkInstance* pInstance) {
    AEMU_SCOPED_TRACE("vkCreateInstance");

    struct gfxstream_vk_instance* instance;

    pAllocator = pAllocator ?: vk_default_allocator();
    instance = (struct gfxstream_vk_instance*)vk_zalloc(pAllocator, sizeof(*instance), 8,
                                                        VK_SYSTEM_ALLOCATION_SCOPE_OBJECT);
    if (NULL == instance) {
        return vk_error(NULL, VK_ERROR_OUT_OF_HOST_MEMORY);
    }

    VkResult result = VK_SUCCESS;
    /* Encoder call */
    {
        ALOGE("calling setup instance internally");
        result = SetupInstance();
        if (VK_SUCCESS != result) {
            return vk_error(NULL, result);
        }
        uint32_t initialEnabledExtensionCount = pCreateInfo->enabledExtensionCount;
        const char* const* initialPpEnabledExtensionNames = pCreateInfo->ppEnabledExtensionNames;
        std::vector<const char*> filteredExts = filteredInstanceExtensionNames(
            pCreateInfo->enabledExtensionCount, pCreateInfo->ppEnabledExtensionNames);
        // Temporarily modify createInfo for the encoder call
        VkInstanceCreateInfo* mutableCreateInfo = (VkInstanceCreateInfo*)pCreateInfo;
        mutableCreateInfo->enabledExtensionCount = static_cast<uint32_t>(filteredExts.size());
        mutableCreateInfo->ppEnabledExtensionNames = filteredExts.data();

        VK_HOST_CONNECTION(VK_ERROR_DEVICE_LOST);
        result = vkEnc->vkCreateInstance(pCreateInfo, nullptr, &instance->internal_object,
                                         true /* do lock */);
        if (VK_SUCCESS != result) {
            return vk_error(NULL, result);
        }
        // Revert the createInfo to the user-set data
        mutableCreateInfo->enabledExtensionCount = initialEnabledExtensionCount;
        mutableCreateInfo->ppEnabledExtensionNames = initialPpEnabledExtensionNames;
    }

    struct vk_instance_dispatch_table dispatch_table;
    memset(&dispatch_table, 0, sizeof(struct vk_instance_dispatch_table));
    vk_instance_dispatch_table_from_entrypoints(&dispatch_table, &gfxstream_vk_instance_entrypoints,
                                                false);
    vk_instance_dispatch_table_from_entrypoints(&dispatch_table, &wsi_instance_entrypoints, false);

    result = vk_instance_init(&instance->vk, get_instance_extensions(), &dispatch_table,
                              pCreateInfo, pAllocator);

    if (result != VK_SUCCESS) {
        vk_free(pAllocator, instance);
        return vk_error(NULL, result);
    }

    instance->vk.physical_devices.enumerate = gfxstream_vk_enumerate_devices;
    instance->vk.physical_devices.destroy = gfxstream_vk_destroy_physical_device;
    // TODO: instance->vk.physical_devices.try_create_for_drm (?)

    *pInstance = gfxstream_vk_instance_to_handle(instance);
    return VK_SUCCESS;
}

void gfxstream_vk_DestroyInstance(VkInstance _instance, const VkAllocationCallbacks* pAllocator) {
    AEMU_SCOPED_TRACE("vkDestroyInstance");
    if (VK_NULL_HANDLE == _instance) return;

    VK_FROM_HANDLE(gfxstream_vk_instance, instance, _instance);

    VK_HOST_CONNECTION()
    vkEnc->vkDestroyInstance(instance->internal_object, pAllocator, true /* do lock */);

    vk_instance_finish(&instance->vk);
    vk_free(&instance->vk.alloc, instance);

    // To make End2EndTests happy, since the host connection is now statically linked to
    // libvulkan_ranchu.so [separate HostConnections now].
#if defined(END2END_TESTS)
    hostCon->exit();
    processPipeRestart();
#endif
}

VkResult gfxstream_vk_EnumerateInstanceExtensionProperties(const char* pLayerName,
                                                           uint32_t* pPropertyCount,
                                                           VkExtensionProperties* pProperties) {
    AEMU_SCOPED_TRACE("vkEnumerateInstanceExtensionProperties");
    (void)pLayerName;

    return vk_enumerate_instance_extension_properties(get_instance_extensions(), pPropertyCount,
                                                      pProperties);
}

VkResult gfxstream_vk_EnumerateDeviceExtensionProperties(VkPhysicalDevice physicalDevice,
                                                         const char* pLayerName,
                                                         uint32_t* pPropertyCount,
                                                         VkExtensionProperties* pProperties) {
    AEMU_SCOPED_TRACE("vkEnumerateDeviceExtensionProperties");
    (void)pLayerName;
    VK_FROM_HANDLE(vk_physical_device, pdevice, physicalDevice);

    VK_OUTARRAY_MAKE_TYPED(VkExtensionProperties, out, pProperties, pPropertyCount);

    for (int i = 0; i < VK_DEVICE_EXTENSION_COUNT; i++) {
        if (!pdevice->supported_extensions.extensions[i]) continue;

        vk_outarray_append_typed(VkExtensionProperties, &out, prop) {
            *prop = vk_device_extensions[i];
        }
    }

    return vk_outarray_status(&out);
}

VkResult gfxstream_vk_CreateDevice(VkPhysicalDevice physicalDevice,
                                   const VkDeviceCreateInfo* pCreateInfo,
                                   const VkAllocationCallbacks* pAllocator, VkDevice* pDevice) {
    AEMU_SCOPED_TRACE("vkCreateDevice");
    VK_FROM_HANDLE(gfxstream_vk_physical_device, gfxstream_physicalDevice, physicalDevice);
    VkResult result = (VkResult)0;

    /*
     * Android's libvulkan implements VkPhysicalDeviceSwapchainMaintenance1FeaturesEXT, but
     * passes it to the underlying driver anyway. See:
     *
     * https://android-review.googlesource.com/c/platform/hardware/google/gfxstream/+/2839438
     *
     * and associated bugs. The Mesa VK runtime also checks this, so we have to filter it
     * out before it reaches the runtime.
     * vk_find_struct<VkPhysicalDeviceSwapchainMaintenance1FeaturesEXT>(..) doesn't work for
     * some reason.
     */
    VkBaseInStructure* extensionCreateInfo = (VkBaseInStructure*)(pCreateInfo->pNext);
    while (extensionCreateInfo) {
        if (extensionCreateInfo->sType ==
            VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SWAPCHAIN_MAINTENANCE_1_FEATURES_EXT) {
            auto swapchainMaintenance1Features =
                reinterpret_cast<VkPhysicalDeviceSwapchainMaintenance1FeaturesEXT*>(
                    extensionCreateInfo);
            swapchainMaintenance1Features->swapchainMaintenance1 = VK_FALSE;
        }
        extensionCreateInfo = (VkBaseInStructure*)(extensionCreateInfo->pNext);
    }

    const VkAllocationCallbacks* pMesaAllocator =
        pAllocator ?: &gfxstream_physicalDevice->instance->vk.alloc;
    struct gfxstream_vk_device* gfxstream_device = (struct gfxstream_vk_device*)vk_zalloc(
        pMesaAllocator, sizeof(struct gfxstream_vk_device), 8, VK_SYSTEM_ALLOCATION_SCOPE_OBJECT);
    result = gfxstream_device ? VK_SUCCESS : VK_ERROR_OUT_OF_HOST_MEMORY;
    if (VK_SUCCESS == result) {
        uint32_t initialEnabledExtensionCount = pCreateInfo->enabledExtensionCount;
        const char* const* initialPpEnabledExtensionNames = pCreateInfo->ppEnabledExtensionNames;
        std::vector<const char*> filteredExts = filteredDeviceExtensionNames(
            pCreateInfo->enabledExtensionCount, pCreateInfo->ppEnabledExtensionNames);
        // Temporarily modify createInfo for the encoder call
        VkDeviceCreateInfo* mutableCreateInfo = (VkDeviceCreateInfo*)pCreateInfo;
        mutableCreateInfo->enabledExtensionCount = static_cast<uint32_t>(filteredExts.size());
        mutableCreateInfo->ppEnabledExtensionNames = filteredExts.data();

        auto vkEnc = gfxstream::vk::ResourceTracker::getThreadLocalEncoder();
        result = vkEnc->vkCreateDevice(gfxstream_physicalDevice->internal_object, pCreateInfo,
                                       pAllocator, &gfxstream_device->internal_object,
                                       true /* do lock */);
        // Revert the createInfo to the user-set data
        mutableCreateInfo->enabledExtensionCount = initialEnabledExtensionCount;
        mutableCreateInfo->ppEnabledExtensionNames = initialPpEnabledExtensionNames;
    }
    if (VK_SUCCESS == result) {
        struct vk_device_dispatch_table dispatch_table;
        memset(&dispatch_table, 0, sizeof(struct vk_device_dispatch_table));
        vk_device_dispatch_table_from_entrypoints(&dispatch_table, &gfxstream_vk_device_entrypoints,
                                                  false);
        vk_device_dispatch_table_from_entrypoints(&dispatch_table, &wsi_device_entrypoints, false);

        result = vk_device_init(&gfxstream_device->vk, &gfxstream_physicalDevice->vk,
                                &dispatch_table, pCreateInfo, pMesaAllocator);
    }
    if (VK_SUCCESS == result) {
        gfxstream_device->physical_device = gfxstream_physicalDevice;
        // TODO: Initialize cmd_dispatch for emulated secondary command buffer support?
        gfxstream_device->vk.command_dispatch_table = &gfxstream_device->cmd_dispatch;
        *pDevice = gfxstream_vk_device_to_handle(gfxstream_device);
    } else {
        vk_free(pMesaAllocator, gfxstream_device);
    }

    return result;
}
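The pNext-chain walk in gfxstream_vk_CreateDevice generalizes to any list of VkBaseInStructure-headed structs: iterate by sType, cast when a match is found, patch the struct in place. A standalone sketch of the same idea, using hypothetical mock structs rather than the real Vulkan types:

```cpp
#include <cassert>
#include <cstdint>

// Minimal stand-ins for VkStructureType / VkBaseInStructure.
enum MockSType : uint32_t { MOCK_STYPE_OTHER = 0, MOCK_STYPE_SWAPCHAIN_MAINT1 = 1 };

struct MockBaseInStructure {
    MockSType sType;
    MockBaseInStructure* pNext;
};

// Extension struct sharing the common sType/pNext prefix.
struct MockSwapchainMaint1Features {
    MockSType sType;  // MOCK_STYPE_SWAPCHAIN_MAINT1
    MockBaseInStructure* pNext;
    bool swapchainMaintenance1;
};

// Walk the chain and force the feature off wherever the struct appears,
// mirroring the filtering done before the Mesa runtime sees the chain.
static void disableSwapchainMaint1(MockBaseInStructure* chain) {
    for (MockBaseInStructure* s = chain; s != nullptr; s = s->pNext) {
        if (s->sType == MOCK_STYPE_SWAPCHAIN_MAINT1) {
            reinterpret_cast<MockSwapchainMaint1Features*>(s)->swapchainMaintenance1 = false;
        }
    }
}
```

This is why the hand-rolled loop works where the vk_find_struct template reportedly did not: it only depends on the common sType/pNext header.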

void gfxstream_vk_DestroyDevice(VkDevice device, const VkAllocationCallbacks* pAllocator) {
    AEMU_SCOPED_TRACE("vkDestroyDevice");
    VK_FROM_HANDLE(gfxstream_vk_device, gfxstream_device, device);
    if (VK_NULL_HANDLE == device) return;

    auto vkEnc = gfxstream::vk::ResourceTracker::getThreadLocalEncoder();
    vkEnc->vkDestroyDevice(gfxstream_device->internal_object, pAllocator, true /* do lock */);

    /* Must destroy device queues manually */
    vk_foreach_queue_safe(queue, &gfxstream_device->vk) {
        vk_queue_finish(queue);
        vk_free(&gfxstream_device->vk.alloc, queue);
    }
    vk_device_finish(&gfxstream_device->vk);
    vk_free(&gfxstream_device->vk.alloc, gfxstream_device);
}

void gfxstream_vk_GetDeviceQueue(VkDevice device, uint32_t queueFamilyIndex, uint32_t queueIndex,
                                 VkQueue* pQueue) {
    AEMU_SCOPED_TRACE("vkGetDeviceQueue");
    VK_FROM_HANDLE(gfxstream_vk_device, gfxstream_device, device);
    struct gfxstream_vk_queue* gfxstream_queue = (struct gfxstream_vk_queue*)vk_zalloc(
        &gfxstream_device->vk.alloc, sizeof(struct gfxstream_vk_queue), 8,
        VK_SYSTEM_ALLOCATION_SCOPE_DEVICE);
    VkResult result = gfxstream_queue ? VK_SUCCESS : VK_ERROR_OUT_OF_HOST_MEMORY;
    if (VK_SUCCESS == result) {
        VkDeviceQueueCreateInfo createInfo = {
            .sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
            .pNext = NULL,
            .flags = 0,
            .queueFamilyIndex = queueFamilyIndex,
            .queueCount = 1,
            .pQueuePriorities = NULL,
        };
        result =
            vk_queue_init(&gfxstream_queue->vk, &gfxstream_device->vk, &createInfo, queueIndex);
    }
    if (VK_SUCCESS == result) {
        auto vkEnc = gfxstream::vk::ResourceTracker::getThreadLocalEncoder();
        vkEnc->vkGetDeviceQueue(gfxstream_device->internal_object, queueFamilyIndex, queueIndex,
                                &gfxstream_queue->internal_object, true /* do lock */);

        gfxstream_queue->device = gfxstream_device;
        *pQueue = gfxstream_vk_queue_to_handle(gfxstream_queue);
    } else {
        *pQueue = VK_NULL_HANDLE;
    }
}

void gfxstream_vk_GetDeviceQueue2(VkDevice device, const VkDeviceQueueInfo2* pQueueInfo,
                                  VkQueue* pQueue) {
    AEMU_SCOPED_TRACE("vkGetDeviceQueue2");
    VK_FROM_HANDLE(gfxstream_vk_device, gfxstream_device, device);
    struct gfxstream_vk_queue* gfxstream_queue = (struct gfxstream_vk_queue*)vk_zalloc(
        &gfxstream_device->vk.alloc, sizeof(struct gfxstream_vk_queue), 8,
        VK_SYSTEM_ALLOCATION_SCOPE_DEVICE);
    VkResult result = gfxstream_queue ? VK_SUCCESS : VK_ERROR_OUT_OF_HOST_MEMORY;
    if (VK_SUCCESS == result) {
        VkDeviceQueueCreateInfo createInfo = {
            .sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
            .pNext = NULL,
            .flags = pQueueInfo->flags,
            .queueFamilyIndex = pQueueInfo->queueFamilyIndex,
            .queueCount = 1,
            .pQueuePriorities = NULL,
        };
        result = vk_queue_init(&gfxstream_queue->vk, &gfxstream_device->vk, &createInfo,
                               pQueueInfo->queueIndex);
    }
    if (VK_SUCCESS == result) {
        auto vkEnc = gfxstream::vk::ResourceTracker::getThreadLocalEncoder();
        vkEnc->vkGetDeviceQueue2(gfxstream_device->internal_object, pQueueInfo,
                                 &gfxstream_queue->internal_object, true /* do lock */);

        gfxstream_queue->device = gfxstream_device;
        *pQueue = gfxstream_vk_queue_to_handle(gfxstream_queue);
    } else {
        *pQueue = VK_NULL_HANDLE;
    }
}

/* The loader wants us to expose a second GetInstanceProcAddr function
 * to work around certain LD_PRELOAD issues seen in apps.
 */
extern "C" PUBLIC VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL
vk_icdGetInstanceProcAddr(VkInstance instance, const char* pName);

extern "C" PUBLIC VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL
vk_icdGetInstanceProcAddr(VkInstance instance, const char* pName) {
    return gfxstream_vk_GetInstanceProcAddr(instance, pName);
}

/* vk_icd.h does not declare this function, so we declare it here to
 * suppress -Wmissing-prototypes.
 */
extern "C" PUBLIC VKAPI_ATTR VkResult VKAPI_CALL
vk_icdNegotiateLoaderICDInterfaceVersion(uint32_t* pSupportedVersion);

extern "C" PUBLIC VKAPI_ATTR VkResult VKAPI_CALL
vk_icdNegotiateLoaderICDInterfaceVersion(uint32_t* pSupportedVersion) {
    *pSupportedVersion = std::min(*pSupportedVersion, 3u);
    return VK_SUCCESS;
}
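Loader/ICD interface negotiation is just a clamp: the loader proposes the highest version it supports, and the ICD lowers it to the highest version it implements (3 in vk_icdNegotiateLoaderICDInterfaceVersion above). A tiny sketch of that contract with a hypothetical helper name:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Clamp the loader-proposed interface version to the highest one this
// ICD implements (3, matching the function above).
static uint32_t negotiateIcdVersion(uint32_t loaderVersion) {
    return std::min(loaderVersion, 3u);
}
```

A newer loader proposing version 5 thus settles on 3, while an older loader proposing 2 keeps 2.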
|
||||
|
||||
/* With version 4+ of the loader interface the ICD should expose
|
||||
* vk_icdGetPhysicalDeviceProcAddr()
|
||||
*/
|
||||
extern "C" PUBLIC VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL
|
||||
vk_icdGetPhysicalDeviceProcAddr(VkInstance _instance, const char* pName);
|
||||
|
||||
PFN_vkVoidFunction vk_icdGetPhysicalDeviceProcAddr(VkInstance _instance, const char* pName) {
|
||||
VK_FROM_HANDLE(gfxstream_vk_instance, instance, _instance);
|
||||
|
||||
return vk_instance_get_physical_device_proc_addr(&instance->vk, pName);
|
||||
}
|
||||
|
||||
PFN_vkVoidFunction gfxstream_vk_GetInstanceProcAddr(VkInstance _instance, const char* pName) {
|
||||
VK_FROM_HANDLE(gfxstream_vk_instance, instance, _instance);
|
||||
return vk_instance_get_proc_addr(&instance->vk, &gfxstream_vk_instance_entrypoints, pName);
|
||||
}
|
||||
|
||||
PFN_vkVoidFunction gfxstream_vk_GetDeviceProcAddr(VkDevice _device, const char* pName) {
|
||||
AEMU_SCOPED_TRACE("vkGetDeviceProcAddr");
|
||||
VK_FROM_HANDLE(gfxstream_vk_device, device, _device);
|
||||
return vk_device_get_proc_addr(&device->vk, pName);
|
||||
}
|
||||
|
||||
VkResult gfxstream_vk_AllocateMemory(VkDevice device, const VkMemoryAllocateInfo* pAllocateInfo,
                                     const VkAllocationCallbacks* pAllocator,
                                     VkDeviceMemory* pMemory) {
    AEMU_SCOPED_TRACE("vkAllocateMemory");
    VK_FROM_HANDLE(gfxstream_vk_device, gfxstream_device, device);
    VkResult vkAllocateMemory_VkResult_return = (VkResult)0;
    struct gfxstream_vk_device_memory* gfxstream_pMemory =
        (struct gfxstream_vk_device_memory*)vk_device_memory_create(
            (vk_device*)gfxstream_device, pAllocateInfo, pAllocator,
            sizeof(struct gfxstream_vk_device_memory));
    /* VkMemoryDedicatedAllocateInfo */
    VkMemoryDedicatedAllocateInfo* dedicatedAllocInfoPtr =
        (VkMemoryDedicatedAllocateInfo*)vk_find_struct<VkMemoryDedicatedAllocateInfo>(
            pAllocateInfo);
    if (dedicatedAllocInfoPtr) {
        if (dedicatedAllocInfoPtr->buffer) {
            VK_FROM_HANDLE(gfxstream_vk_buffer, gfxstream_buffer, dedicatedAllocInfoPtr->buffer);
            dedicatedAllocInfoPtr->buffer = gfxstream_buffer->internal_object;
        }
        if (dedicatedAllocInfoPtr->image) {
            VK_FROM_HANDLE(gfxstream_vk_image, gfxstream_image, dedicatedAllocInfoPtr->image);
            dedicatedAllocInfoPtr->image = gfxstream_image->internal_object;
        }
    }
    vkAllocateMemory_VkResult_return = gfxstream_pMemory ? VK_SUCCESS : VK_ERROR_OUT_OF_HOST_MEMORY;
    if (VK_SUCCESS == vkAllocateMemory_VkResult_return) {
        auto vkEnc = gfxstream::vk::ResourceTracker::getThreadLocalEncoder();
        auto resources = gfxstream::vk::ResourceTracker::get();
        vkAllocateMemory_VkResult_return = resources->on_vkAllocateMemory(
            vkEnc, VK_SUCCESS, gfxstream_device->internal_object, pAllocateInfo, pAllocator,
            &gfxstream_pMemory->internal_object);
    }
    *pMemory = gfxstream_vk_device_memory_to_handle(gfxstream_pMemory);
    return vkAllocateMemory_VkResult_return;
}

void gfxstream_vk_CmdBeginRenderPass(VkCommandBuffer commandBuffer,
                                     const VkRenderPassBeginInfo* pRenderPassBegin,
                                     VkSubpassContents contents) {
    AEMU_SCOPED_TRACE("vkCmdBeginRenderPass");
    VK_FROM_HANDLE(gfxstream_vk_command_buffer, gfxstream_commandBuffer, commandBuffer);
    {
        auto vkEnc = gfxstream::vk::ResourceTracker::getCommandBufferEncoder(
            gfxstream_commandBuffer->internal_object);
        VkRenderPassBeginInfo internal_pRenderPassBegin = vk_make_orphan_copy(*pRenderPassBegin);
        vk_struct_chain_iterator structChainIter =
            vk_make_chain_iterator(&internal_pRenderPassBegin);
        /* VkRenderPassBeginInfo::renderPass */
        VK_FROM_HANDLE(gfxstream_vk_render_pass, gfxstream_renderPass,
                       internal_pRenderPassBegin.renderPass);
        internal_pRenderPassBegin.renderPass = gfxstream_renderPass->internal_object;
        /* VkRenderPassBeginInfo::framebuffer */
        VK_FROM_HANDLE(gfxstream_vk_framebuffer, gfxstream_framebuffer,
                       internal_pRenderPassBegin.framebuffer);
        internal_pRenderPassBegin.framebuffer = gfxstream_framebuffer->internal_object;
        /* pNext = VkRenderPassAttachmentBeginInfo */
        std::vector<VkImageView> internal_pAttachments;
        VkRenderPassAttachmentBeginInfo internal_renderPassAttachmentBeginInfo;
        VkRenderPassAttachmentBeginInfo* pRenderPassAttachmentBeginInfo =
            (VkRenderPassAttachmentBeginInfo*)vk_find_struct<VkRenderPassAttachmentBeginInfo>(
                pRenderPassBegin);
        if (pRenderPassAttachmentBeginInfo) {
            internal_renderPassAttachmentBeginInfo = *pRenderPassAttachmentBeginInfo;
            /* VkRenderPassAttachmentBeginInfo::pAttachments */
            /* resize() (not reserve()) so that the operator[] writes below are in bounds */
            internal_pAttachments.resize(internal_renderPassAttachmentBeginInfo.attachmentCount);
            for (uint32_t i = 0; i < internal_renderPassAttachmentBeginInfo.attachmentCount; i++) {
                VK_FROM_HANDLE(gfxstream_vk_image_view, gfxstream_image_view,
                               internal_renderPassAttachmentBeginInfo.pAttachments[i]);
                internal_pAttachments[i] = gfxstream_image_view->internal_object;
            }
            internal_renderPassAttachmentBeginInfo.pAttachments = internal_pAttachments.data();
            vk_append_struct(&structChainIter, &internal_renderPassAttachmentBeginInfo);
        }
        vkEnc->vkCmdBeginRenderPass(gfxstream_commandBuffer->internal_object,
                                    &internal_pRenderPassBegin, contents, true /* do lock */);
    }
}

void gfxstream_vk_CmdBeginRenderPass2KHR(VkCommandBuffer commandBuffer,
                                         const VkRenderPassBeginInfo* pRenderPassBegin,
                                         const VkSubpassBeginInfo* pSubpassBeginInfo) {
    AEMU_SCOPED_TRACE("vkCmdBeginRenderPass2KHR");
    VK_FROM_HANDLE(gfxstream_vk_command_buffer, gfxstream_commandBuffer, commandBuffer);
    {
        auto vkEnc = gfxstream::vk::ResourceTracker::getCommandBufferEncoder(
            gfxstream_commandBuffer->internal_object);
        VkRenderPassBeginInfo internal_pRenderPassBegin = vk_make_orphan_copy(*pRenderPassBegin);
        vk_struct_chain_iterator structChainIter =
            vk_make_chain_iterator(&internal_pRenderPassBegin);
        /* VkRenderPassBeginInfo::renderPass */
        VK_FROM_HANDLE(gfxstream_vk_render_pass, gfxstream_renderPass,
                       internal_pRenderPassBegin.renderPass);
        internal_pRenderPassBegin.renderPass = gfxstream_renderPass->internal_object;
        /* VkRenderPassBeginInfo::framebuffer */
        VK_FROM_HANDLE(gfxstream_vk_framebuffer, gfxstream_framebuffer,
                       internal_pRenderPassBegin.framebuffer);
        internal_pRenderPassBegin.framebuffer = gfxstream_framebuffer->internal_object;
        /* pNext = VkRenderPassAttachmentBeginInfo */
        std::vector<VkImageView> internal_pAttachments;
        VkRenderPassAttachmentBeginInfo internal_renderPassAttachmentBeginInfo;
        VkRenderPassAttachmentBeginInfo* pRenderPassAttachmentBeginInfo =
            (VkRenderPassAttachmentBeginInfo*)vk_find_struct<VkRenderPassAttachmentBeginInfo>(
                pRenderPassBegin);
        if (pRenderPassAttachmentBeginInfo) {
            internal_renderPassAttachmentBeginInfo = *pRenderPassAttachmentBeginInfo;
            /* VkRenderPassAttachmentBeginInfo::pAttachments */
            /* resize() (not reserve()) so that the operator[] writes below are in bounds */
            internal_pAttachments.resize(internal_renderPassAttachmentBeginInfo.attachmentCount);
            for (uint32_t i = 0; i < internal_renderPassAttachmentBeginInfo.attachmentCount; i++) {
                VK_FROM_HANDLE(gfxstream_vk_image_view, gfxstream_image_view,
                               internal_renderPassAttachmentBeginInfo.pAttachments[i]);
                internal_pAttachments[i] = gfxstream_image_view->internal_object;
            }
            internal_renderPassAttachmentBeginInfo.pAttachments = internal_pAttachments.data();
            vk_append_struct(&structChainIter, &internal_renderPassAttachmentBeginInfo);
        }
        vkEnc->vkCmdBeginRenderPass2KHR(gfxstream_commandBuffer->internal_object,
                                        &internal_pRenderPassBegin, pSubpassBeginInfo,
                                        true /* do lock */);
    }
}

VkResult gfxstream_vk_GetMemoryFdKHR(VkDevice device, const VkMemoryGetFdInfoKHR* pGetFdInfo,
                                     int* pFd) {
    AEMU_SCOPED_TRACE("vkGetMemoryFdKHR");
    VK_FROM_HANDLE(gfxstream_vk_device, gfxstream_device, device);
    VkResult vkGetMemoryFdKHR_VkResult_return = (VkResult)0;

    {
        auto vkEnc = gfxstream::vk::ResourceTracker::getThreadLocalEncoder();
        std::vector<VkMemoryGetFdInfoKHR> internal_pGetFdInfo(1);
        for (uint32_t i = 0; i < 1; ++i) {
            internal_pGetFdInfo[i] = pGetFdInfo[i];
            /* VkMemoryGetFdInfoKHR::memory */
            VK_FROM_HANDLE(gfxstream_vk_device_memory, gfxstream_memory,
                           internal_pGetFdInfo[i].memory);
            internal_pGetFdInfo[i].memory = gfxstream_memory->internal_object;
        }
        auto resources = gfxstream::vk::ResourceTracker::get();
        vkGetMemoryFdKHR_VkResult_return = resources->on_vkGetMemoryFdKHR(
            vkEnc, VK_SUCCESS, gfxstream_device->internal_object, internal_pGetFdInfo.data(), pFd);
    }
    return vkGetMemoryFdKHR_VkResult_return;
}

VkResult gfxstream_vk_EnumerateInstanceLayerProperties(uint32_t* pPropertyCount,
                                                       VkLayerProperties* pProperties) {
    AEMU_SCOPED_TRACE("vkEnumerateInstanceLayerProperties");
    auto result = SetupInstance();
    if (VK_SUCCESS != result) {
        return vk_error(NULL, result);
    }

    VkResult vkEnumerateInstanceLayerProperties_VkResult_return = (VkResult)0;
    {
        auto vkEnc = gfxstream::vk::ResourceTracker::getThreadLocalEncoder();
        vkEnumerateInstanceLayerProperties_VkResult_return =
            vkEnc->vkEnumerateInstanceLayerProperties(pPropertyCount, pProperties,
                                                      true /* do lock */);
    }
    return vkEnumerateInstanceLayerProperties_VkResult_return;
}

VkResult gfxstream_vk_EnumerateInstanceVersion(uint32_t* pApiVersion) {
    AEMU_SCOPED_TRACE("vkEnumerateInstanceVersion");
    auto result = SetupInstance();
    if (VK_SUCCESS != result) {
        return vk_error(NULL, result);
    }

    VkResult vkEnumerateInstanceVersion_VkResult_return = (VkResult)0;
    {
        auto vkEnc = gfxstream::vk::ResourceTracker::getThreadLocalEncoder();
        vkEnumerateInstanceVersion_VkResult_return =
            vkEnc->vkEnumerateInstanceVersion(pApiVersion, true /* do lock */);
    }
    return vkEnumerateInstanceVersion_VkResult_return;
}

VkResult gfxstream_vk_CreateComputePipelines(VkDevice device, VkPipelineCache pipelineCache,
                                             uint32_t createInfoCount,
                                             const VkComputePipelineCreateInfo* pCreateInfos,
                                             const VkAllocationCallbacks* pAllocator,
                                             VkPipeline* pPipelines) {
    AEMU_SCOPED_TRACE("vkCreateComputePipelines");
    VkResult vkCreateComputePipelines_VkResult_return = (VkResult)0;
    VK_FROM_HANDLE(gfxstream_vk_device, gfxstream_device, device);
    VK_FROM_HANDLE(gfxstream_vk_pipeline_cache, gfxstream_pipelineCache, pipelineCache);
    struct gfxstream_vk_pipeline* gfxstream_pPipelines = (gfxstream_vk_pipeline*)vk_object_zalloc(
        &gfxstream_device->vk, pAllocator, sizeof(gfxstream_vk_pipeline), VK_OBJECT_TYPE_PIPELINE);
    vkCreateComputePipelines_VkResult_return =
        gfxstream_pPipelines ? VK_SUCCESS : VK_ERROR_OUT_OF_HOST_MEMORY;
    if (VK_SUCCESS == vkCreateComputePipelines_VkResult_return) {
        auto vkEnc = gfxstream::vk::ResourceTracker::getThreadLocalEncoder();
        std::vector<VkComputePipelineCreateInfo> internal_pCreateInfos(createInfoCount);
        std::vector<VkPipelineShaderStageCreateInfo> internal_VkComputePipelineCreateInfo_stage(
            createInfoCount);
        for (uint32_t i = 0; i < createInfoCount; ++i) {
            internal_pCreateInfos[i] = pCreateInfos[i];
            /* VkComputePipelineCreateInfo::stage */
            {
                internal_VkComputePipelineCreateInfo_stage[i] = internal_pCreateInfos[i].stage;
                /* VkPipelineShaderStageCreateInfo::module */
                if (internal_VkComputePipelineCreateInfo_stage[i].module) {
                    VK_FROM_HANDLE(gfxstream_vk_shader_module, gfxstream_module,
                                   internal_VkComputePipelineCreateInfo_stage[i].module);
                    internal_VkComputePipelineCreateInfo_stage[i].module =
                        gfxstream_module->internal_object;
                }
                internal_pCreateInfos[i].stage = internal_VkComputePipelineCreateInfo_stage[i];
            }
            /* VkComputePipelineCreateInfo::layout */
            VK_FROM_HANDLE(gfxstream_vk_pipeline_layout, gfxstream_layout,
                           internal_pCreateInfos[i].layout);
            internal_pCreateInfos[i].layout = gfxstream_layout->internal_object;
            /* VkComputePipelineCreateInfo::basePipelineHandle */
            if (internal_pCreateInfos[i].basePipelineHandle) {
                VK_FROM_HANDLE(gfxstream_vk_pipeline, gfxstream_basePipelineHandle,
                               internal_pCreateInfos[i].basePipelineHandle);
                internal_pCreateInfos[i].basePipelineHandle =
                    gfxstream_basePipelineHandle->internal_object;
            }
        }
        vkCreateComputePipelines_VkResult_return = vkEnc->vkCreateComputePipelines(
            gfxstream_device->internal_object,
            gfxstream_pipelineCache ? gfxstream_pipelineCache->internal_object : VK_NULL_HANDLE,
            createInfoCount, internal_pCreateInfos.data(), pAllocator,
            &gfxstream_pPipelines->internal_object, true /* do lock */);
    }
    *pPipelines = gfxstream_vk_pipeline_to_handle(gfxstream_pPipelines);
    return vkCreateComputePipelines_VkResult_return;
}

src/gfxstream/guest/vulkan/gfxstream_vk_fuchsia.cpp (new file, 116 lines)

/*
 * Copyright 2023 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

#include <fidl/fuchsia.logger/cpp/wire.h>
#include <lib/syslog/global.h>
#include <lib/zx/channel.h>
#include <lib/zx/socket.h>
#include <lib/zxio/zxio.h>
#include <unistd.h>

#include "TraceProviderFuchsia.h"
#include "services/service_connector.h"

class VulkanDevice {
  public:
    VulkanDevice() : mHostSupportsGoldfish(IsAccessible(QEMU_PIPE_PATH)) {
        InitLogger();
        InitTraceProvider();
        gfxstream::vk::ResourceTracker::get();
    }

    static void InitLogger();

    static bool IsAccessible(const char* name) {
        zx_handle_t handle = GetConnectToServiceFunction()(name);
        if (handle == ZX_HANDLE_INVALID) return false;

        zxio_storage_t io_storage;
        zx_status_t status = zxio_create(handle, &io_storage);
        if (status != ZX_OK) return false;

        status = zxio_close(&io_storage.io, /*should_wait=*/true);
        if (status != ZX_OK) return false;

        return true;
    }

    static VulkanDevice& GetInstance() {
        static VulkanDevice g_instance;
        return g_instance;
    }

    PFN_vkVoidFunction GetInstanceProcAddr(VkInstance instance, const char* name) {
        return ::GetInstanceProcAddr(instance, name);
    }

  private:
    void InitTraceProvider();

    TraceProviderFuchsia mTraceProvider;
    const bool mHostSupportsGoldfish;
};

void VulkanDevice::InitLogger() {
    auto log_socket = ([]() -> std::optional<zx::socket> {
        fidl::ClientEnd<fuchsia_logger::LogSink> channel{
            zx::channel{GetConnectToServiceFunction()("/svc/fuchsia.logger.LogSink")}};
        if (!channel.is_valid()) return std::nullopt;

        zx::socket local_socket, remote_socket;
        zx_status_t status = zx::socket::create(ZX_SOCKET_DATAGRAM, &local_socket, &remote_socket);
        if (status != ZX_OK) return std::nullopt;

        auto result = fidl::WireCall(channel)->Connect(std::move(remote_socket));

        if (!result.ok()) return std::nullopt;

        return local_socket;
    })();
    if (!log_socket) return;

    fx_logger_config_t config = {
        .min_severity = FX_LOG_INFO,
        .log_sink_socket = log_socket->release(),
        .tags = nullptr,
        .num_tags = 0,
    };

    fx_log_reconfigure(&config);
}

void VulkanDevice::InitTraceProvider() {
    if (!mTraceProvider.Initialize()) {
        ALOGE("Trace provider failed to initialize");
    }
}

typedef VkResult(VKAPI_PTR* PFN_vkOpenInNamespaceAddr)(const char* pName, uint32_t handle);

namespace {

PFN_vkOpenInNamespaceAddr g_vulkan_connector;

zx_handle_t LocalConnectToServiceFunction(const char* pName) {
    zx::channel remote_endpoint, local_endpoint;
    zx_status_t status;
    if ((status = zx::channel::create(0, &remote_endpoint, &local_endpoint)) != ZX_OK) {
        ALOGE("zx::channel::create failed: %d", status);
        return ZX_HANDLE_INVALID;
    }
    if ((status = g_vulkan_connector(pName, remote_endpoint.release())) != ZX_OK) {
        ALOGE("vulkan_connector failed: %d", status);
        return ZX_HANDLE_INVALID;
    }
    return local_endpoint.release();
}

}  // namespace

extern "C" __attribute__((visibility("default"))) void vk_icdInitializeOpenInNamespaceCallback(
    PFN_vkOpenInNamespaceAddr callback) {
    g_vulkan_connector = callback;
    SetConnectToServiceFunction(&LocalConnectToServiceFunction);
}

src/gfxstream/guest/vulkan/gfxstream_vk_wsi.cpp (new file, 47 lines)

// Copyright (C) 2023 The Android Open Source Project
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "gfxstream_vk_entrypoints.h"
#include "gfxstream_vk_private.h"
#include "wsi_common.h"

static VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL
gfxstream_vk_wsi_proc_addr(VkPhysicalDevice physicalDevice, const char* pName) {
    VK_FROM_HANDLE(gfxstream_vk_physical_device, pdevice, physicalDevice);
    return vk_instance_get_proc_addr_unchecked(&pdevice->instance->vk, pName);
}

VkResult gfxstream_vk_wsi_init(struct gfxstream_vk_physical_device* physical_device) {
    VkResult result = (VkResult)0;

    const struct wsi_device_options options = {.sw_device = false};
    result = wsi_device_init(
        &physical_device->wsi_device, gfxstream_vk_physical_device_to_handle(physical_device),
        gfxstream_vk_wsi_proc_addr, &physical_device->instance->vk.alloc, -1, NULL, &options);
    if (result != VK_SUCCESS) return result;

    // Allow guest-side modifier code paths
    physical_device->wsi_device.supports_modifiers = true;
    // For DRM, use the buffer-blit path for WSI images
    physical_device->wsi_device.supports_scanout = false;

    physical_device->vk.wsi_device = &physical_device->wsi_device;

    return result;
}

void gfxstream_vk_wsi_finish(struct gfxstream_vk_physical_device* physical_device) {
    physical_device->vk.wsi_device = NULL;
    wsi_device_finish(&physical_device->wsi_device, &physical_device->instance->vk.alloc);
}

@@ -1,23 +1,28 @@
 # Copyright 2022 Android Open Source Project
 # SPDX-License-Identifier: MIT
 
 vk_api_xml = files('vk.xml')
 vk_icd_gen = files('vk_icd_gen.py')
 
 files_lib_vulkan_gfxstream = files(
   'goldfish_vulkan.cpp',
+  'gfxstream_vk_device.cpp',
+  'gfxstream_vk_cmd.cpp',
+  'gfxstream_vk_wsi.cpp'
 )
 
 lib_vulkan_gfxstream = shared_library(
   'vulkan_gfxstream',
-  files_lib_vulkan_gfxstream,
+  files_lib_vulkan_gfxstream + files_lib_vulkan_enc + gfxstream_vk_entrypoints,
   cpp_args: cpp_args,
   include_directories: [inc_vulkan_headers, inc_opengl_headers, inc_android_emu,
                         inc_android_compat, inc_opengl_system, inc_guest_iostream,
                         inc_opengl_codec, inc_render_enc, inc_vulkan_enc, inc_platform,
-                        inc_goldfish_address_space, inc_system, inc_codec_common],
+                        inc_goldfish_address_space, inc_system, inc_include, inc_src,
+                        inc_platform, inc_codec_common],
   link_with: [lib_android_compat, lib_emu_android_base, lib_stream,
-              lib_vulkan_enc, libvulkan_wsi],
+              libvulkan_wsi, lib_platform],
   link_args: [vulkan_icd_link_args, ld_args_bsymbolic, ld_args_gc_sections],
   link_depends: vulkan_icd_link_depends,
   dependencies: [dependency('libdrm'), idep_vulkan_wsi_headers,
                  idep_vulkan_runtime_headers, idep_vulkan_runtime,
                  idep_vulkan_util_headers, idep_vulkan_wsi],
   install: true,
 )

@@ -27,7 +32,7 @@ gfxstream_icd = custom_target(
   output : 'gfxstream_icd.@0@.json'.format(host_machine.cpu()),
   command : [
     prog_python, '@INPUT0@',
-    '--api-version', '1.0', '--xml', '@INPUT1@',
+    '--api-version', '1.1', '--xml', '@INPUT1@',
     '--lib-path', join_paths(get_option('prefix'), get_option('libdir'),
                              'libvulkan_gfxstream.so'),
     '--out', '@OUTPUT@',

@@ -21,11 +21,11 @@
 #endif
 #endif
 
-#include "../OpenglSystemCommon/HostConnection.h"
-#include <assert.h>
+#include "../OpenglSystemCommon/HostConnection.h"
+#include "vk_format_info.h"
+#include "vk_util.h"
+#include <assert.h>
 
 namespace gfxstream {
 namespace vk {

@@ -34,36 +34,28 @@ namespace vk {
 /* Construct ahw usage mask from image usage bits, see
  * 'AHardwareBuffer Usage Equivalence' in Vulkan spec.
  */
-uint64_t
-getAndroidHardwareBufferUsageFromVkUsage(const VkImageCreateFlags vk_create,
-                                         const VkImageUsageFlags vk_usage)
-{
-    uint64_t ahw_usage = 0;
+uint64_t getAndroidHardwareBufferUsageFromVkUsage(const VkImageCreateFlags vk_create,
+                                                  const VkImageUsageFlags vk_usage) {
+    uint64_t ahw_usage = 0;
 
-    if (vk_usage & VK_IMAGE_USAGE_SAMPLED_BIT)
-        ahw_usage |= AHARDWAREBUFFER_USAGE_GPU_SAMPLED_IMAGE;
+    if (vk_usage & VK_IMAGE_USAGE_SAMPLED_BIT) ahw_usage |= AHARDWAREBUFFER_USAGE_GPU_SAMPLED_IMAGE;
 
     if (vk_usage & VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT)
         ahw_usage |= AHARDWAREBUFFER_USAGE_GPU_SAMPLED_IMAGE;
 
     if (vk_usage & VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT)
         ahw_usage |= AHARDWAREBUFFER_USAGE_GPU_COLOR_OUTPUT;
 
     if (vk_create & VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT)
         ahw_usage |= AHARDWAREBUFFER_USAGE_GPU_CUBE_MAP;
 
     if (vk_create & VK_IMAGE_CREATE_PROTECTED_BIT)
         ahw_usage |= AHARDWAREBUFFER_USAGE_PROTECTED_CONTENT;
 
     /* No usage bits set - set at least one GPU usage. */
-    if (ahw_usage == 0)
-        ahw_usage = AHARDWAREBUFFER_USAGE_GPU_SAMPLED_IMAGE;
+    if (ahw_usage == 0) ahw_usage = AHARDWAREBUFFER_USAGE_GPU_SAMPLED_IMAGE;
 
-    return ahw_usage;
-}
-
-void updateMemoryTypeBits(uint32_t* memoryTypeBits, uint32_t colorBufferMemoryIndex) {
-    *memoryTypeBits = 1u << colorBufferMemoryIndex;
+    return ahw_usage;
 }
 
 VkResult getAndroidHardwareBufferPropertiesANDROID(

@@ -74,45 +66,45 @@ VkResult getAndroidHardwareBufferPropertiesANDROID(
 
     const auto format = grallocHelper->getFormat(buffer);
     if (ahbFormatProps) {
-        switch(format) {
+        switch (format) {
         case AHARDWAREBUFFER_FORMAT_R8G8B8A8_UNORM:
             ahbFormatProps->format = VK_FORMAT_R8G8B8A8_UNORM;
             break;
         case AHARDWAREBUFFER_FORMAT_R8G8B8X8_UNORM:
             ahbFormatProps->format = VK_FORMAT_R8G8B8A8_UNORM;
             break;
         case AHARDWAREBUFFER_FORMAT_R8G8B8_UNORM:
             ahbFormatProps->format = VK_FORMAT_R8G8B8_UNORM;
             break;
         case AHARDWAREBUFFER_FORMAT_R5G6B5_UNORM:
             ahbFormatProps->format = VK_FORMAT_R5G6B5_UNORM_PACK16;
             break;
         case AHARDWAREBUFFER_FORMAT_R16G16B16A16_FLOAT:
             ahbFormatProps->format = VK_FORMAT_R16G16B16A16_SFLOAT;
             break;
         case AHARDWAREBUFFER_FORMAT_R10G10B10A2_UNORM:
             ahbFormatProps->format = VK_FORMAT_A2B10G10R10_UNORM_PACK32;
             break;
         case AHARDWAREBUFFER_FORMAT_D16_UNORM:
             ahbFormatProps->format = VK_FORMAT_D16_UNORM;
             break;
         case AHARDWAREBUFFER_FORMAT_D24_UNORM:
             ahbFormatProps->format = VK_FORMAT_X8_D24_UNORM_PACK32;
             break;
         case AHARDWAREBUFFER_FORMAT_D24_UNORM_S8_UINT:
             ahbFormatProps->format = VK_FORMAT_D24_UNORM_S8_UINT;
             break;
         case AHARDWAREBUFFER_FORMAT_D32_FLOAT:
             ahbFormatProps->format = VK_FORMAT_D32_SFLOAT;
             break;
         case AHARDWAREBUFFER_FORMAT_D32_FLOAT_S8_UINT:
             ahbFormatProps->format = VK_FORMAT_D32_SFLOAT_S8_UINT;
             break;
         case AHARDWAREBUFFER_FORMAT_S8_UINT:
             ahbFormatProps->format = VK_FORMAT_S8_UINT;
             break;
         default:
             ahbFormatProps->format = VK_FORMAT_UNDEFINED;
         }
         ahbFormatProps->externalFormat = format;

@@ -128,10 +120,8 @@ VkResult getAndroidHardwareBufferPropertiesANDROID(
         // VK_FORMAT_FEATURE_TRANSFER_DST_BIT
         // VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BIT
         ahbFormatProps->formatFeatures =
-            VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT |
-            VK_FORMAT_FEATURE_MIDPOINT_CHROMA_SAMPLES_BIT |
-            VK_FORMAT_FEATURE_TRANSFER_SRC_BIT |
-            VK_FORMAT_FEATURE_TRANSFER_DST_BIT |
+            VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT | VK_FORMAT_FEATURE_MIDPOINT_CHROMA_SAMPLES_BIT |
+            VK_FORMAT_FEATURE_TRANSFER_SRC_BIT | VK_FORMAT_FEATURE_TRANSFER_DST_BIT |
             VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BIT;
 
         // "Implementations may not always be able to determine the color model,

@@ -166,7 +156,8 @@ VkResult getAndroidHardwareBufferPropertiesANDROID(
         // * U (CB) comes from the B-channel (after swizzle)
         // * V (CR) comes from the R-channel (after swizzle)
         //
-        // See https://www.khronos.org/registry/vulkan/specs/1.3-extensions/html/vkspec.html#textures-sampler-YCbCr-conversion
+        // See
+        // https://www.khronos.org/registry/vulkan/specs/1.3-extensions/html/vkspec.html#textures-sampler-YCbCr-conversion
         //
         // To match the above, the guest needs to swizzle such that:
         //

@@ -204,10 +195,9 @@ VkResult getAndroidHardwareBufferPropertiesANDROID(
 #endif
 #endif
 
-        ahbFormatProps->suggestedYcbcrModel =
-            android_format_is_yuv(format) ?
-                VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_601 :
-                VK_SAMPLER_YCBCR_MODEL_CONVERSION_RGB_IDENTITY;
+        ahbFormatProps->suggestedYcbcrModel = android_format_is_yuv(format)
+                                                  ? VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_601
+                                                  : VK_SAMPLER_YCBCR_MODEL_CONVERSION_RGB_IDENTITY;
         ahbFormatProps->suggestedYcbcrRange = VK_SAMPLER_YCBCR_RANGE_ITU_FULL;
 
         ahbFormatProps->suggestedXChromaOffset = VK_CHROMA_LOCATION_MIDPOINT;

@@ -281,26 +271,24 @@ VkResult createAndroidHardwareBuffer(gfxstream::Gralloc* gralloc, bool hasDedica
 
     /* If caller passed dedicated information. */
     if (hasDedicatedImage) {
         w = imageExtent.width;
         h = imageExtent.height;
         layers = imageLayers;
         format = android_format_from_vk(imageFormat);
         usage = getAndroidHardwareBufferUsageFromVkUsage(imageCreateFlags, imageUsage);
     } else if (hasDedicatedBuffer) {
         w = bufferSize;
         format = AHARDWAREBUFFER_FORMAT_BLOB;
-        usage = AHARDWAREBUFFER_USAGE_CPU_READ_OFTEN |
-                AHARDWAREBUFFER_USAGE_CPU_WRITE_OFTEN |
-                AHARDWAREBUFFER_USAGE_GPU_DATA_BUFFER;
+        usage = AHARDWAREBUFFER_USAGE_CPU_READ_OFTEN | AHARDWAREBUFFER_USAGE_CPU_WRITE_OFTEN |
+                AHARDWAREBUFFER_USAGE_GPU_DATA_BUFFER;
     } else {
         w = allocationInfoAllocSize;
         format = AHARDWAREBUFFER_FORMAT_BLOB;
-        usage = AHARDWAREBUFFER_USAGE_CPU_READ_OFTEN |
-                AHARDWAREBUFFER_USAGE_CPU_WRITE_OFTEN |
-                AHARDWAREBUFFER_USAGE_GPU_DATA_BUFFER;
+        usage = AHARDWAREBUFFER_USAGE_CPU_READ_OFTEN | AHARDWAREBUFFER_USAGE_CPU_WRITE_OFTEN |
+                AHARDWAREBUFFER_USAGE_GPU_DATA_BUFFER;
     }
 
-    struct AHardwareBuffer *ahb = NULL;
+    struct AHardwareBuffer* ahb = NULL;
 
     if (gralloc->allocate(w, h, format, usage, &ahb) != 0) {
         return VK_ERROR_OUT_OF_HOST_MEMORY;

@@ -25,10 +25,8 @@
 namespace gfxstream {
 namespace vk {
 
-uint64_t
-getAndroidHardwareBufferUsageFromVkUsage(
-    const VkImageCreateFlags vk_create,
-    const VkImageUsageFlags vk_usage);
+uint64_t getAndroidHardwareBufferUsageFromVkUsage(const VkImageCreateFlags vk_create,
+                                                  const VkImageUsageFlags vk_usage);
 
 void updateMemoryTypeBits(uint32_t* memoryTypeBits, uint32_t colorBufferMemoryIndex);
 

@@ -1,18 +1,18 @@
 /*
  * Copyright (C) 2021 The Android Open Source Project
  *
  * Licensed under the Apache License, Version 2.0 (the "License");
  * you may not use this file except in compliance with the License.
  * You may obtain a copy of the License at
  *
  * http://www.apache.org/licenses/LICENSE-2.0
  *
  * Unless required by applicable law or agreed to in writing, software
  * distributed under the License is distributed on an "AS IS" BASIS,
  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
 #include "CommandBufferStagingStream.h"
 
 #if PLATFORM_SDK_VERSION < 26

@@ -204,37 +204,33 @@ void* CommandBufferStagingStream::allocBuffer(size_t minSize) {
     return (void*)(getDataPtr() + m_writePos);
 }
 
-int CommandBufferStagingStream::commitBuffer(size_t size)
-{
+int CommandBufferStagingStream::commitBuffer(size_t size) {
     m_writePos += size;
     return 0;
 }
 
-const unsigned char *CommandBufferStagingStream::readFully(void*, size_t) {
+const unsigned char* CommandBufferStagingStream::readFully(void*, size_t) {
     // Not supported
     ALOGE("CommandBufferStagingStream::%s: Fatal: not supported\n", __func__);
     abort();
     return nullptr;
 }
 
-const unsigned char *CommandBufferStagingStream::read(void*, size_t*) {
+const unsigned char* CommandBufferStagingStream::read(void*, size_t*) {
     // Not supported
     ALOGE("CommandBufferStagingStream::%s: Fatal: not supported\n", __func__);
     abort();
     return nullptr;
 }
 
-int CommandBufferStagingStream::writeFully(const void*, size_t)
-{
+int CommandBufferStagingStream::writeFully(const void*, size_t) {
     // Not supported
     ALOGE("CommandBufferStagingStream::%s: Fatal: not supported\n", __func__);
     abort();
     return 0;
 }
 
-const unsigned char *CommandBufferStagingStream::commitBufferAndReadFully(
-    size_t, void *, size_t) {
-
+const unsigned char* CommandBufferStagingStream::commitBufferAndReadFully(size_t, void*, size_t) {
     // Not supported
     ALOGE("CommandBufferStagingStream::%s: Fatal: not supported\n", __func__);
     abort();

@@ -1,18 +1,18 @@
 /*
  * Copyright (C) 2021 The Android Open Source Project
  *
  * Licensed under the Apache License, Version 2.0 (the "License");
  * you may not use this file except in compliance with the License.
  * You may obtain a copy of the License at
  *
  * http://www.apache.org/licenses/LICENSE-2.0
  *
  * Unless required by applicable law or agreed to in writing, software
  * distributed under the License is distributed on an "AS IS" BASIS,
  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
 #ifndef __COMMAND_BUFFER_STAGING_STREAM_H
 #define __COMMAND_BUFFER_STAGING_STREAM_H
@@ -26,91 +26,91 @@ namespace gfxstream {
 namespace vk {
 
 class CommandBufferStagingStream : public gfxstream::guest::IOStream {
    public:
     // host will write kSyncDataReadComplete to the sync bytes to indicate memory is no longer being
-    // used by host. This is only used with custom allocators. The sync bytes are used to ensure that,
-    // during reallocations the guest does not free memory being read by the host. The guest ensures
-    // that the sync bytes are marked as read complete before releasing the memory.
+    // used by host. This is only used with custom allocators. The sync bytes are used to ensure
+    // that, during reallocations the guest does not free memory being read by the host. The guest
+    // ensures that the sync bytes are marked as read complete before releasing the memory.
     static constexpr size_t kSyncDataSize = 8;
     // indicates read is complete
     static constexpr uint32_t kSyncDataReadComplete = 0X0;
     // indicates read is pending
     static constexpr uint32_t kSyncDataReadPending = 0X1;
 
     // \struct backing memory structure
     struct Memory {
         VkDeviceMemory deviceMemory =
             VK_NULL_HANDLE;   // device memory associated with allocated memory
         void* ptr = nullptr;  // pointer to allocated memory
         bool operator==(const Memory& rhs) const {
             return (deviceMemory == rhs.deviceMemory) && (ptr == rhs.ptr);
         }
     };
 
     // allocator
     // param size to allocate
     // return allocated memory
     using Alloc = std::function<Memory(size_t)>;
     // free function
     // param memory to free
     using Free = std::function<void(const Memory&)>;
     // constructor
     // \param allocFn is the allocation function provided.
     // \param freeFn is the free function provided
     explicit CommandBufferStagingStream(const Alloc& allocFn, const Free& freeFn);
     // constructor
     explicit CommandBufferStagingStream();
     ~CommandBufferStagingStream();
 
     virtual size_t idealAllocSize(size_t len);
     virtual void* allocBuffer(size_t minSize);
     virtual int commitBuffer(size_t size);
     virtual const unsigned char* readFully(void* buf, size_t len);
     virtual const unsigned char* read(void* buf, size_t* inout_len);
     virtual int writeFully(const void* buf, size_t len);
     virtual const unsigned char* commitBufferAndReadFully(size_t size, void* buf, size_t len);
 
     void getWritten(unsigned char** bufOut, size_t* sizeOut);
     void reset();
 
     // marks the command buffer stream as flushing. The owner of CommandBufferStagingStream
     // should call markFlushing after finishing writing to the stream.
     // This will mark the sync data to kSyncDataReadPending. This is only applicable when
     // using custom allocators. markFlushing will be a no-op if called
     // when not using custom allocators
     void markFlushing();
 
-    // gets the device memory associated with the stream. This is VK_NULL_HANDLE for default allocation
-    // \return device memory
+    // gets the device memory associated with the stream. This is VK_NULL_HANDLE for default
+    // allocation \return device memory
     VkDeviceMemory getDeviceMemory();
 
    private:
     // underlying memory for data
     Memory m_mem;
     // size of portion of memory available for data.
     // for custom allocation, this size excludes size of sync data.
     size_t m_size;
     // current write position in data buffer
     uint32_t m_writePos;
 
     // alloc function
     Alloc m_alloc;
     // free function
     Free m_free;
 
     // realloc function
     // \param size of memory to be allocated
-    // \ param reference size to update with actual size allocated. This size can be < requested size
-    // for custom allocation to account for sync data
+    // \ param reference size to update with actual size allocated. This size can be < requested
+    // size for custom allocation to account for sync data
     using Realloc = std::function<Memory(const Memory&, size_t)>;
     Realloc m_realloc;
 
     // flag tracking use of custom allocation/free
     bool m_usingCustomAlloc = false;
 
     // adjusted memory location to point to start of data after accounting for metadata
     // \return pointer to data start
     unsigned char* getDataPtr();
 };
 
 } // namespace vk
@@ -13,6 +13,7 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
 #include "DescriptorSetVirtualization.h"
+
 #include "Resources.h"
 
 namespace gfxstream {
@@ -27,7 +28,8 @@ void clearReifiedDescriptorSet(ReifiedDescriptorSet* set) {
     set->pendingWriteArrayRanges.clear();
 }
 
-void initDescriptorWriteTable(const std::vector<VkDescriptorSetLayoutBinding>& layoutBindings, DescriptorWriteTable& table) {
+void initDescriptorWriteTable(const std::vector<VkDescriptorSetLayoutBinding>& layoutBindings,
+                              DescriptorWriteTable& table) {
     uint32_t highestBindingNumber = 0;
 
     for (uint32_t i = 0; i < layoutBindings.size(); ++i) {
@@ -39,8 +41,7 @@ void initDescriptorWriteTable(const std::vector<VkDescriptorSetLayoutBinding>& l
     std::vector<uint32_t> countsEachBinding(highestBindingNumber + 1, 0);
 
     for (uint32_t i = 0; i < layoutBindings.size(); ++i) {
-        countsEachBinding[layoutBindings[i].binding] =
-            layoutBindings[i].descriptorCount;
+        countsEachBinding[layoutBindings[i].binding] = layoutBindings[i].descriptorCount;
     }
 
     table.resize(countsEachBinding.size());
@@ -55,8 +56,8 @@ void initDescriptorWriteTable(const std::vector<VkDescriptorSetLayoutBinding>& l
     }
 }
 
-static void initializeReifiedDescriptorSet(VkDescriptorPool pool, VkDescriptorSetLayout setLayout, ReifiedDescriptorSet* set) {
+static void initializeReifiedDescriptorSet(VkDescriptorPool pool, VkDescriptorSetLayout setLayout,
+                                           ReifiedDescriptorSet* set) {
     set->pendingWriteArrayRanges.clear();
 
     const auto& layoutInfo = *(as_goldfish_VkDescriptorSetLayout(setLayout)->layoutInfo);
@@ -73,8 +74,7 @@ static void initializeReifiedDescriptorSet(VkDescriptorPool pool, VkDescriptorSe
         set->bindingIsImmutableSampler[bindingIndex] =
             binding.descriptorCount > 0 &&
             (binding.descriptorType == VK_DESCRIPTOR_TYPE_SAMPLER ||
-             binding.descriptorType ==
-                 VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER) &&
+             binding.descriptorType == VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER) &&
             binding.pImmutableSamplers;
     }
 
@@ -185,7 +185,8 @@ void doEmulatedDescriptorWrite(const VkWriteDescriptorSet* write, ReifiedDescrip
     }
 }
 
-void doEmulatedDescriptorCopy(const VkCopyDescriptorSet* copy, const ReifiedDescriptorSet* src, ReifiedDescriptorSet* dst) {
+void doEmulatedDescriptorCopy(const VkCopyDescriptorSet* copy, const ReifiedDescriptorSet* src,
+                              ReifiedDescriptorSet* dst) {
     const DescriptorWriteTable& srcTable = src->allWrites;
     DescriptorWriteTable& dstTable = dst->allWrites;
 
@@ -214,14 +215,10 @@ void doEmulatedDescriptorCopy(const VkCopyDescriptorSet* copy, const ReifiedDesc
     }
 }
 
-void doEmulatedDescriptorImageInfoWriteFromTemplate(
-    VkDescriptorType descType,
-    uint32_t binding,
-    uint32_t dstArrayElement,
-    uint32_t count,
-    const VkDescriptorImageInfo* imageInfos,
-    ReifiedDescriptorSet* set) {
+void doEmulatedDescriptorImageInfoWriteFromTemplate(VkDescriptorType descType, uint32_t binding,
+                                                    uint32_t dstArrayElement, uint32_t count,
+                                                    const VkDescriptorImageInfo* imageInfos,
+                                                    ReifiedDescriptorSet* set) {
     DescriptorWriteTable& table = set->allWrites;
 
     uint32_t currBinding = binding;
@@ -239,14 +236,10 @@ void doEmulatedDescriptorImageInfoWriteFromTemplate(
     }
 }
 
-void doEmulatedDescriptorBufferInfoWriteFromTemplate(
-    VkDescriptorType descType,
-    uint32_t binding,
-    uint32_t dstArrayElement,
-    uint32_t count,
-    const VkDescriptorBufferInfo* bufferInfos,
-    ReifiedDescriptorSet* set) {
+void doEmulatedDescriptorBufferInfoWriteFromTemplate(VkDescriptorType descType, uint32_t binding,
+                                                     uint32_t dstArrayElement, uint32_t count,
+                                                     const VkDescriptorBufferInfo* bufferInfos,
+                                                     ReifiedDescriptorSet* set) {
     DescriptorWriteTable& table = set->allWrites;
 
     uint32_t currBinding = binding;
@@ -264,14 +257,10 @@ void doEmulatedDescriptorBufferInfoWriteFromTemplate(
     }
 }
 
-void doEmulatedDescriptorBufferViewWriteFromTemplate(
-    VkDescriptorType descType,
-    uint32_t binding,
-    uint32_t dstArrayElement,
-    uint32_t count,
-    const VkBufferView* bufferViews,
-    ReifiedDescriptorSet* set) {
+void doEmulatedDescriptorBufferViewWriteFromTemplate(VkDescriptorType descType, uint32_t binding,
+                                                     uint32_t dstArrayElement, uint32_t count,
+                                                     const VkBufferView* bufferViews,
+                                                     ReifiedDescriptorSet* set) {
     DescriptorWriteTable& table = set->allWrites;
 
     uint32_t currBinding = binding;
@@ -305,22 +294,19 @@ void doEmulatedDescriptorInlineUniformBlockFromTemplate(VkDescriptorType descTyp
 static bool isBindingFeasibleForAlloc(
     const DescriptorPoolAllocationInfo::DescriptorCountInfo& countInfo,
     const VkDescriptorSetLayoutBinding& binding) {
     if (binding.descriptorCount && (countInfo.type != binding.descriptorType)) {
         return false;
     }
 
-    uint32_t availDescriptorCount =
-        countInfo.descriptorCount - countInfo.used;
+    uint32_t availDescriptorCount = countInfo.descriptorCount - countInfo.used;
 
     if (availDescriptorCount < binding.descriptorCount) {
-        ALOGV("%s: Ran out of descriptors of type 0x%x. "
-              "Wanted %u from layout but "
-              "we only have %u free (total in pool: %u)\n", __func__,
-              binding.descriptorType,
-              binding.descriptorCount,
-              countInfo.descriptorCount - countInfo.used,
-              countInfo.descriptorCount);
+        ALOGV(
+            "%s: Ran out of descriptors of type 0x%x. "
+            "Wanted %u from layout but "
+            "we only have %u free (total in pool: %u)\n",
+            __func__, binding.descriptorType, binding.descriptorCount,
+            countInfo.descriptorCount - countInfo.used, countInfo.descriptorCount);
         return false;
     }
 
@@ -330,31 +316,27 @@
 static bool isBindingFeasibleForFree(
     const DescriptorPoolAllocationInfo::DescriptorCountInfo& countInfo,
     const VkDescriptorSetLayoutBinding& binding) {
     if (countInfo.type != binding.descriptorType) return false;
     if (countInfo.used < binding.descriptorCount) {
-        ALOGV("%s: Was a descriptor set double freed? "
-              "Ran out of descriptors of type 0x%x. "
-              "Wanted to free %u from layout but "
-              "we only have %u used (total in pool: %u)\n", __func__,
-              binding.descriptorType,
-              binding.descriptorCount,
-              countInfo.used,
-              countInfo.descriptorCount);
+        ALOGV(
+            "%s: Was a descriptor set double freed? "
+            "Ran out of descriptors of type 0x%x. "
+            "Wanted to free %u from layout but "
+            "we only have %u used (total in pool: %u)\n",
+            __func__, binding.descriptorType, binding.descriptorCount, countInfo.used,
+            countInfo.descriptorCount);
         return false;
     }
     return true;
 }
 
-static void allocBindingFeasible(
-    const VkDescriptorSetLayoutBinding& binding,
-    DescriptorPoolAllocationInfo::DescriptorCountInfo& poolState) {
+static void allocBindingFeasible(const VkDescriptorSetLayoutBinding& binding,
+                                 DescriptorPoolAllocationInfo::DescriptorCountInfo& poolState) {
     poolState.used += binding.descriptorCount;
 }
 
-static void freeBindingFeasible(
-    const VkDescriptorSetLayoutBinding& binding,
-    DescriptorPoolAllocationInfo::DescriptorCountInfo& poolState) {
+static void freeBindingFeasible(const VkDescriptorSetLayoutBinding& binding,
+                                DescriptorPoolAllocationInfo::DescriptorCountInfo& poolState) {
     poolState.used -= binding.descriptorCount;
 }
 
@@ -366,11 +348,11 @@ static VkResult validateDescriptorSetAllocation(const VkDescriptorSetAllocateInf
     auto setsAvailable = poolInfo->maxSets - poolInfo->usedSets;
 
     if (setsAvailable < pAllocateInfo->descriptorSetCount) {
-        ALOGV("%s: Error: VkDescriptorSetAllocateInfo wants %u sets "
-              "but we only have %u available. "
-              "Bailing with VK_ERROR_OUT_OF_POOL_MEMORY.\n", __func__,
-              pAllocateInfo->descriptorSetCount,
-              setsAvailable);
+        ALOGV(
+            "%s: Error: VkDescriptorSetAllocateInfo wants %u sets "
+            "but we only have %u available. "
+            "Bailing with VK_ERROR_OUT_OF_POOL_MEMORY.\n",
+            __func__, pAllocateInfo->descriptorSetCount, setsAvailable);
         return VK_ERROR_OUT_OF_POOL_MEMORY;
     }
 
@@ -381,11 +363,13 @@
 
     for (uint32_t i = 0; i < pAllocateInfo->descriptorSetCount; ++i) {
         if (!pAllocateInfo->pSetLayouts[i]) {
-            ALOGV("%s: Error: Tried to allocate a descriptor set with null set layout.\n", __func__);
+            ALOGV("%s: Error: Tried to allocate a descriptor set with null set layout.\n",
+                  __func__);
             return VK_ERROR_INITIALIZATION_FAILED;
         }
 
-        auto setLayoutInfo = as_goldfish_VkDescriptorSetLayout(pAllocateInfo->pSetLayouts[i])->layoutInfo;
+        auto setLayoutInfo =
+            as_goldfish_VkDescriptorSetLayout(pAllocateInfo->pSetLayouts[i])->layoutInfo;
         if (!setLayoutInfo) {
             return VK_ERROR_INITIALIZATION_FAILED;
         }
@@ -423,7 +407,8 @@ void applyDescriptorSetAllocation(VkDescriptorPool pool, VkDescriptorSetLayout s
     }
 }
 
-void removeDescriptorSetAllocation(VkDescriptorPool pool, const std::vector<VkDescriptorSetLayoutBinding>& bindings) {
+void removeDescriptorSetAllocation(VkDescriptorPool pool,
+                                   const std::vector<VkDescriptorSetLayoutBinding>& bindings) {
     auto allocInfo = as_goldfish_VkDescriptorPool(pool)->allocInfo;
 
     if (0 == allocInfo->usedSets) {
@@ -442,7 +427,8 @@ void removeDescriptorSetAllocation(VkDescriptorPool pool, const std::vector<VkDe
     }
 }
 
-void fillDescriptorSetInfoForPool(VkDescriptorPool pool, VkDescriptorSetLayout setLayout, VkDescriptorSet set) {
+void fillDescriptorSetInfoForPool(VkDescriptorPool pool, VkDescriptorSetLayout setLayout,
+                                  VkDescriptorSet set) {
     DescriptorPoolAllocationInfo* allocInfo = as_goldfish_VkDescriptorPool(pool)->allocInfo;
 
     ReifiedDescriptorSet* newReified = new ReifiedDescriptorSet;
@@ -457,7 +443,8 @@ void fillDescriptorSetInfoForPool(VkDescriptorPool pool, VkDescriptorSetLayout s
     initializeReifiedDescriptorSet(pool, setLayout, newReified);
 }
 
-VkResult validateAndApplyVirtualDescriptorSetAllocation(const VkDescriptorSetAllocateInfo* pAllocateInfo, VkDescriptorSet* pSets) {
+VkResult validateAndApplyVirtualDescriptorSetAllocation(
+    const VkDescriptorSetAllocateInfo* pAllocateInfo, VkDescriptorSet* pSets) {
     VkResult validateRes = validateDescriptorSetAllocation(pAllocateInfo);
 
     if (validateRes != VK_SUCCESS) return validateRes;
@@ -470,15 +457,15 @@ VkResult validateAndApplyVirtualDescriptorSetAllocation(const VkDescriptorSetAll
     DescriptorPoolAllocationInfo* allocInfo = as_goldfish_VkDescriptorPool(pool)->allocInfo;
 
     if (allocInfo->freePoolIds.size() < pAllocateInfo->descriptorSetCount) {
-        ALOGE("%s: FATAL: Somehow out of descriptor pool IDs. Wanted %u IDs but only have %u free IDs remaining. The count for maxSets was %u and used was %u\n", __func__,
-              pAllocateInfo->descriptorSetCount,
-              (uint32_t)allocInfo->freePoolIds.size(),
-              allocInfo->maxSets,
-              allocInfo->usedSets);
+        ALOGE(
+            "%s: FATAL: Somehow out of descriptor pool IDs. Wanted %u IDs but only have %u free "
+            "IDs remaining. The count for maxSets was %u and used was %u\n",
+            __func__, pAllocateInfo->descriptorSetCount, (uint32_t)allocInfo->freePoolIds.size(),
+            allocInfo->maxSets, allocInfo->usedSets);
         abort();
     }
 
-    for (uint32_t i = 0 ; i < pAllocateInfo->descriptorSetCount; ++i) {
+    for (uint32_t i = 0; i < pAllocateInfo->descriptorSetCount; ++i) {
        uint64_t id = allocInfo->freePoolIds.back();
        allocInfo->freePoolIds.pop_back();
 
@@ -498,7 +485,8 @@ bool removeDescriptorSetFromPool(VkDescriptorSet set, bool usePoolIds) {
     DescriptorPoolAllocationInfo* allocInfo = as_goldfish_VkDescriptorPool(pool)->allocInfo;
 
     if (usePoolIds) {
-        // Look for the set's pool Id in the pool. If not found, then this wasn't really allocated, and bail.
+        // Look for the set's pool Id in the pool. If not found, then this wasn't really allocated,
+        // and bail.
         if (allocInfo->allocedPoolIds.find(reified->poolId) == allocInfo->allocedPoolIds.end()) {
             return false;
         }
@@ -522,7 +510,7 @@ std::vector<VkDescriptorSet> clearDescriptorPool(VkDescriptorPool pool, bool use
         toClear.push_back(set);
     }
 
-    for (auto set: toClear) {
+    for (auto set : toClear) {
         removeDescriptorSetFromPool(set, usePoolIds);
     }
 
@@ -14,13 +14,13 @@
 // limitations under the License.
 #pragma once
 
-#include "aemu/base/containers/EntityManager.h"
-
 #include <vulkan/vulkan.h>
 
 #include <unordered_set>
 #include <vector>
 
+#include "aemu/base/containers/EntityManager.h"
+
 namespace gfxstream {
 namespace vk {
 
@@ -37,7 +37,7 @@ struct DescriptorWrite {
     DescriptorWriteType type;
     VkDescriptorType descriptorType;
 
-    uint32_t dstArrayElement; // Only used for inlineUniformBlock and accelerationStructure.
+    uint32_t dstArrayElement;  // Only used for inlineUniformBlock and accelerationStructure.
 
     union {
         VkDescriptorImageInfo imageInfo;
@@ -105,7 +105,8 @@ struct DescriptorSetLayoutInfo {
 
 void clearReifiedDescriptorSet(ReifiedDescriptorSet* set);
 
-void initDescriptorWriteTable(const std::vector<VkDescriptorSetLayoutBinding>& layoutBindings, DescriptorWriteTable& table);
+void initDescriptorWriteTable(const std::vector<VkDescriptorSetLayoutBinding>& layoutBindings,
+                              DescriptorWriteTable& table);
 
 bool isDescriptorTypeImageInfo(VkDescriptorType descType);
 bool isDescriptorTypeBufferInfo(VkDescriptorType descType);
@@ -114,31 +115,23 @@ bool isDescriptorTypeInlineUniformBlock(VkDescriptorType descType);
 bool isDescriptorTypeAccelerationStructure(VkDescriptorType descType);
 
 void doEmulatedDescriptorWrite(const VkWriteDescriptorSet* write, ReifiedDescriptorSet* toWrite);
-void doEmulatedDescriptorCopy(const VkCopyDescriptorSet* copy, const ReifiedDescriptorSet* src, ReifiedDescriptorSet* dst);
+void doEmulatedDescriptorCopy(const VkCopyDescriptorSet* copy, const ReifiedDescriptorSet* src,
+                              ReifiedDescriptorSet* dst);
 
-void doEmulatedDescriptorImageInfoWriteFromTemplate(
-    VkDescriptorType descType,
-    uint32_t binding,
-    uint32_t dstArrayElement,
-    uint32_t count,
-    const VkDescriptorImageInfo* imageInfos,
-    ReifiedDescriptorSet* set);
+void doEmulatedDescriptorImageInfoWriteFromTemplate(VkDescriptorType descType, uint32_t binding,
+                                                    uint32_t dstArrayElement, uint32_t count,
+                                                    const VkDescriptorImageInfo* imageInfos,
+                                                    ReifiedDescriptorSet* set);
 
-void doEmulatedDescriptorBufferInfoWriteFromTemplate(
-    VkDescriptorType descType,
-    uint32_t binding,
-    uint32_t dstArrayElement,
-    uint32_t count,
-    const VkDescriptorBufferInfo* bufferInfos,
-    ReifiedDescriptorSet* set);
+void doEmulatedDescriptorBufferInfoWriteFromTemplate(VkDescriptorType descType, uint32_t binding,
+                                                     uint32_t dstArrayElement, uint32_t count,
+                                                     const VkDescriptorBufferInfo* bufferInfos,
+                                                     ReifiedDescriptorSet* set);
 
-void doEmulatedDescriptorBufferViewWriteFromTemplate(
-    VkDescriptorType descType,
-    uint32_t binding,
-    uint32_t dstArrayElement,
-    uint32_t count,
-    const VkBufferView* bufferViews,
-    ReifiedDescriptorSet* set);
+void doEmulatedDescriptorBufferViewWriteFromTemplate(VkDescriptorType descType, uint32_t binding,
+                                                     uint32_t dstArrayElement, uint32_t count,
+                                                     const VkBufferView* bufferViews,
+                                                     ReifiedDescriptorSet* set);
 
 void doEmulatedDescriptorInlineUniformBlockFromTemplate(VkDescriptorType descType, uint32_t binding,
                                                         uint32_t dstArrayElement, uint32_t count,
@@ -146,8 +139,10 @@ void doEmulatedDescriptorInlineUniformBlockFromTemplate(VkDescriptorType descTyp
                                                         ReifiedDescriptorSet* set);
 
 void applyDescriptorSetAllocation(VkDescriptorPool pool, VkDescriptorSetLayout setLayout);
-void fillDescriptorSetInfoForPool(VkDescriptorPool pool, VkDescriptorSetLayout setLayout, VkDescriptorSet set);
-VkResult validateAndApplyVirtualDescriptorSetAllocation(const VkDescriptorSetAllocateInfo* pAllocateInfo, VkDescriptorSet* pSets);
+void fillDescriptorSetInfoForPool(VkDescriptorPool pool, VkDescriptorSetLayout setLayout,
+                                  VkDescriptorSet set);
+VkResult validateAndApplyVirtualDescriptorSetAllocation(
+    const VkDescriptorSetAllocateInfo* pAllocateInfo, VkDescriptorSet* pSets);
 
 // Returns false if set wasn't found in its pool.
 bool removeDescriptorSetFromPool(VkDescriptorSet set, bool usePoolIds);
@@ -45,10 +45,9 @@ CoherentMemory::CoherentMemory(GoldfishAddressSpaceBlockPtr block, uint64_t gpuA
                                VkDevice device, VkDeviceMemory memory)
     : mSize(size), mBlock(block), mDevice(device), mMemory(memory) {
     void* address = block->mmap(gpuAddr);
-    mAllocator =
-        std::make_unique<gfxstream::guest::SubAllocator>(address, mSize, kLargestPageSize);
+    mAllocator = std::make_unique<gfxstream::guest::SubAllocator>(address, mSize, kLargestPageSize);
 }
-#endif // defined(__ANDROID__)
+#endif  // defined(__ANDROID__)
 
 CoherentMemory::~CoherentMemory() {
     ResourceTracker::getThreadLocalEncoder()->vkFreeMemorySyncGOOGLE(mDevice, mMemory, nullptr,
@@ -47,7 +47,7 @@ class CoherentMemory {
 #if defined(__ANDROID__)
     CoherentMemory(GoldfishAddressSpaceBlockPtr block, uint64_t gpuAddr, uint64_t size,
                    VkDevice device, VkDeviceMemory memory);
-#endif // defined(__ANDROID__)
+#endif  // defined(__ANDROID__)
 
     ~CoherentMemory();
 
@@ -23,6 +23,7 @@
 #include "Resources.h"
 #include "VkEncoder.h"
 #include "aemu/base/AlignedBuf.h"
+#include "gfxstream_vk_private.h"
 #include "goldfish_address_space.h"
 #include "goldfish_vk_private_defs.h"
 #include "util.h"
@@ -200,10 +201,10 @@ struct CommandBufferPendingDescriptorSets {
     std::unordered_set<VkDescriptorSet> sets;
 };
 
-#define HANDLE_REGISTER_IMPL_IMPL(type) \
-    void ResourceTracker::register_##type(type obj) { \
-        AutoLock<RecursiveLock> lock(mLock); \
-        info_##type[obj] = type##_Info(); \
+#define HANDLE_REGISTER_IMPL_IMPL(type)               \
+    void ResourceTracker::register_##type(type obj) { \
+        AutoLock<RecursiveLock> lock(mLock);          \
+        info_##type[obj] = type##_Info();             \
     }
 
 #define HANDLE_UNREGISTER_IMPL_IMPL(type) \
@@ -286,7 +287,8 @@ bool descriptorBindingIsImmutableSampler(VkDescriptorSet dstSet, uint32_t dstBin
     return as_goldfish_VkDescriptorSet(dstSet)->reified->bindingIsImmutableSampler[dstBinding];
 }
 
-VkDescriptorImageInfo ResourceTracker::filterNonexistentSampler(const VkDescriptorImageInfo& inputInfo) {
+VkDescriptorImageInfo ResourceTracker::filterNonexistentSampler(
+    const VkDescriptorImageInfo& inputInfo) {
     VkSampler sampler = inputInfo.sampler;
 
     VkDescriptorImageInfo res = inputInfo;
@@ -300,9 +302,11 @@ VkDescriptorImageInfo ResourceTracker::filterNonexistentSampler(const VkDescript
     return res;
 }
 
-void ResourceTracker::emitDeviceMemoryReport(VkDevice_Info info, VkDeviceMemoryReportEventTypeEXT type,
-                                             uint64_t memoryObjectId, VkDeviceSize size, VkObjectType objectType,
-                                             uint64_t objectHandle, uint32_t heapIndex) {
+void ResourceTracker::emitDeviceMemoryReport(VkDevice_Info info,
+                                             VkDeviceMemoryReportEventTypeEXT type,
+                                             uint64_t memoryObjectId, VkDeviceSize size,
+                                             VkObjectType objectType, uint64_t objectHandle,
+                                             uint32_t heapIndex) {
     if (info.deviceMemoryReportCallbacks.empty()) return;
 
     const VkDeviceMemoryReportCallbackDataEXT callbackData = {
@@ -640,9 +644,10 @@ VkResult addImageBufferCollectionConstraintsFUCHSIA(
         createInfoDup.pNext = nullptr;
         enc->vkGetLinearImageLayout2GOOGLE(device, &createInfoDup, &offset, &rowPitchAlignment,
                                            true /* do lock */);
-        ALOGD("vkGetLinearImageLayout2GOOGLE: format %d offset %lu "
-              "rowPitchAlignment = %lu",
-              (int)createInfo->format, offset, rowPitchAlignment);
+        ALOGD(
+            "vkGetLinearImageLayout2GOOGLE: format %d offset %lu "
+            "rowPitchAlignment = %lu",
+            (int)createInfo->format, offset, rowPitchAlignment);
     }
 
     imageConstraints.min_coded_width = createInfo->extent.width;
@@ -721,8 +726,8 @@ void transformExternalResourceMemoryDedicatedRequirementsForGuest(
dedicatedReqs->requiresDedicatedAllocation = VK_TRUE;
}

void ResourceTracker::setMemoryRequirementsForSysmemBackedImage(VkImage image,
VkMemoryRequirements* pMemoryRequirements) {
void ResourceTracker::transformImageMemoryRequirementsForGuestLocked(VkImage image,
VkMemoryRequirements* reqs) {
#ifdef VK_USE_PLATFORM_FUCHSIA
auto it = info_VkImage.find(image);
if (it == info_VkImage.end()) return;
@@ -730,21 +735,25 @@ void ResourceTracker::setMemoryRequirementsForSysmemBackedImage(VkImage image,
if (info.isSysmemBackedMemory) {
auto width = info.createInfo.extent.width;
auto height = info.createInfo.extent.height;
pMemoryRequirements->size = width * height * 4;
reqs->size = width * height * 4;
}
#elif defined(__linux__) && !defined(VK_USE_PLATFORM_ANDROID_KHR)
auto it = info_VkImage.find(image);
if (it == info_VkImage.end()) return;
auto& info = it->second;
if (info.isWsiImage) {
static const uint32_t kColorBufferBpp = 4;
reqs->size = kColorBufferBpp * info.createInfo.extent.width * info.createInfo.extent.height;
}
#else
// Bypass "unused parameter" checks.
(void)image;
(void)pMemoryRequirements;
(void)reqs;
#endif
}

void ResourceTracker::transformImageMemoryRequirementsForGuestLocked(VkImage image,
VkMemoryRequirements* reqs) {
setMemoryRequirementsForSysmemBackedImage(image, reqs);
}

CoherentMemoryPtr ResourceTracker::freeCoherentMemoryLocked(VkDeviceMemory memory, VkDeviceMemory_Info& info) {
CoherentMemoryPtr ResourceTracker::freeCoherentMemoryLocked(VkDeviceMemory memory,
VkDeviceMemory_Info& info) {
if (info.coherentMemory && info.ptr) {
if (info.coherentMemory->getDeviceMemory() != memory) {
delete_goldfish_VkDeviceMemory(memory);
@@ -1341,7 +1350,8 @@ void ResourceTracker::setDeviceInfo(VkDevice device, VkPhysicalDevice physdev,
void ResourceTracker::setDeviceMemoryInfo(VkDevice device, VkDeviceMemory memory,
VkDeviceSize allocationSize, uint8_t* ptr,
uint32_t memoryTypeIndex, AHardwareBuffer* ahw,
bool imported, zx_handle_t vmoHandle) {
bool imported, zx_handle_t vmoHandle,
VirtGpuBlobPtr blobPtr) {
AutoLock<RecursiveLock> lock(mLock);
auto& info = info_VkDeviceMemory[memory];

@@ -1354,6 +1364,7 @@ void ResourceTracker::setDeviceMemoryInfo(VkDevice device, VkDeviceMemory memory
#endif
info.imported = imported;
info.vmoHandle = vmoHandle;
info.blobPtr = blobPtr;
}

void ResourceTracker::setImageInfo(VkImage image, VkDevice device,
@@ -1763,6 +1774,7 @@ VkResult ResourceTracker::on_vkEnumerateDeviceExtensionProperties(
"VK_EXT_scalar_block_layout",
"VK_KHR_descriptor_update_template",
"VK_KHR_storage_buffer_storage_class",
"VK_EXT_depth_clip_enable",
#if defined(VK_USE_PLATFORM_ANDROID_KHR) || defined(__linux__)
"VK_KHR_external_semaphore",
"VK_KHR_external_semaphore_fd",
@@ -1772,7 +1784,7 @@ VkResult ResourceTracker::on_vkEnumerateDeviceExtensionProperties(
"VK_KHR_external_fence_fd",
"VK_EXT_device_memory_report",
#endif
#if !defined(VK_USE_PLATFORM_ANDROID_KHR) && defined(__linux__)
#if defined(__linux__) && !defined(VK_USE_PLATFORM_ANDROID_KHR)
"VK_KHR_create_renderpass2",
"VK_KHR_imageless_framebuffer",
#endif
@@ -1847,9 +1859,11 @@ VkResult ResourceTracker::on_vkEnumerateDeviceExtensionProperties(
bool win32ExtMemAvailable = getHostDeviceExtensionIndex("VK_KHR_external_memory_win32") != -1;
bool posixExtMemAvailable = getHostDeviceExtensionIndex("VK_KHR_external_memory_fd") != -1;
bool moltenVkExtAvailable = getHostDeviceExtensionIndex("VK_MVK_moltenvk") != -1;
bool qnxExtMemAvailable =
getHostDeviceExtensionIndex("VK_QNX_external_memory_screen_buffer") != -1;

bool hostHasExternalMemorySupport =
win32ExtMemAvailable || posixExtMemAvailable || moltenVkExtAvailable;
win32ExtMemAvailable || posixExtMemAvailable || moltenVkExtAvailable || qnxExtMemAvailable;

if (hostHasExternalMemorySupport) {
#ifdef VK_USE_PLATFORM_ANDROID_KHR
@@ -2029,7 +2043,19 @@ VkResult ResourceTracker::on_vkEnumeratePhysicalDevices(void* context, VkResult,
}

void ResourceTracker::on_vkGetPhysicalDeviceProperties(void*, VkPhysicalDevice,
VkPhysicalDeviceProperties*) {}
VkPhysicalDeviceProperties* pProperties) {
#if defined(__linux__) && !defined(VK_USE_PLATFORM_ANDROID_KHR)
if (pProperties) {
if (VK_PHYSICAL_DEVICE_TYPE_CPU == pProperties->deviceType) {
/* For Linux guests: even if the host driver reports DEVICE_TYPE_CPU,
* override this to VIRTUAL_GPU; otherwise Linux DRM interfaces
* will take unexpected code paths to deal with a "software" driver.
*/
pProperties->deviceType = VK_PHYSICAL_DEVICE_TYPE_VIRTUAL_GPU;
}
}
#endif
}

void ResourceTracker::on_vkGetPhysicalDeviceFeatures2(void*, VkPhysicalDevice,
VkPhysicalDeviceFeatures2* pFeatures) {
@@ -2048,7 +2074,8 @@ void ResourceTracker::on_vkGetPhysicalDeviceFeatures2KHR(void* context,
on_vkGetPhysicalDeviceFeatures2(context, physicalDevice, pFeatures);
}

void ResourceTracker::on_vkGetPhysicalDeviceProperties2(void*, VkPhysicalDevice,
void ResourceTracker::on_vkGetPhysicalDeviceProperties2(void* context,
VkPhysicalDevice physicalDevice,
VkPhysicalDeviceProperties2* pProperties) {
if (pProperties) {
VkPhysicalDeviceDeviceMemoryReportFeaturesEXT* memoryReportFeaturesEXT =
@@ -2056,6 +2083,7 @@ void ResourceTracker::on_vkGetPhysicalDeviceProperties2(void*, VkPhysicalDevice,
if (memoryReportFeaturesEXT) {
memoryReportFeaturesEXT->deviceMemoryReport = VK_TRUE;
}
on_vkGetPhysicalDeviceProperties(context, physicalDevice, &pProperties->properties);
}
}
@@ -2142,6 +2170,12 @@ void ResourceTracker::on_vkDestroyDevice_pre(void* context, VkDevice device,
}
}

#if defined(VK_USE_PLATFORM_ANDROID_KHR) || defined(__linux__)
void updateMemoryTypeBits(uint32_t* memoryTypeBits, uint32_t memoryIndex) {
*memoryTypeBits = 1u << memoryIndex;
}
#endif

#ifdef VK_USE_PLATFORM_ANDROID_KHR

VkResult ResourceTracker::on_vkGetAndroidHardwareBufferPropertiesANDROID(
@@ -2843,6 +2877,33 @@ VkResult ResourceTracker::on_vkGetBufferCollectionPropertiesFUCHSIA(
}
#endif

static uint32_t getVirglFormat(VkFormat vkFormat) {
uint32_t virglFormat = 0;

switch (vkFormat) {
case VK_FORMAT_R8G8B8A8_SINT:
case VK_FORMAT_R8G8B8A8_UNORM:
case VK_FORMAT_R8G8B8A8_SRGB:
case VK_FORMAT_R8G8B8A8_SNORM:
case VK_FORMAT_R8G8B8A8_SSCALED:
case VK_FORMAT_R8G8B8A8_USCALED:
virglFormat = VIRGL_FORMAT_R8G8B8A8_UNORM;
break;
case VK_FORMAT_B8G8R8A8_SINT:
case VK_FORMAT_B8G8R8A8_UNORM:
case VK_FORMAT_B8G8R8A8_SRGB:
case VK_FORMAT_B8G8R8A8_SNORM:
case VK_FORMAT_B8G8R8A8_SSCALED:
case VK_FORMAT_B8G8R8A8_USCALED:
virglFormat = VIRGL_FORMAT_B8G8R8A8_UNORM;
break;
default:
break;
}

return virglFormat;
}

CoherentMemoryPtr ResourceTracker::createCoherentMemory(
VkDevice device, VkDeviceMemory mem, const VkMemoryAllocateInfo& hostAllocationInfo,
VkEncoder* enc, VkResult& res) {
@@ -3221,6 +3282,13 @@ VkResult ResourceTracker::on_vkAllocateMemory(void* context, VkResult input_resu
const void* importAhbInfoPtr = nullptr;
#endif

#if defined(__linux__) && !defined(VK_USE_PLATFORM_ANDROID_KHR)
const VkImportMemoryFdInfoKHR* importFdInfoPtr =
vk_find_struct<VkImportMemoryFdInfoKHR>(pAllocateInfo);
#else
const VkImportMemoryFdInfoKHR* importFdInfoPtr = nullptr;
#endif

#ifdef VK_USE_PLATFORM_FUCHSIA
const VkImportMemoryBufferCollectionFUCHSIA* importBufferCollectionInfoPtr =
vk_find_struct<VkImportMemoryBufferCollectionFUCHSIA>(pAllocateInfo);
@@ -3268,9 +3336,11 @@ VkResult ResourceTracker::on_vkAllocateMemory(void* context, VkResult input_resu
// State needed for import/export.
bool exportAhb = false;
bool exportVmo = false;
bool exportDmabuf = false;
bool importAhb = false;
bool importBufferCollection = false;
bool importVmo = false;
bool importDmabuf = false;
(void)exportVmo;

// Even if we export allocate, the underlying operation
@@ -3291,6 +3361,9 @@ VkResult ResourceTracker::on_vkAllocateMemory(void* context, VkResult input_resu
exportVmo = exportAllocateInfoPtr->handleTypes &
VK_EXTERNAL_MEMORY_HANDLE_TYPE_ZIRCON_VMO_BIT_FUCHSIA;
#endif // VK_USE_PLATFORM_FUCHSIA
exportDmabuf =
exportAllocateInfoPtr->handleTypes & (VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT |
VK_EXTERNAL_MEMORY_HANDLE_TYPE_DMA_BUF_BIT_EXT);
} else if (importAhbInfoPtr) {
importAhb = true;
} else if (importBufferCollectionInfoPtr) {
@@ -3298,7 +3371,13 @@ VkResult ResourceTracker::on_vkAllocateMemory(void* context, VkResult input_resu
} else if (importVmoInfoPtr) {
importVmo = true;
}
bool isImport = importAhb || importBufferCollection || importVmo;

if (importFdInfoPtr) {
importDmabuf =
(importFdInfoPtr->handleType & (VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT |
VK_EXTERNAL_MEMORY_HANDLE_TYPE_DMA_BUF_BIT_EXT));
}
bool isImport = importAhb || importBufferCollection || importVmo || importDmabuf;

#if defined(VK_USE_PLATFORM_ANDROID_KHR)
if (exportAhb) {
@@ -3671,7 +3750,96 @@ VkResult ResourceTracker::on_vkAllocateMemory(void* context, VkResult input_resu
}
#endif

if (ahw || !requestedMemoryIsHostVisible) {
VirtGpuBlobPtr colorBufferBlob = nullptr;
#if defined(__linux__) && !defined(VK_USE_PLATFORM_ANDROID_KHR)
if (exportDmabuf) {
VirtGpuDevice* instance = VirtGpuDevice::getInstance();
// TODO: any special action for VK_STRUCTURE_TYPE_WSI_MEMORY_ALLOCATE_INFO_MESA? Can mark
// special state if needed.
// const wsi_memory_allocate_info* wsiAllocateInfoPtr =
//     vk_find_struct<wsi_memory_allocate_info>(pAllocateInfo);
bool hasDedicatedImage =
dedicatedAllocInfoPtr && (dedicatedAllocInfoPtr->image != VK_NULL_HANDLE);
bool hasDedicatedBuffer =
dedicatedAllocInfoPtr && (dedicatedAllocInfoPtr->buffer != VK_NULL_HANDLE);
if (!hasDedicatedImage && !hasDedicatedBuffer) {
ALOGE(
"%s: dma-buf exportable memory requires dedicated Image or Buffer information.\n",
__func__);
return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}

if (hasDedicatedImage) {
VkImageCreateInfo imageCreateInfo;
{
AutoLock<RecursiveLock> lock(mLock);

auto it = info_VkImage.find(dedicatedAllocInfoPtr->image);
if (it == info_VkImage.end()) return VK_ERROR_INITIALIZATION_FAILED;
const auto& imageInfo = it->second;

imageCreateInfo = imageInfo.createInfo;
}
uint32_t virglFormat = gfxstream::vk::getVirglFormat(imageCreateInfo.format);
if (virglFormat == 0) {
ALOGE("%s: Unsupported VK format for colorBuffer, vkFormat: 0x%x", __func__,
imageCreateInfo.format);
return VK_ERROR_FORMAT_NOT_SUPPORTED;
}
colorBufferBlob = instance->createVirglBlob(imageCreateInfo.extent.width,
imageCreateInfo.extent.height, virglFormat);
if (!colorBufferBlob) {
ALOGE("%s: Failed to create colorBuffer resource for Image memory\n", __func__);
return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}
if (0 != colorBufferBlob->wait()) {
ALOGE("%s: Failed to wait for colorBuffer resource for Image memory\n", __func__);
return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}
}

if (hasDedicatedBuffer) {
VkBufferCreateInfo bufferCreateInfo;
{
AutoLock<RecursiveLock> lock(mLock);

auto it = info_VkBuffer.find(dedicatedAllocInfoPtr->buffer);
if (it == info_VkBuffer.end()) return VK_ERROR_INITIALIZATION_FAILED;
const auto& bufferInfo = it->second;
bufferCreateInfo = bufferInfo.createInfo;
}
colorBufferBlob = instance->createVirglBlob(bufferCreateInfo.size / 4, 1,
VIRGL_FORMAT_R8G8B8A8_UNORM);
if (!colorBufferBlob) {
ALOGE("%s: Failed to create colorBuffer resource for Buffer memory\n", __func__);
return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}
if (0 != colorBufferBlob->wait()) {
ALOGE("%s: Failed to wait for colorBuffer resource for Buffer memory\n", __func__);
return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}
}
}

if (importDmabuf) {
VirtGpuExternalHandle importHandle = {};
importHandle.osHandle = importFdInfoPtr->fd;
importHandle.type = kMemHandleDmabuf;

auto instance = VirtGpuDevice::getInstance();
colorBufferBlob = instance->importBlob(importHandle);
if (!colorBufferBlob) {
ALOGE("%s: Failed to import colorBuffer resource\n", __func__);
return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}
}

if (colorBufferBlob) {
importCbInfo.colorBuffer = colorBufferBlob->getResourceHandle();
vk_append_struct(&structChainIter, &importCbInfo);
}
#endif

if (ahw || colorBufferBlob || !requestedMemoryIsHostVisible) {
input_result =
enc->vkAllocateMemory(device, &finalAllocInfo, pAllocator, pMemory, true /* do lock */);
@@ -3679,7 +3847,7 @@ VkResult ResourceTracker::on_vkAllocateMemory(void* context, VkResult input_resu

VkDeviceSize allocationSize = finalAllocInfo.allocationSize;
setDeviceMemoryInfo(device, *pMemory, 0, nullptr, finalAllocInfo.memoryTypeIndex, ahw,
isImport, vmo_handle);
isImport, vmo_handle, colorBufferBlob);

_RETURN_SCUCCESS_WITH_DEVICE_MEMORY_REPORT;
}
@@ -3714,7 +3882,7 @@ VkResult ResourceTracker::on_vkAllocateMemory(void* context, VkResult input_resu

setDeviceMemoryInfo(device, *pMemory, finalAllocInfo.allocationSize,
reinterpret_cast<uint8_t*>(addr), finalAllocInfo.memoryTypeIndex,
/*ahw=*/nullptr, isImport, vmo_handle);
/*ahw=*/nullptr, isImport, vmo_handle, /*blobPtr=*/nullptr);
return VK_SUCCESS;
}
#endif
@@ -3857,11 +4025,11 @@ void ResourceTracker::transformImageMemoryRequirements2ForGuest(VkImage image,
auto& info = it->second;

if (!info.external || !info.externalCreateInfo.handleTypes) {
setMemoryRequirementsForSysmemBackedImage(image, &reqs2->memoryRequirements);
transformImageMemoryRequirementsForGuestLocked(image, &reqs2->memoryRequirements);
return;
}

setMemoryRequirementsForSysmemBackedImage(image, &reqs2->memoryRequirements);
transformImageMemoryRequirementsForGuestLocked(image, &reqs2->memoryRequirements);

VkMemoryDedicatedRequirements* dedicatedReqs =
vk_find_struct<VkMemoryDedicatedRequirements>(reqs2);
@@ -3909,11 +4077,29 @@ VkResult ResourceTracker::on_vkCreateImage(void* context, VkResult, VkDevice dev

const VkExternalMemoryImageCreateInfo* extImgCiPtr =
vk_find_struct<VkExternalMemoryImageCreateInfo>(pCreateInfo);

if (extImgCiPtr) {
localExtImgCi = vk_make_orphan_copy(*extImgCiPtr);
vk_append_struct(&structChainIter, &localExtImgCi);
}

bool isWsiImage = false;

#if defined(__linux__) && !defined(VK_USE_PLATFORM_ANDROID_KHR)
if (extImgCiPtr &&
(extImgCiPtr->handleTypes & VK_EXTERNAL_MEMORY_HANDLE_TYPE_DMA_BUF_BIT_EXT)) {
// Assumes that a handleType with DMA_BUF_BIT indicates creation of an
// image for WSI use; no other external dma_buf usage is supported.
isWsiImage = true;
// Must be linear. Otherwise querying stride and other properties
// can be implementation-dependent.
localCreateInfo.tiling = VK_IMAGE_TILING_LINEAR;
if (gfxstream::vk::getVirglFormat(localCreateInfo.format) == 0) {
localCreateInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
}
}
#endif

#ifdef VK_USE_PLATFORM_ANDROID_KHR
VkNativeBufferANDROID localAnb;
const VkNativeBufferANDROID* anbInfoPtr = vk_find_struct<VkNativeBufferANDROID>(pCreateInfo);
@@ -4085,13 +4271,16 @@ VkResult ResourceTracker::on_vkCreateImage(void* context, VkResult, VkDevice dev
}
#endif

info.isWsiImage = isWsiImage;

// Delete the `protocolVersion` check when the goldfish drivers are gone.
#ifdef VK_USE_PLATFORM_ANDROID_KHR
#if defined(VK_USE_PLATFORM_ANDROID_KHR) || defined(__linux__)
if (mCaps.vulkanCapset.colorBufferMemoryIndex == 0xFFFFFFFF) {
mCaps.vulkanCapset.colorBufferMemoryIndex = getColorBufferMemoryIndex(context, device);
}
if (extImgCiPtr && (extImgCiPtr->handleTypes &
VK_EXTERNAL_MEMORY_HANDLE_TYPE_ANDROID_HARDWARE_BUFFER_BIT_ANDROID)) {
if (isWsiImage ||
(extImgCiPtr && (extImgCiPtr->handleTypes &
VK_EXTERNAL_MEMORY_HANDLE_TYPE_ANDROID_HARDWARE_BUFFER_BIT_ANDROID))) {
updateMemoryTypeBits(&memReqs.memoryTypeBits, mCaps.vulkanCapset.colorBufferMemoryIndex);
}
#endif
@@ -4981,7 +5170,6 @@ VkResult ResourceTracker::on_vkCreateBuffer(void* context, VkResult, VkDevice de
vk_append_struct(&structChainIter, &localExtBufCi);
}


VkBufferOpaqueCaptureAddressCreateInfo localCapAddrCi;
const VkBufferOpaqueCaptureAddressCreateInfo* pCapAddrCi =
vk_find_struct<VkBufferOpaqueCaptureAddressCreateInfo>(pCreateInfo);
@@ -5057,12 +5245,14 @@ VkResult ResourceTracker::on_vkCreateBuffer(void* context, VkResult, VkDevice de

if (res != VK_SUCCESS) return res;

#ifdef VK_USE_PLATFORM_ANDROID_KHR
#if defined(VK_USE_PLATFORM_ANDROID_KHR) || defined(__linux__)
if (mCaps.vulkanCapset.colorBufferMemoryIndex == 0xFFFFFFFF) {
mCaps.vulkanCapset.colorBufferMemoryIndex = getColorBufferMemoryIndex(context, device);
}
if (extBufCiPtr && (extBufCiPtr->handleTypes &
VK_EXTERNAL_MEMORY_HANDLE_TYPE_ANDROID_HARDWARE_BUFFER_BIT_ANDROID)) {
if (extBufCiPtr &&
((extBufCiPtr->handleTypes &
VK_EXTERNAL_MEMORY_HANDLE_TYPE_ANDROID_HARDWARE_BUFFER_BIT_ANDROID) ||
(extBufCiPtr->handleTypes & VK_EXTERNAL_MEMORY_HANDLE_TYPE_DMA_BUF_BIT_EXT))) {
updateMemoryTypeBits(&memReqs.memoryTypeBits, mCaps.vulkanCapset.colorBufferMemoryIndex);
}
#endif
@@ -5354,6 +5544,53 @@ VkResult ResourceTracker::on_vkImportSemaphoreFdKHR(
#endif
}

VkResult ResourceTracker::on_vkGetMemoryFdKHR(void* context, VkResult, VkDevice device,
const VkMemoryGetFdInfoKHR* pGetFdInfo, int* pFd) {
#if defined(__linux__) && !defined(VK_USE_PLATFORM_ANDROID_KHR)
if (!pGetFdInfo) return VK_ERROR_OUT_OF_HOST_MEMORY;
if (!pGetFdInfo->memory) return VK_ERROR_OUT_OF_HOST_MEMORY;

if (!(pGetFdInfo->handleType & (VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT |
VK_EXTERNAL_MEMORY_HANDLE_TYPE_DMA_BUF_BIT_EXT))) {
ALOGE("%s: Export operation not defined for handleType: 0x%x\n", __func__,
pGetFdInfo->handleType);
return VK_ERROR_OUT_OF_HOST_MEMORY;
}
// Sanity-check device
AutoLock<RecursiveLock> lock(mLock);
auto deviceIt = info_VkDevice.find(device);
if (deviceIt == info_VkDevice.end()) {
return VK_ERROR_OUT_OF_HOST_MEMORY;
}

auto deviceMemIt = info_VkDeviceMemory.find(pGetFdInfo->memory);
if (deviceMemIt == info_VkDeviceMemory.end()) {
return VK_ERROR_OUT_OF_HOST_MEMORY;
}
auto& info = deviceMemIt->second;

if (!info.blobPtr) {
ALOGE("%s: VkDeviceMemory does not have a resource available for export.\n", __func__);
return VK_ERROR_OUT_OF_HOST_MEMORY;
}

VirtGpuExternalHandle handle{};
int ret = info.blobPtr->exportBlob(handle);
if (ret != 0 || handle.osHandle < 0) {
ALOGE("%s: Failed to export host resource to FD.\n", __func__);
return VK_ERROR_OUT_OF_HOST_MEMORY;
}
*pFd = handle.osHandle;
return VK_SUCCESS;
#else
(void)context;
(void)device;
(void)pGetFdInfo;
(void)pFd;
return VK_ERROR_INCOMPATIBLE_DRIVER;
#endif
}

void ResourceTracker::flushCommandBufferPendingCommandsBottomUp(
void* context, VkQueue queue, const std::vector<VkCommandBuffer>& workingSet) {
if (workingSet.empty()) return;
@@ -5704,9 +5941,14 @@ void ResourceTracker::unwrap_VkNativeBufferANDROID(const VkNativeBufferANDROID*
}

auto* gralloc = ResourceTracker::threadingCallbacks.hostConnectionGetFunc()->grallocHelper();
const native_handle_t* nativeHandle = (const native_handle_t*)inputNativeInfo->handle;

*(uint32_t*)(outputNativeInfo->handle) =
gralloc->getHostHandle((const native_handle_t*)inputNativeInfo->handle);
#if defined(END2END_TESTS)
// This is valid since the testing backend creates the handle and we know the layout.
*(uint32_t*)(outputNativeInfo->handle) = (uint32_t)nativeHandle->data[0];
#else
*(uint32_t*)(outputNativeInfo->handle) = gralloc->getHostHandle(nativeHandle);
#endif
}

void ResourceTracker::unwrap_VkBindImageMemorySwapchainInfoKHR(
@@ -6040,6 +6282,14 @@ void ResourceTracker::on_vkUpdateDescriptorSetWithTemplate(

memcpy(((uint8_t*)imageInfos) + currImageInfoOffset, user,
sizeof(VkDescriptorImageInfo));
#if defined(__linux__) && !defined(VK_USE_PLATFORM_ANDROID_KHR)
// Convert mesa to internal for objects in the user buffer
VkDescriptorImageInfo* internalImageInfo =
(VkDescriptorImageInfo*)(((uint8_t*)imageInfos) + currImageInfoOffset);
VK_FROM_HANDLE(gfxstream_vk_image_view, gfxstream_image_view,
internalImageInfo->imageView);
internalImageInfo->imageView = gfxstream_image_view->internal_object;
#endif
currImageInfoOffset += sizeof(VkDescriptorImageInfo);
}
@@ -6059,6 +6309,13 @@ void ResourceTracker::on_vkUpdateDescriptorSetWithTemplate(

memcpy(((uint8_t*)bufferInfos) + currBufferInfoOffset, user,
sizeof(VkDescriptorBufferInfo));
#if defined(__linux__) && !defined(VK_USE_PLATFORM_ANDROID_KHR)
// Convert mesa to internal for objects in the user buffer
VkDescriptorBufferInfo* internalBufferInfo =
(VkDescriptorBufferInfo*)(((uint8_t*)bufferInfos) + currBufferInfoOffset);
VK_FROM_HANDLE(gfxstream_vk_buffer, gfxstream_buffer, internalBufferInfo->buffer);
internalBufferInfo->buffer = gfxstream_buffer->internal_object;
#endif
currBufferInfoOffset += sizeof(VkDescriptorBufferInfo);
}
@@ -6077,6 +6334,15 @@ void ResourceTracker::on_vkUpdateDescriptorSetWithTemplate(
const VkBufferView* user = (const VkBufferView*)(userBuffer + offset + j * stride);

memcpy(((uint8_t*)bufferViews) + currBufferViewOffset, user, sizeof(VkBufferView));
#if defined(__linux__) && !defined(VK_USE_PLATFORM_ANDROID_KHR)
// Convert mesa to internal for objects in the user buffer
VkBufferView* internalBufferView =
(VkBufferView*)(((uint8_t*)bufferViews) + currBufferViewOffset);
VK_FROM_HANDLE(gfxstream_vk_buffer_view, gfxstream_buffer_view,
*internalBufferView);
*internalBufferView = gfxstream_buffer_view->internal_object;
#endif

currBufferViewOffset += sizeof(VkBufferView);
}
@@ -6139,7 +6405,6 @@ VkResult ResourceTracker::on_vkGetPhysicalDeviceImageFormatProperties2_common(
VK_FORMAT_R8G8_SSCALED, VK_FORMAT_R8G8_SRGB,
};


if (ext_img_properties) {
if (std::find(std::begin(kExternalImageSupportedFormats),
std::end(kExternalImageSupportedFormats),
@@ -6154,7 +6419,7 @@ VkResult ResourceTracker::on_vkGetPhysicalDeviceImageFormatProperties2_common(
VkAndroidHardwareBufferUsageANDROID* output_ahw_usage =
vk_find_struct<VkAndroidHardwareBufferUsageANDROID>(pImageFormatProperties);
supportedHandleType |= VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT |
VK_EXTERNAL_MEMORY_HANDLE_TYPE_ANDROID_HARDWARE_BUFFER_BIT_ANDROID;
VK_EXTERNAL_MEMORY_HANDLE_TYPE_ANDROID_HARDWARE_BUFFER_BIT_ANDROID;
#endif
const VkPhysicalDeviceExternalImageFormatInfo* ext_img_info =
vk_find_struct<VkPhysicalDeviceExternalImageFormatInfo>(pImageFormatInfo);
@@ -6201,7 +6466,8 @@ VkResult ResourceTracker::on_vkGetPhysicalDeviceImageFormatProperties2_common(
}
#endif
if (ext_img_properties) {
transformImpl_VkExternalMemoryProperties_fromhost(&ext_img_properties->externalMemoryProperties, 0);
transformImpl_VkExternalMemoryProperties_fromhost(
&ext_img_properties->externalMemoryProperties, 0);
}
return hostRes;
}
@@ -6236,11 +6502,12 @@ void ResourceTracker::on_vkGetPhysicalDeviceExternalBufferProperties_common(
#endif
#ifdef VK_USE_PLATFORM_ANDROID_KHR
supportedHandleType |= VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT |
VK_EXTERNAL_MEMORY_HANDLE_TYPE_ANDROID_HARDWARE_BUFFER_BIT_ANDROID;
VK_EXTERNAL_MEMORY_HANDLE_TYPE_ANDROID_HARDWARE_BUFFER_BIT_ANDROID;
#endif
if (supportedHandleType) {
// 0 is a valid handleType so we can't check against 0
if (pExternalBufferInfo->handleType != (pExternalBufferInfo->handleType & supportedHandleType)) {
if (pExternalBufferInfo->handleType !=
(pExternalBufferInfo->handleType & supportedHandleType)) {
return;
}
}
@@ -6252,7 +6519,8 @@ void ResourceTracker::on_vkGetPhysicalDeviceExternalBufferProperties_common(
enc->vkGetPhysicalDeviceExternalBufferProperties(
physicalDevice, pExternalBufferInfo, pExternalBufferProperties, true /* do lock */);
}
transformImpl_VkExternalMemoryProperties_fromhost(&pExternalBufferProperties->externalMemoryProperties, 0);
transformImpl_VkExternalMemoryProperties_fromhost(
&pExternalBufferProperties->externalMemoryProperties, 0);
}

void ResourceTracker::on_vkGetPhysicalDeviceExternalBufferProperties(
@@ -6261,8 +6529,7 @@ void ResourceTracker::on_vkGetPhysicalDeviceExternalBufferProperties(
VkExternalBufferProperties* pExternalBufferProperties) {
return on_vkGetPhysicalDeviceExternalBufferProperties_common(
false /* not KHR */, context, physicalDevice, pExternalBufferInfo,
pExternalBufferProperties
);
pExternalBufferProperties);
}

void ResourceTracker::on_vkGetPhysicalDeviceExternalBufferPropertiesKHR(
@@ -6270,9 +6537,7 @@ void ResourceTracker::on_vkGetPhysicalDeviceExternalBufferPropertiesKHR(
const VkPhysicalDeviceExternalBufferInfoKHR* pExternalBufferInfo,
VkExternalBufferPropertiesKHR* pExternalBufferProperties) {
return on_vkGetPhysicalDeviceExternalBufferProperties_common(
true /* is KHR */, context, physicalDevice, pExternalBufferInfo,
pExternalBufferProperties
);
true /* is KHR */, context, physicalDevice, pExternalBufferInfo, pExternalBufferProperties);
}

void ResourceTracker::on_vkGetPhysicalDeviceExternalSemaphoreProperties(
@@ -6776,8 +7041,10 @@ VkResult ResourceTracker::on_vkCreateGraphicsPipelines(
vk_find_struct<VkPipelineRenderingCreateInfo>(&graphicsPipelineCreateInfo);

if (pipelineRenderingInfo) {
forceDepthStencilState |= pipelineRenderingInfo->depthAttachmentFormat != VK_FORMAT_UNDEFINED;
forceDepthStencilState |= pipelineRenderingInfo->stencilAttachmentFormat != VK_FORMAT_UNDEFINED;
forceDepthStencilState |=
pipelineRenderingInfo->depthAttachmentFormat != VK_FORMAT_UNDEFINED;
forceDepthStencilState |=
pipelineRenderingInfo->stencilAttachmentFormat != VK_FORMAT_UNDEFINED;
forceColorBlendState |= pipelineRenderingInfo->colorAttachmentCount != 0;
}
@@ -6998,7 +7265,8 @@ ResourceTracker* ResourceTracker::get() {
}

// static
ALWAYS_INLINE VkEncoder* ResourceTracker::getCommandBufferEncoder(VkCommandBuffer commandBuffer) {
ALWAYS_INLINE_GFXSTREAM VkEncoder* ResourceTracker::getCommandBufferEncoder(
VkCommandBuffer commandBuffer) {
if (!(ResourceTracker::streamFeatureBits &
VULKAN_STREAM_FEATURE_QUEUE_SUBMIT_WITH_COMMANDS_BIT)) {
auto enc = ResourceTracker::getThreadLocalEncoder();
@@ -7019,7 +7287,7 @@ ALWAYS_INLINE VkEncoder* ResourceTracker::getCommandBufferEncoder(VkCommandBuffe
}

// static
ALWAYS_INLINE VkEncoder* ResourceTracker::getQueueEncoder(VkQueue queue) {
ALWAYS_INLINE_GFXSTREAM VkEncoder* ResourceTracker::getQueueEncoder(VkQueue queue) {
auto enc = ResourceTracker::getThreadLocalEncoder();
if (!(ResourceTracker::streamFeatureBits &
VULKAN_STREAM_FEATURE_QUEUE_SUBMIT_WITH_COMMANDS_BIT)) {
@@ -7029,7 +7297,7 @@ ALWAYS_INLINE VkEncoder* ResourceTracker::getQueueEncoder(VkQueue queue) {
}

// static
ALWAYS_INLINE VkEncoder* ResourceTracker::getThreadLocalEncoder() {
ALWAYS_INLINE_GFXSTREAM VkEncoder* ResourceTracker::getThreadLocalEncoder() {
auto hostConn = ResourceTracker::threadingCallbacks.hostConnectionGetFunc();
auto vkEncoder = ResourceTracker::threadingCallbacks.vkEncoderGetFunc(hostConn);
return vkEncoder;
@@ -7039,13 +7307,13 @@ ALWAYS_INLINE VkEncoder* ResourceTracker::getThreadLocalEncoder() {
void ResourceTracker::setSeqnoPtr(uint32_t* seqnoptr) { sSeqnoPtr = seqnoptr; }

// static
ALWAYS_INLINE uint32_t ResourceTracker::nextSeqno() {
ALWAYS_INLINE_GFXSTREAM uint32_t ResourceTracker::nextSeqno() {
uint32_t res = __atomic_add_fetch(sSeqnoPtr, 1, __ATOMIC_SEQ_CST);
return res;
}

// static
ALWAYS_INLINE uint32_t ResourceTracker::getSeqno() {
ALWAYS_INLINE_GFXSTREAM uint32_t ResourceTracker::getSeqno() {
uint32_t res = __atomic_load_n(sSeqnoPtr, __ATOMIC_SEQ_CST);
return res;
}
@@ -273,6 +273,9 @@ class ResourceTracker {
                               const VkBindImageMemoryInfo* inputBindInfos,
                               VkBindImageMemoryInfo* outputBindInfos);

+    VkResult on_vkGetMemoryFdKHR(void* context, VkResult input_result, VkDevice device,
+                                 const VkMemoryGetFdInfoKHR* pGetFdInfo, int* pFd);
+
 #ifdef VK_USE_PLATFORM_FUCHSIA
     VkResult on_vkGetMemoryZirconHandleFUCHSIA(void* context, VkResult input_result,
                                                VkDevice device,

@@ -541,9 +544,9 @@ class ResourceTracker {
     void resetCommandPoolStagingInfo(VkCommandPool commandPool);

 #ifdef __GNUC__
-#define ALWAYS_INLINE
+#define ALWAYS_INLINE_GFXSTREAM
 #elif
-#define ALWAYS_INLINE __attribute__((always_inline))
+#define ALWAYS_INLINE_GFXSTREAM __attribute__((always_inline))
 #endif

     static VkEncoder* getCommandBufferEncoder(VkCommandBuffer commandBuffer);

@@ -551,8 +554,8 @@ class ResourceTracker {
     static VkEncoder* getThreadLocalEncoder();

     static void setSeqnoPtr(uint32_t* seqnoptr);
-    static ALWAYS_INLINE uint32_t nextSeqno();
-    static ALWAYS_INLINE uint32_t getSeqno();
+    static ALWAYS_INLINE_GFXSTREAM uint32_t nextSeqno();
+    static ALWAYS_INLINE_GFXSTREAM uint32_t getSeqno();

     // Transforms
     void deviceMemoryTransform_tohost(VkDeviceMemory* memory, uint32_t memoryCount,

|
@ -614,7 +617,7 @@ class ResourceTracker {
|
|||
|
||||
void setDeviceMemoryInfo(VkDevice device, VkDeviceMemory memory, VkDeviceSize allocationSize,
|
||||
uint8_t* ptr, uint32_t memoryTypeIndex, AHardwareBuffer* ahw,
|
||||
bool imported, zx_handle_t vmoHandle);
|
||||
bool imported, zx_handle_t vmoHandle, VirtGpuBlobPtr blobPtr);
|
||||
|
||||
void setImageInfo(VkImage image, VkDevice device, const VkImageCreateInfo* pCreateInfo);
|
||||
|
||||
|
|
@ -680,11 +683,6 @@ class ResourceTracker {
|
|||
VkBindImageMemorySwapchainInfoKHR* outputBimsi);
|
||||
#endif
|
||||
|
||||
void setMemoryRequirementsForSysmemBackedImage(VkImage image,
|
||||
VkMemoryRequirements* pMemoryRequirements);
|
||||
|
||||
void transformImageMemoryRequirementsForGuestLocked(VkImage image, VkMemoryRequirements* reqs);
|
||||
|
||||
#if defined(VK_USE_PLATFORM_FUCHSIA)
|
||||
VkResult getBufferCollectionImageCreateInfoIndexLocked(
|
||||
VkBufferCollectionFUCHSIA collection, fuchsia_sysmem::wire::BufferCollectionInfo2& info,
|
||||
|
|
@@ -722,16 +720,17 @@ class ResourceTracker {
         VkPhysicalDeviceMemoryProperties memProps;
         uint32_t apiVersion;
         std::set<std::string> enabledExtensions;
-        std::vector<std::pair<PFN_vkDeviceMemoryReportCallbackEXT, void*>> deviceMemoryReportCallbacks;
+        std::vector<std::pair<PFN_vkDeviceMemoryReportCallbackEXT, void*>>
+            deviceMemoryReportCallbacks;
     };

     struct VkDeviceMemory_Info {
         bool dedicated = false;
         bool imported = false;

-#ifdef VK_USE_PLATFORM_ANDROID_KHR
+#ifdef VK_USE_PLATFORM_ANDROID_KHR
         AHardwareBuffer* ahw = nullptr;
-#endif
+#endif
         zx_handle_t vmoHandle = ZX_HANDLE_INVALID;
         VkDevice device;

@@ -743,10 +742,11 @@ class ResourceTracker {
         uint64_t coherentMemorySize = 0;
         uint64_t coherentMemoryOffset = 0;

-#if defined(__ANDROID__)
+#if defined(__ANDROID__)
         GoldfishAddressSpaceBlockPtr goldfishBlock = nullptr;
-#endif // defined(__ANDROID__)
+#endif  // defined(__ANDROID__)
         CoherentMemoryPtr coherentMemory = nullptr;
+        VirtGpuBlobPtr blobPtr = nullptr;
     };

     struct VkCommandBuffer_Info {

@@ -768,14 +768,15 @@ class ResourceTracker {
         VkDeviceSize currentBackingSize = 0;
         bool baseRequirementsKnown = false;
         VkMemoryRequirements baseRequirements;
-#ifdef VK_USE_PLATFORM_ANDROID_KHR
+#ifdef VK_USE_PLATFORM_ANDROID_KHR
         bool hasExternalFormat = false;
         unsigned androidFormat = 0;
         std::vector<int> pendingQsriSyncFds;
-#endif
-#ifdef VK_USE_PLATFORM_FUCHSIA
+#endif
+#ifdef VK_USE_PLATFORM_FUCHSIA
         bool isSysmemBackedMemory = false;
-#endif
+#endif
+        bool isWsiImage = false;
     };

     struct VkBuffer_Info {

@@ -788,9 +789,9 @@ class ResourceTracker {
         VkDeviceSize currentBackingSize = 0;
         bool baseRequirementsKnown = false;
         VkMemoryRequirements baseRequirements;
-#ifdef VK_USE_PLATFORM_FUCHSIA
+#ifdef VK_USE_PLATFORM_FUCHSIA
         bool isSysmemBackedMemory = false;
-#endif
+#endif
     };

     struct VkSemaphore_Info {

@@ -822,9 +823,9 @@ class ResourceTracker {
         VkDevice device;
         bool external = false;
         VkExportFenceCreateInfo exportFenceCreateInfo;
-#if defined(VK_USE_PLATFORM_ANDROID_KHR) || defined(__linux__)
+#if defined(VK_USE_PLATFORM_ANDROID_KHR) || defined(__linux__)
         int syncFd = -1;
-#endif
+#endif
     };

     struct VkDescriptorPool_Info {

@@ -848,26 +849,23 @@ class ResourceTracker {
     };

     struct VkBufferCollectionFUCHSIA_Info {
-#ifdef VK_USE_PLATFORM_FUCHSIA
+#ifdef VK_USE_PLATFORM_FUCHSIA
         gfxstream::guest::Optional<fuchsia_sysmem::wire::BufferCollectionConstraints> constraints;
         gfxstream::guest::Optional<VkBufferCollectionPropertiesFUCHSIA> properties;

         // the index of corresponding createInfo for each image format
         // constraints in |constraints|.
         std::vector<uint32_t> createInfoIndex;
-#endif // VK_USE_PLATFORM_FUCHSIA
+#endif  // VK_USE_PLATFORM_FUCHSIA
     };

     VkDescriptorImageInfo filterNonexistentSampler(const VkDescriptorImageInfo& inputInfo);

-    void emitDeviceMemoryReport(VkDevice_Info info,
-                                VkDeviceMemoryReportEventTypeEXT type,
-                                uint64_t memoryObjectId,
-                                VkDeviceSize size,
-                                VkObjectType objectType,
-                                uint64_t objectHandle,
-                                uint32_t heapIndex = 0);
+    void emitDeviceMemoryReport(VkDevice_Info info, VkDeviceMemoryReportEventTypeEXT type,
+                                uint64_t memoryObjectId, VkDeviceSize size, VkObjectType objectType,
+                                uint64_t objectHandle, uint32_t heapIndex = 0);

+    void transformImageMemoryRequirementsForGuestLocked(VkImage image, VkMemoryRequirements* reqs);
     CoherentMemoryPtr freeCoherentMemoryLocked(VkDeviceMemory memory, VkDeviceMemory_Info& info);

     mutable RecursiveLock mLock;

@@ -893,10 +891,9 @@ class ResourceTracker {
     fidl::WireSyncClient<fuchsia_sysmem::Allocator> mSysmemAllocator;
 #endif

-#define HANDLE_REGISTER_DECLARATION(type) \
-    std::unordered_map<type, type##_Info> info_##type;
+#define HANDLE_REGISTER_DECLARATION(type) std::unordered_map<type, type##_Info> info_##type;

-    GOLDFISH_VK_LIST_HANDLE_TYPES(HANDLE_REGISTER_DECLARATION)
+    GOLDFISH_VK_LIST_HANDLE_TYPES(HANDLE_REGISTER_DECLARATION)

     WorkPool mWorkPool{4};
     std::unordered_map<VkQueue, std::vector<WorkPool::WaitGroupHandle>>

@@ -19,10 +19,10 @@
 #define GOLDFISH_VK_OBJECT_DEBUG 0

 #if GOLDFISH_VK_OBJECT_DEBUG
-#define D(fmt,...) ALOGD("%s: " fmt, __func__, ##__VA_ARGS__);
+#define D(fmt, ...) ALOGD("%s: " fmt, __func__, ##__VA_ARGS__);
 #else
 #ifndef D
-#define D(fmt,...)
+#define D(fmt, ...)
 #endif
 #endif

@@ -36,105 +36,103 @@ extern "C" {
 #define SET_HWVULKAN_DISPATCH_MAGIC
 #endif

-#define GOLDFISH_VK_NEW_DISPATCHABLE_FROM_HOST_IMPL(type) \
-    type new_from_host_##type(type underlying) { \
-        struct goldfish_##type* res = \
+#define GOLDFISH_VK_NEW_DISPATCHABLE_FROM_HOST_IMPL(type) \
+    type new_from_host_##type(type underlying) { \
+        struct goldfish_##type* res = \
             static_cast<goldfish_##type*>(malloc(sizeof(goldfish_##type))); \
-        if (!res) { \
-            ALOGE("FATAL: Failed to alloc " #type " handle"); \
-            abort(); \
-        } \
-        SET_HWVULKAN_DISPATCH_MAGIC \
-        res->underlying = (uint64_t)underlying; \
-        res->lastUsedEncoder = nullptr; \
-        res->sequenceNumber = 0; \
-        res->privateEncoder = 0; \
-        res->privateStream = 0; \
-        res->flags = 0; \
-        res->poolObjects = 0; \
-        res->subObjects = 0; \
-        res->superObjects = 0; \
-        res->userPtr = 0; \
-        return reinterpret_cast<type>(res); \
-    } \
+        if (!res) { \
+            ALOGE("FATAL: Failed to alloc " #type " handle"); \
+            abort(); \
+        } \
+        SET_HWVULKAN_DISPATCH_MAGIC \
+        res->underlying = (uint64_t)underlying; \
+        res->lastUsedEncoder = nullptr; \
+        res->sequenceNumber = 0; \
+        res->privateEncoder = 0; \
+        res->privateStream = 0; \
+        res->flags = 0; \
+        res->poolObjects = 0; \
+        res->subObjects = 0; \
+        res->superObjects = 0; \
+        res->userPtr = 0; \
+        return reinterpret_cast<type>(res); \
+    }

-#define GOLDFISH_VK_NEW_TRIVIAL_NON_DISPATCHABLE_FROM_HOST_IMPL(type) \
-    type new_from_host_##type(type underlying) { \
-        struct goldfish_##type* res = \
+#define GOLDFISH_VK_NEW_TRIVIAL_NON_DISPATCHABLE_FROM_HOST_IMPL(type) \
+    type new_from_host_##type(type underlying) { \
+        struct goldfish_##type* res = \
             static_cast<goldfish_##type*>(malloc(sizeof(goldfish_##type))); \
-        res->underlying = (uint64_t)underlying; \
-        res->poolObjects = 0; \
-        res->subObjects = 0; \
-        res->superObjects = 0; \
-        res->userPtr = 0; \
-        return reinterpret_cast<type>(res); \
-    } \
+        res->underlying = (uint64_t)underlying; \
+        res->poolObjects = 0; \
+        res->subObjects = 0; \
+        res->superObjects = 0; \
+        res->userPtr = 0; \
+        return reinterpret_cast<type>(res); \
+    }

-#define GOLDFISH_VK_AS_GOLDFISH_IMPL(type) \
+#define GOLDFISH_VK_AS_GOLDFISH_IMPL(type) \
     struct goldfish_##type* as_goldfish_##type(type toCast) { \
-        return reinterpret_cast<goldfish_##type*>(toCast); \
-    } \
+        return reinterpret_cast<goldfish_##type*>(toCast); \
+    }

-#define GOLDFISH_VK_GET_HOST_IMPL(type) \
-    type get_host_##type(type toUnwrap) { \
-        if (!toUnwrap) return VK_NULL_HANDLE; \
+#define GOLDFISH_VK_GET_HOST_IMPL(type) \
+    type get_host_##type(type toUnwrap) { \
+        if (!toUnwrap) return VK_NULL_HANDLE; \
         auto as_goldfish = as_goldfish_##type(toUnwrap); \
-        return (type)(as_goldfish->underlying); \
-    } \
+        return (type)(as_goldfish->underlying); \
+    }

-#define GOLDFISH_VK_DELETE_GOLDFISH_IMPL(type) \
+#define GOLDFISH_VK_DELETE_GOLDFISH_IMPL(type) \
     void delete_goldfish_##type(type toDelete) { \
-        D("guest %p", toDelete); \
-        free(as_goldfish_##type(toDelete)); \
-    } \
+        D("guest %p", toDelete); \
+        free(as_goldfish_##type(toDelete)); \
+    }

 #define GOLDFISH_VK_IDENTITY_IMPL(type) \
-    type vk_handle_identity_##type(type handle) { \
-        return handle; \
-    } \
+    type vk_handle_identity_##type(type handle) { return handle; }

-#define GOLDFISH_VK_NEW_DISPATCHABLE_FROM_HOST_U64_IMPL(type) \
-    type new_from_host_u64_##type(uint64_t underlying) { \
-        struct goldfish_##type* res = \
+#define GOLDFISH_VK_NEW_DISPATCHABLE_FROM_HOST_U64_IMPL(type) \
+    type new_from_host_u64_##type(uint64_t underlying) { \
+        struct goldfish_##type* res = \
             static_cast<goldfish_##type*>(malloc(sizeof(goldfish_##type))); \
-        if (!res) { \
-            ALOGE("FATAL: Failed to alloc " #type " handle"); \
-            abort(); \
-        } \
-        SET_HWVULKAN_DISPATCH_MAGIC \
-        res->underlying = underlying; \
-        res->lastUsedEncoder = nullptr; \
-        res->sequenceNumber = 0; \
-        res->privateEncoder = 0; \
-        res->privateStream = 0; \
-        res->flags = 0; \
-        res->poolObjects = 0; \
-        res->subObjects = 0; \
-        res->superObjects = 0; \
-        res->userPtr = 0; \
-        return reinterpret_cast<type>(res); \
-    } \
+        if (!res) { \
+            ALOGE("FATAL: Failed to alloc " #type " handle"); \
+            abort(); \
+        } \
+        SET_HWVULKAN_DISPATCH_MAGIC \
+        res->underlying = underlying; \
+        res->lastUsedEncoder = nullptr; \
+        res->sequenceNumber = 0; \
+        res->privateEncoder = 0; \
+        res->privateStream = 0; \
+        res->flags = 0; \
+        res->poolObjects = 0; \
+        res->subObjects = 0; \
+        res->superObjects = 0; \
+        res->userPtr = 0; \
+        return reinterpret_cast<type>(res); \
+    }

-#define GOLDFISH_VK_NEW_TRIVIAL_NON_DISPATCHABLE_FROM_HOST_U64_IMPL(type) \
-    type new_from_host_u64_##type(uint64_t underlying) { \
-        struct goldfish_##type* res = \
-            static_cast<goldfish_##type*>(malloc(sizeof(goldfish_##type))); \
-        res->underlying = underlying; \
+#define GOLDFISH_VK_NEW_TRIVIAL_NON_DISPATCHABLE_FROM_HOST_U64_IMPL(type) \
+    type new_from_host_u64_##type(uint64_t underlying) { \
+        struct goldfish_##type* res = \
+            static_cast<goldfish_##type*>(malloc(sizeof(goldfish_##type))); \
+        res->underlying = underlying; \
         D("guest %p: host u64: 0x%llx", res, (unsigned long long)res->underlying); \
-        res->poolObjects = 0; \
-        res->subObjects = 0; \
-        res->superObjects = 0; \
-        res->userPtr = 0; \
-        return reinterpret_cast<type>(res); \
-    } \
+        res->poolObjects = 0; \
+        res->subObjects = 0; \
+        res->superObjects = 0; \
+        res->userPtr = 0; \
+        return reinterpret_cast<type>(res); \
+    }

-#define GOLDFISH_VK_GET_HOST_U64_IMPL(type) \
-    uint64_t get_host_u64_##type(type toUnwrap) { \
-        if (!toUnwrap) return 0; \
-        auto as_goldfish = as_goldfish_##type(toUnwrap); \
+#define GOLDFISH_VK_GET_HOST_U64_IMPL(type) \
+    uint64_t get_host_u64_##type(type toUnwrap) { \
+        if (!toUnwrap) return 0; \
+        auto as_goldfish = as_goldfish_##type(toUnwrap); \
         D("guest %p: host u64: 0x%llx", toUnwrap, (unsigned long long)as_goldfish->underlying); \
-        return as_goldfish->underlying; \
-    } \
+        return as_goldfish->underlying; \
+    }

 GOLDFISH_VK_LIST_DISPATCHABLE_HANDLE_TYPES(GOLDFISH_VK_NEW_DISPATCHABLE_FROM_HOST_IMPL)
 GOLDFISH_VK_LIST_DISPATCHABLE_HANDLE_TYPES(GOLDFISH_VK_AS_GOLDFISH_IMPL)

@@ -148,8 +146,10 @@ GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES(GOLDFISH_VK_AS_GOLDFISH_IMPL)
 GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES(GOLDFISH_VK_GET_HOST_IMPL)
 GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES(GOLDFISH_VK_IDENTITY_IMPL)
 GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES(GOLDFISH_VK_GET_HOST_U64_IMPL)
-GOLDFISH_VK_LIST_AUTODEFINED_STRUCT_NON_DISPATCHABLE_HANDLE_TYPES(GOLDFISH_VK_NEW_TRIVIAL_NON_DISPATCHABLE_FROM_HOST_IMPL)
-GOLDFISH_VK_LIST_AUTODEFINED_STRUCT_NON_DISPATCHABLE_HANDLE_TYPES(GOLDFISH_VK_NEW_TRIVIAL_NON_DISPATCHABLE_FROM_HOST_U64_IMPL)
+GOLDFISH_VK_LIST_AUTODEFINED_STRUCT_NON_DISPATCHABLE_HANDLE_TYPES(
+    GOLDFISH_VK_NEW_TRIVIAL_NON_DISPATCHABLE_FROM_HOST_IMPL)
+GOLDFISH_VK_LIST_AUTODEFINED_STRUCT_NON_DISPATCHABLE_HANDLE_TYPES(
+    GOLDFISH_VK_NEW_TRIVIAL_NON_DISPATCHABLE_FROM_HOST_U64_IMPL)
 GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES(GOLDFISH_VK_DELETE_GOLDFISH_IMPL)

 VkDescriptorPool new_from_host_VkDescriptorPool(VkDescriptorPool underlying) {

@@ -177,8 +177,8 @@ VkDescriptorSet new_from_host_u64_VkDescriptorSet(uint64_t underlying) {
 }

 VkDescriptorSetLayout new_from_host_VkDescriptorSetLayout(VkDescriptorSetLayout underlying) {
-    struct goldfish_VkDescriptorSetLayout* res =
-        static_cast<goldfish_VkDescriptorSetLayout*>(malloc(sizeof(goldfish_VkDescriptorSetLayout)));
+    struct goldfish_VkDescriptorSetLayout* res = static_cast<goldfish_VkDescriptorSetLayout*>(
+        malloc(sizeof(goldfish_VkDescriptorSetLayout)));
     res->underlying = (uint64_t)underlying;
     res->layoutInfo = nullptr;
     return reinterpret_cast<VkDescriptorSetLayout>(res);

@@ -188,7 +188,7 @@ VkDescriptorSetLayout new_from_host_u64_VkDescriptorSetLayout(uint64_t underlyin
     return new_from_host_VkDescriptorSetLayout((VkDescriptorSetLayout)underlying);
 }

-} // extern "C"
+}  // extern "C"

 namespace gfxstream {
 namespace vk {

@@ -199,7 +199,11 @@ void appendObject(struct goldfish_vk_object_list** begin, void* val) {
     o->next = nullptr;
     o->obj = val;
     D("new ptr: %p", o);
-    if (!*begin) { D("first"); *begin = o; return; }
+    if (!*begin) {
+        D("first");
+        *begin = o;
+        return;
+    }

     struct goldfish_vk_object_list* q = *begin;
     struct goldfish_vk_object_list* p = q;

@@ -214,7 +218,7 @@ void appendObject(struct goldfish_vk_object_list** begin, void* val) {
 }

 void eraseObject(struct goldfish_vk_object_list** begin, void* val) {
-    D("for val %p", val);
+    D("for val %p", val);
     if (!*begin) {
         D("val %p notfound", val);
         return;

@@ -241,7 +245,7 @@ void eraseObject(struct goldfish_vk_object_list** begin, void* val) {
         q = n;
     }

-    D("val %p notfound after looping", val);
+    D("val %p notfound after looping", val);
 }

 void eraseObjects(struct goldfish_vk_object_list** begin) {

@@ -18,14 +18,13 @@
 #elif defined(__linux__)
 #include <vulkan/vk_icd.h>
 #endif
+#include <inttypes.h>
 #include <vulkan/vulkan.h>

-#include "VulkanHandles.h"
-
-#include <inttypes.h>
-
 #include <functional>

+#include "VulkanHandles.h"
+
 namespace gfxstream {
 namespace guest {
 class IOStream;

@@ -41,7 +40,6 @@ struct DescriptorSetLayoutInfo;
 }  // namespace vk
 }  // namespace gfxstream

-
 extern "C" {

 struct goldfish_vk_object_list {

@@ -58,51 +56,45 @@ struct goldfish_vk_object_list {
 #endif

 #define GOLDFISH_VK_DEFINE_DISPATCHABLE_HANDLE_STRUCT(type) \
-    struct goldfish_##type { \
-        DECLARE_HWVULKAN_DISPATCH \
-        uint64_t underlying; \
-        gfxstream::vk::VkEncoder* lastUsedEncoder; \
-        uint32_t sequenceNumber; \
-        gfxstream::vk::VkEncoder* privateEncoder; \
-        gfxstream::guest::IOStream* privateStream; \
-        uint32_t flags; \
-        struct goldfish_vk_object_list* poolObjects; \
-        struct goldfish_vk_object_list* subObjects; \
-        struct goldfish_vk_object_list* superObjects; \
-        void* userPtr; \
-    }; \
+    struct goldfish_##type { \
+        DECLARE_HWVULKAN_DISPATCH \
+        uint64_t underlying; \
+        gfxstream::vk::VkEncoder* lastUsedEncoder; \
+        uint32_t sequenceNumber; \
+        gfxstream::vk::VkEncoder* privateEncoder; \
+        gfxstream::guest::IOStream* privateStream; \
+        uint32_t flags; \
+        struct goldfish_vk_object_list* poolObjects; \
+        struct goldfish_vk_object_list* subObjects; \
+        struct goldfish_vk_object_list* superObjects; \
+        void* userPtr; \
+    };

 #define GOLDFISH_VK_DEFINE_TRIVIAL_NON_DISPATCHABLE_HANDLE_STRUCT(type) \
-    struct goldfish_##type { \
-        uint64_t underlying; \
-        struct goldfish_vk_object_list* poolObjects; \
-        struct goldfish_vk_object_list* subObjects; \
-        struct goldfish_vk_object_list* superObjects; \
-        void* userPtr; \
-    }; \
+    struct goldfish_##type { \
+        uint64_t underlying; \
+        struct goldfish_vk_object_list* poolObjects; \
+        struct goldfish_vk_object_list* subObjects; \
+        struct goldfish_vk_object_list* superObjects; \
+        void* userPtr; \
+    };

-#define GOLDFISH_VK_NEW_FROM_HOST_DECL(type) \
-    type new_from_host_##type(type);
+#define GOLDFISH_VK_NEW_FROM_HOST_DECL(type) type new_from_host_##type(type);

-#define GOLDFISH_VK_AS_GOLDFISH_DECL(type) \
-    struct goldfish_##type* as_goldfish_##type(type);
+#define GOLDFISH_VK_AS_GOLDFISH_DECL(type) struct goldfish_##type* as_goldfish_##type(type);

-#define GOLDFISH_VK_GET_HOST_DECL(type) \
-    type get_host_##type(type);
+#define GOLDFISH_VK_GET_HOST_DECL(type) type get_host_##type(type);

-#define GOLDFISH_VK_DELETE_GOLDFISH_DECL(type) \
-    void delete_goldfish_##type(type);
+#define GOLDFISH_VK_DELETE_GOLDFISH_DECL(type) void delete_goldfish_##type(type);

-#define GOLDFISH_VK_IDENTITY_DECL(type) \
-    type vk_handle_identity_##type(type);
+#define GOLDFISH_VK_IDENTITY_DECL(type) type vk_handle_identity_##type(type);

-#define GOLDFISH_VK_NEW_FROM_HOST_U64_DECL(type) \
-    type new_from_host_u64_##type(uint64_t);
+#define GOLDFISH_VK_NEW_FROM_HOST_U64_DECL(type) type new_from_host_u64_##type(uint64_t);

-#define GOLDFISH_VK_GET_HOST_U64_DECL(type) \
-    uint64_t get_host_u64_##type(type);
+#define GOLDFISH_VK_GET_HOST_U64_DECL(type) uint64_t get_host_u64_##type(type);

-GOLDFISH_VK_LIST_AUTODEFINED_STRUCT_DISPATCHABLE_HANDLE_TYPES(GOLDFISH_VK_DEFINE_DISPATCHABLE_HANDLE_STRUCT)
+GOLDFISH_VK_LIST_AUTODEFINED_STRUCT_DISPATCHABLE_HANDLE_TYPES(
+    GOLDFISH_VK_DEFINE_DISPATCHABLE_HANDLE_STRUCT)
 GOLDFISH_VK_LIST_DISPATCHABLE_HANDLE_TYPES(GOLDFISH_VK_NEW_FROM_HOST_DECL)
 GOLDFISH_VK_LIST_DISPATCHABLE_HANDLE_TYPES(GOLDFISH_VK_AS_GOLDFISH_DECL)
 GOLDFISH_VK_LIST_DISPATCHABLE_HANDLE_TYPES(GOLDFISH_VK_GET_HOST_DECL)

@@ -118,7 +110,8 @@ GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES(GOLDFISH_VK_DELETE_GOLDFISH_DECL)
 GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES(GOLDFISH_VK_IDENTITY_DECL)
 GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES(GOLDFISH_VK_NEW_FROM_HOST_U64_DECL)
 GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES(GOLDFISH_VK_GET_HOST_U64_DECL)
-GOLDFISH_VK_LIST_AUTODEFINED_STRUCT_NON_DISPATCHABLE_HANDLE_TYPES(GOLDFISH_VK_DEFINE_TRIVIAL_NON_DISPATCHABLE_HANDLE_STRUCT)
+GOLDFISH_VK_LIST_AUTODEFINED_STRUCT_NON_DISPATCHABLE_HANDLE_TYPES(
+    GOLDFISH_VK_DEFINE_TRIVIAL_NON_DISPATCHABLE_HANDLE_STRUCT)

 struct goldfish_VkDescriptorPool {
     uint64_t underlying;

@@ -151,7 +144,7 @@ struct goldfish_VkCommandBuffer {
     VkDevice device;
 };

-} // extern "C"
+}  // extern "C"

 namespace gfxstream {
 namespace vk {

|
|||
|
|
@ -13,19 +13,15 @@
|
|||
// limitations under the License.
|
||||
#include "Validation.h"
|
||||
|
||||
#include "Resources.h"
|
||||
#include "ResourceTracker.h"
|
||||
#include "Resources.h"
|
||||
|
||||
namespace gfxstream {
|
||||
namespace vk {
|
||||
|
||||
VkResult Validation::on_vkFlushMappedMemoryRanges(
|
||||
void*,
|
||||
VkResult,
|
||||
VkDevice,
|
||||
uint32_t memoryRangeCount,
|
||||
const VkMappedMemoryRange* pMemoryRanges) {
|
||||
|
||||
VkResult Validation::on_vkFlushMappedMemoryRanges(void*, VkResult, VkDevice,
|
||||
uint32_t memoryRangeCount,
|
||||
const VkMappedMemoryRange* pMemoryRanges) {
|
||||
auto resources = ResourceTracker::get();
|
||||
|
||||
for (uint32_t i = 0; i < memoryRangeCount; ++i) {
|
||||
|
|
@@ -37,13 +33,9 @@ VkResult Validation::on_vkFlushMappedMemoryRanges(
     return VK_SUCCESS;
 }

-VkResult Validation::on_vkInvalidateMappedMemoryRanges(
-    void*,
-    VkResult,
-    VkDevice,
-    uint32_t memoryRangeCount,
-    const VkMappedMemoryRange* pMemoryRanges) {
-
+VkResult Validation::on_vkInvalidateMappedMemoryRanges(void*, VkResult, VkDevice,
+                                                       uint32_t memoryRangeCount,
+                                                       const VkMappedMemoryRange* pMemoryRanges) {
     auto resources = ResourceTracker::get();

     for (uint32_t i = 0; i < memoryRangeCount; ++i) {

|||
|
|
@ -19,19 +19,13 @@ namespace gfxstream {
|
|||
namespace vk {
|
||||
|
||||
class Validation {
|
||||
public:
|
||||
VkResult on_vkFlushMappedMemoryRanges(
|
||||
void* context,
|
||||
VkResult input_result,
|
||||
VkDevice device,
|
||||
uint32_t memoryRangeCount,
|
||||
const VkMappedMemoryRange* pMemoryRanges);
|
||||
VkResult on_vkInvalidateMappedMemoryRanges(
|
||||
void* context,
|
||||
VkResult input_result,
|
||||
VkDevice device,
|
||||
uint32_t memoryRangeCount,
|
||||
const VkMappedMemoryRange* pMemoryRanges);
|
||||
public:
|
||||
VkResult on_vkFlushMappedMemoryRanges(void* context, VkResult input_result, VkDevice device,
|
||||
uint32_t memoryRangeCount,
|
||||
const VkMappedMemoryRange* pMemoryRanges);
|
||||
VkResult on_vkInvalidateMappedMemoryRanges(void* context, VkResult input_result,
|
||||
VkDevice device, uint32_t memoryRangeCount,
|
||||
const VkMappedMemoryRange* pMemoryRanges);
|
||||
};
|
||||
|
||||
} // namespace vk
|
||||
|
|
|
|||
|
|
@ -12,24 +12,29 @@
|
|||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
#include <vulkan/vulkan.h>
|
||||
|
||||
#include "VulkanHandleMapping.h"
|
||||
|
||||
#include <vulkan/vulkan.h>
|
||||
|
||||
namespace gfxstream {
|
||||
namespace vk {
|
||||
|
||||
#define DEFAULT_HANDLE_MAP_DEFINE(type) \
|
||||
void DefaultHandleMapping::mapHandles_##type(type*, size_t) { return; } \
|
||||
void DefaultHandleMapping::mapHandles_##type##_u64(const type* handles, uint64_t* handle_u64s, size_t count) { \
|
||||
for (size_t i = 0; i < count; ++i) { handle_u64s[i] = (uint64_t)(uintptr_t)handles[i]; } \
|
||||
} \
|
||||
void DefaultHandleMapping::mapHandles_u64_##type(const uint64_t* handle_u64s, type* handles, size_t count) { \
|
||||
for (size_t i = 0; i < count; ++i) { handles[i] = (type)(uintptr_t)handle_u64s[i]; } \
|
||||
} \
|
||||
#define DEFAULT_HANDLE_MAP_DEFINE(type) \
|
||||
void DefaultHandleMapping::mapHandles_##type(type*, size_t) { return; } \
|
||||
void DefaultHandleMapping::mapHandles_##type##_u64(const type* handles, uint64_t* handle_u64s, \
|
||||
size_t count) { \
|
||||
for (size_t i = 0; i < count; ++i) { \
|
||||
handle_u64s[i] = (uint64_t)(uintptr_t)handles[i]; \
|
||||
} \
|
||||
} \
|
||||
void DefaultHandleMapping::mapHandles_u64_##type(const uint64_t* handle_u64s, type* handles, \
|
||||
size_t count) { \
|
||||
for (size_t i = 0; i < count; ++i) { \
|
||||
handles[i] = (type)(uintptr_t)handle_u64s[i]; \
|
||||
} \
|
||||
}
|
||||
|
||||
GOLDFISH_VK_LIST_HANDLE_TYPES(DEFAULT_HANDLE_MAP_DEFINE)
|
||||
|
||||
} // namespace vk
|
||||
} // namespace gfxstream
|
||||
|
||||
|
|
|
|||
|
|
@ -22,26 +22,29 @@ namespace gfxstream {
|
|||
namespace vk {
|
||||
|
||||
class VulkanHandleMapping {
|
||||
public:
|
||||
public:
|
||||
VulkanHandleMapping() = default;
|
||||
virtual ~VulkanHandleMapping() { }
|
||||
virtual ~VulkanHandleMapping() {}
|
||||
|
||||
#define DECLARE_HANDLE_MAP_PURE_VIRTUAL_METHOD(type) \
|
||||
virtual void mapHandles_##type(type* handles, size_t count = 1) = 0; \
|
||||
virtual void mapHandles_##type##_u64(const type* handles, uint64_t* handle_u64s, size_t count = 1) = 0; \
|
||||
virtual void mapHandles_u64_##type(const uint64_t* handle_u64s, type* handles, size_t count = 1) = 0; \
|
||||
#define DECLARE_HANDLE_MAP_PURE_VIRTUAL_METHOD(type) \
|
||||
virtual void mapHandles_##type(type* handles, size_t count = 1) = 0; \
|
||||
virtual void mapHandles_##type##_u64(const type* handles, uint64_t* handle_u64s, \
|
||||
size_t count = 1) = 0; \
|
||||
virtual void mapHandles_u64_##type(const uint64_t* handle_u64s, type* handles, \
|
||||
size_t count = 1) = 0;
|
||||
|
||||
GOLDFISH_VK_LIST_HANDLE_TYPES(DECLARE_HANDLE_MAP_PURE_VIRTUAL_METHOD)
|
||||
};
|
||||
|
||||
class DefaultHandleMapping : public VulkanHandleMapping {
|
||||
public:
|
||||
virtual ~DefaultHandleMapping() { }
|
||||
public:
|
||||
virtual ~DefaultHandleMapping() {}
|
||||
|
||||
#define DECLARE_HANDLE_MAP_OVERRIDE(type) \
|
||||
void mapHandles_##type(type* handles, size_t count) override; \
|
||||
void mapHandles_##type##_u64(const type* handles, uint64_t* handle_u64s, size_t count) override; \
|
||||
void mapHandles_u64_##type(const uint64_t* handle_u64s, type* handles, size_t count) override; \
|
||||
#define DECLARE_HANDLE_MAP_OVERRIDE(type) \
|
||||
void mapHandles_##type(type* handles, size_t count) override; \
|
||||
void mapHandles_##type##_u64(const type* handles, uint64_t* handle_u64s, size_t count) \
|
||||
override; \
|
||||
void mapHandles_u64_##type(const uint64_t* handle_u64s, type* handles, size_t count) override;
|
||||
|
||||
GOLDFISH_VK_LIST_HANDLE_TYPES(DECLARE_HANDLE_MAP_OVERRIDE)
|
||||
};
|
||||
|
|
|
|||
|
|
@@ -19,77 +19,75 @@
 namespace gfxstream {
 namespace vk {

-#define GOLDFISH_VK_LIST_TRIVIAL_DISPATCHABLE_HANDLE_TYPES(f) \
-    f(VkPhysicalDevice) \
+#define GOLDFISH_VK_LIST_TRIVIAL_DISPATCHABLE_HANDLE_TYPES(f) f(VkPhysicalDevice)

 #define GOLDFISH_VK_LIST_DISPATCHABLE_HANDLE_TYPES(f) \
-    f(VkInstance) \
-    f(VkDevice) \
-    f(VkCommandBuffer) \
-    f(VkQueue) \
+    f(VkInstance) \
+    f(VkDevice) \
+    f(VkCommandBuffer) \
+    f(VkQueue) \
     GOLDFISH_VK_LIST_TRIVIAL_DISPATCHABLE_HANDLE_TYPES(f)

 #ifdef VK_NVX_binary_import

 #define __GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES_NVX_BINARY_IMPORT(f) \
-    f(VkCuModuleNVX) \
-    f(VkCuFunctionNVX) \
+    f(VkCuModuleNVX) \
+    f(VkCuFunctionNVX)

 #else

 #define __GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES_NVX_BINARY_IMPORT(f)

-#endif // VK_NVX_binary_import
+#endif  // VK_NVX_binary_import

 #ifdef VK_NVX_device_generated_commands

 #define __GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES_NVX_DEVICE_GENERATED_COMMANDS(f) \
-    f(VkObjectTableNVX) \
-    f(VkIndirectCommandsLayoutNVX) \
+    f(VkObjectTableNVX) \
+    f(VkIndirectCommandsLayoutNVX)

 #else

 #define __GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES_NVX_DEVICE_GENERATED_COMMANDS(f)

-#endif // VK_NVX_device_generated_commands
+#endif  // VK_NVX_device_generated_commands

 #ifdef VK_NV_device_generated_commands

 #define __GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES_NV_DEVICE_GENERATED_COMMANDS(f) \
-    f(VkIndirectCommandsLayoutNV) \
+    f(VkIndirectCommandsLayoutNV)

 #else

 #define __GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES_NV_DEVICE_GENERATED_COMMANDS(f)

-#endif // VK_NV_device_generated_commands
+#endif  // VK_NV_device_generated_commands

 #ifdef VK_NV_ray_tracing

 #define __GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES_NV_RAY_TRACING(f) \
-    f(VkAccelerationStructureNV) \
+    f(VkAccelerationStructureNV)

 #else

 #define __GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES_NV_RAY_TRACING(f)

-#endif // VK_NV_ray_tracing
+#endif  // VK_NV_ray_tracing

 #ifdef VK_KHR_acceleration_structure

 #define __GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES_KHR_ACCELERATION_STRUCTURE(f) \
-    f(VkAccelerationStructureKHR) \
+    f(VkAccelerationStructureKHR)

 #else

 #define __GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES_KHR_ACCELERATION_STRUCTURE(f)

-#endif // VK_KHR_acceleration_structure
+#endif  // VK_KHR_acceleration_structure

 #ifdef VK_USE_PLATFORM_FUCHSIA

-#define __GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES_FUCHSIA(f) \
-    f(VkBufferCollectionFUCHSIA)
+#define __GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES_FUCHSIA(f) f(VkBufferCollectionFUCHSIA)

 #else

@@ -97,77 +95,77 @@ namespace vk {

#endif // VK_USE_PLATFORM_FUCHSIA

#define GOLDFISH_VK_LIST_TRIVIAL_NON_DISPATCHABLE_HANDLE_TYPES(f) \
    f(VkBufferView) \
    f(VkImageView) \
    f(VkShaderModule) \
    f(VkPipeline) \
    f(VkPipelineCache) \
    f(VkPipelineLayout) \
    f(VkRenderPass) \
    f(VkFramebuffer) \
    f(VkEvent) \
    f(VkQueryPool) \
    f(VkSamplerYcbcrConversion) \
    f(VkSurfaceKHR) \
    f(VkSwapchainKHR) \
    f(VkDisplayKHR) \
    f(VkDisplayModeKHR) \
    f(VkValidationCacheEXT) \
    f(VkDebugReportCallbackEXT) \
    f(VkDebugUtilsMessengerEXT) \
    f(VkMicromapEXT) \
    __GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES_NVX_BINARY_IMPORT(f) \
    __GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES_NVX_DEVICE_GENERATED_COMMANDS(f) \
    __GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES_NV_DEVICE_GENERATED_COMMANDS(f) \
    __GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES_NV_RAY_TRACING(f) \
    __GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES_KHR_ACCELERATION_STRUCTURE(f)

#define GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES(f) \
    f(VkDeviceMemory) \
    f(VkBuffer) \
    f(VkImage) \
    f(VkSemaphore) \
    f(VkDescriptorUpdateTemplate) \
    f(VkFence) \
    f(VkDescriptorPool) \
    f(VkDescriptorSet) \
    f(VkDescriptorSetLayout) \
    f(VkCommandPool) \
    f(VkSampler) \
    __GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES_FUCHSIA(f) \
    GOLDFISH_VK_LIST_TRIVIAL_NON_DISPATCHABLE_HANDLE_TYPES(f)

#define GOLDFISH_VK_LIST_HANDLE_TYPES(f) \
    GOLDFISH_VK_LIST_DISPATCHABLE_HANDLE_TYPES(f) \
    GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES(f)

#define GOLDFISH_VK_LIST_TRIVIAL_HANDLE_TYPES(f) \
    GOLDFISH_VK_LIST_TRIVIAL_DISPATCHABLE_HANDLE_TYPES(f) \
    GOLDFISH_VK_LIST_TRIVIAL_NON_DISPATCHABLE_HANDLE_TYPES(f)

#define GOLDFISH_VK_LIST_AUTODEFINED_STRUCT_DISPATCHABLE_HANDLE_TYPES(f) \
    f(VkInstance) \
    f(VkDevice) \
    f(VkQueue) \
    GOLDFISH_VK_LIST_TRIVIAL_DISPATCHABLE_HANDLE_TYPES(f)

#define GOLDFISH_VK_LIST_AUTODEFINED_STRUCT_NON_DISPATCHABLE_HANDLE_TYPES(f) \
    f(VkDeviceMemory) \
    f(VkBuffer) \
    f(VkImage) \
    f(VkSemaphore) \
    f(VkFence) \
    f(VkDescriptorUpdateTemplate) \
    f(VkCommandPool) \
    f(VkSampler) \
    __GOLDFISH_VK_LIST_NON_DISPATCHABLE_HANDLE_TYPES_FUCHSIA(f) \
    GOLDFISH_VK_LIST_TRIVIAL_NON_DISPATCHABLE_HANDLE_TYPES(f)

#define GOLDFISH_VK_LIST_MANUAL_STRUCT_NON_DISPATCHABLE_HANDLE_TYPES(f) \
    f(VkDescriptorPool) \
    f(VkDescriptorSetLayout) \
    f(VkDescriptorSet)

} // namespace vk
} // namespace gfxstream

@@ -16,16 +16,14 @@
namespace gfxstream {
namespace vk {

VulkanStreamGuest::VulkanStreamGuest(gfxstream::guest::IOStream* stream) : mStream(stream) {
    unsetHandleMapping();
    mFeatureBits = ResourceTracker::get()->getStreamFeatures();
}

VulkanStreamGuest::~VulkanStreamGuest() = default;

bool VulkanStreamGuest::valid() { return true; }

void VulkanStreamGuest::alloc(void** ptrAddr, size_t bytes) {
    if (!bytes) {

@@ -56,7 +54,7 @@ void VulkanStreamGuest::loadStringArrayInPlace(char*** forOutput) {

    alloc((void**)forOutput, count * sizeof(char*));

    char** stringsForOutput = *forOutput;

    for (size_t i = 0; i < count; i++) {
        loadStringInPlace(stringsForOutput + i);

@@ -79,8 +77,9 @@ void VulkanStreamGuest::loadStringInPlaceWithStreamPtr(char** forOutput, uint8_t
    }
}

void VulkanStreamGuest::loadStringArrayInPlaceWithStreamPtr(char*** forOutput,
                                                            uint8_t** streamPtr) {
    uint32_t count;
    memcpy(&count, *streamPtr, sizeof(uint32_t));
    *streamPtr += sizeof(uint32_t);
    gfxstream::guest::Stream::fromBe32((uint8_t*)&count);

@@ -91,15 +90,14 @@ void VulkanStreamGuest::loadStringArrayInPlaceWithStreamPtr(char*** forOutput, u

    alloc((void**)forOutput, count * sizeof(char*));

    char** stringsForOutput = *forOutput;

    for (size_t i = 0; i < count; i++) {
        loadStringInPlaceWithStreamPtr(stringsForOutput + i, streamPtr);
    }
}

ssize_t VulkanStreamGuest::read(void* buffer, size_t size) {
    if (!mStream->readback(buffer, size)) {
        ALOGE("FATAL: Could not read back %zu bytes", size);
        abort();

@@ -107,7 +105,7 @@ ssize_t VulkanStreamGuest::read(void *buffer, size_t size) {
    return size;
}

ssize_t VulkanStreamGuest::write(const void* buffer, size_t size) {
    uint8_t* streamBuf = (uint8_t*)mStream->alloc(size);
    memcpy(streamBuf, buffer, size);
    return size;

@@ -117,44 +115,30 @@ void VulkanStreamGuest::writeLarge(const void* buffer, size_t size) {
    mStream->writeFullyAsync(buffer, size);
}

void VulkanStreamGuest::clearPool() { mPool.freeAll(); }

void VulkanStreamGuest::setHandleMapping(VulkanHandleMapping* mapping) {
    mCurrentHandleMapping = mapping;
}

void VulkanStreamGuest::unsetHandleMapping() { mCurrentHandleMapping = &mDefaultHandleMapping; }

VulkanHandleMapping* VulkanStreamGuest::handleMapping() const { return mCurrentHandleMapping; }

void VulkanStreamGuest::flush() {
    AEMU_SCOPED_TRACE("VulkanStreamGuest device write");
    mStream->flush();
}

uint32_t VulkanStreamGuest::getFeatureBits() const { return mFeatureBits; }

void VulkanStreamGuest::incStreamRef() { mStream->incRef(); }

bool VulkanStreamGuest::decStreamRef() { return mStream->decRef(); }

uint8_t* VulkanStreamGuest::reserve(size_t size) { return (uint8_t*)mStream->alloc(size); }

VulkanCountingStream::VulkanCountingStream() : VulkanStreamGuest(nullptr) {}
VulkanCountingStream::~VulkanCountingStream() = default;

ssize_t VulkanCountingStream::read(void*, size_t size) {
@@ -13,30 +13,26 @@
// limitations under the License.
#pragma once

#include <inttypes.h>
#include <log/log.h>

#include <memory>
#include <vector>

#include "ResourceTracker.h"
#include "VulkanHandleMapping.h"
#include "aemu/base/BumpPool.h"
#include "aemu/base/Tracing.h"
#include "aemu/base/files/Stream.h"
#include "aemu/base/files/StreamSerializing.h"
#include "gfxstream/guest/IOStream.h"
#include "goldfish_vk_private_defs.h"

namespace gfxstream {
namespace vk {

class VulkanStreamGuest : public gfxstream::guest::Stream {
   public:
    VulkanStreamGuest(gfxstream::guest::IOStream* stream);
    ~VulkanStreamGuest();

@@ -55,8 +51,8 @@ public:
    void loadStringInPlaceWithStreamPtr(char** forOutput, uint8_t** streamPtr);
    void loadStringArrayInPlaceWithStreamPtr(char*** forOutput, uint8_t** streamPtr);

    ssize_t read(void* buffer, size_t size) override;
    ssize_t write(const void* buffer, size_t size) override;

    void writeLarge(const void* buffer, size_t size);

@@ -75,7 +71,8 @@ public:
    bool decStreamRef();

    uint8_t* reserve(size_t size);

   private:
    gfxstream::guest::BumpPool mPool;
    std::vector<uint8_t> mWriteBuffer;
    gfxstream::guest::IOStream* mStream = nullptr;

@@ -85,18 +82,19 @@ private:
};

class VulkanCountingStream : public VulkanStreamGuest {
   public:
    VulkanCountingStream();
    ~VulkanCountingStream();

    ssize_t read(void* buffer, size_t size) override;
    ssize_t write(const void* buffer, size_t size) override;

    size_t bytesWritten() const { return m_written; }
    size_t bytesRead() const { return m_read; }

    void rewind();

   private:
    size_t m_written = 0;
    size_t m_read = 0;
};
src/gfxstream/guest/vulkan_enc/gfxstream_vk_private.cpp (new file, 53 lines)
@@ -0,0 +1,53 @@
// Copyright (C) 2023 The Android Open Source Project
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "gfxstream_vk_private.h"

#include "vk_sync_dummy.h"

static bool isNoopSemaphore(gfxstream_vk_semaphore* semaphore) {
    /* Under the assumption that Mesa VK runtime queue submission is used, the WSI
     * flow sets this temporary state to a dummy sync type (when no explicit dma-buf
     * synchronization is available). In the gfxstream case, ignore the semaphore
     * when this happens; synchronization will be done on the host.
     */
    return (semaphore && semaphore->vk.temporary &&
            vk_sync_type_is_dummy(semaphore->vk.temporary->type));
}

std::vector<VkSemaphore> transformVkSemaphoreList(const VkSemaphore* pSemaphores,
                                                  uint32_t semaphoreCount) {
    std::vector<VkSemaphore> outSemaphores;
    for (uint32_t j = 0; j < semaphoreCount; ++j) {
        VK_FROM_HANDLE(gfxstream_vk_semaphore, gfxstream_semaphore, pSemaphores[j]);
        if (!isNoopSemaphore(gfxstream_semaphore)) {
            outSemaphores.push_back(gfxstream_semaphore->internal_object);
        }
    }
    return outSemaphores;
}

std::vector<VkSemaphoreSubmitInfo> transformVkSemaphoreSubmitInfoList(
    const VkSemaphoreSubmitInfo* pSemaphoreSubmitInfos, uint32_t semaphoreSubmitInfoCount) {
    std::vector<VkSemaphoreSubmitInfo> outSemaphoreSubmitInfo;
    for (uint32_t j = 0; j < semaphoreSubmitInfoCount; ++j) {
        VkSemaphoreSubmitInfo outInfo = pSemaphoreSubmitInfos[j];
        VK_FROM_HANDLE(gfxstream_vk_semaphore, gfxstream_semaphore, outInfo.semaphore);
        if (!isNoopSemaphore(gfxstream_semaphore)) {
            outInfo.semaphore = gfxstream_semaphore->internal_object;
            outSemaphoreSubmitInfo.push_back(outInfo);
        }
    }
    return outSemaphoreSubmitInfo;
}
src/gfxstream/guest/vulkan_enc/gfxstream_vk_private.h (new file, 246 lines)
@@ -0,0 +1,246 @@
/*
 * Copyright © 2023 Google Inc.
 *
 * derived from panvk_private.h driver which is:
 * Copyright © 2021 Collabora Ltd.
 * Copyright © 2016 Red Hat.
 * Copyright © 2016 Bas Nieuwenhuizen
 * Copyright © 2015 Intel Corporation
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice (including the next
 * paragraph) shall be included in all copies or substantial portions of the
 * Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

#ifndef GFXSTREAM_VK_PRIVATE_H
#define GFXSTREAM_VK_PRIVATE_H

#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <vulkan/vk_icd.h>
#include <vulkan/vulkan.h>

#include <vector>

#include "gfxstream_vk_entrypoints.h"
#include "vk_alloc.h"
#include "vk_buffer.h"
#include "vk_buffer_view.h"
#include "vk_command_buffer.h"
#include "vk_command_pool.h"
#include "vk_descriptor_update_template.h"
#include "vk_device.h"
#include "vk_device_memory.h"
#include "vk_extensions.h"
#include "vk_fence.h"
#include "vk_image.h"
#include "vk_instance.h"
#include "vk_log.h"
#include "vk_object.h"
#include "vk_physical_device.h"
#include "vk_query_pool.h"
#include "vk_queue.h"
#include "vk_semaphore.h"
#include "vulkan/wsi/wsi_common.h"

struct gfxstream_vk_instance {
    struct vk_instance vk;
    uint32_t api_version;
    VkInstance internal_object;
};

struct gfxstream_vk_physical_device {
    struct vk_physical_device vk;

    struct wsi_device wsi_device;
    const struct vk_sync_type* sync_types[2];
    struct gfxstream_vk_instance* instance;
    VkPhysicalDevice internal_object;
};

struct gfxstream_vk_device {
    struct vk_device vk;

    struct vk_device_dispatch_table cmd_dispatch;
    struct gfxstream_vk_physical_device* physical_device;
    VkDevice internal_object;
};

struct gfxstream_vk_queue {
    struct vk_queue vk;
    struct gfxstream_vk_device* device;
    VkQueue internal_object;
};

struct gfxstream_vk_pipeline_cache {
    struct vk_object_base base;
    VkPipelineCache internal_object;
};

struct gfxstream_vk_device_memory {
    struct vk_device_memory vk;
    VkDeviceMemory internal_object;
};

struct gfxstream_vk_descriptor_set_layout {
    struct vk_object_base base;
    VkDescriptorSetLayout internal_object;
};

struct gfxstream_vk_pipeline_layout {
    struct vk_object_base base;
    VkPipelineLayout internal_object;
};

struct gfxstream_vk_descriptor_pool {
    struct vk_object_base base;
    VkDescriptorPool internal_object;
};

struct gfxstream_vk_buffer {
    struct vk_buffer vk;
    VkBuffer internal_object;
};

struct gfxstream_vk_command_pool {
    struct vk_command_pool vk;
    VkCommandPool internal_object;
};

struct gfxstream_vk_command_buffer {
    struct vk_command_buffer vk;
    VkCommandBuffer internal_object;
};

struct gfxstream_vk_event {
    struct vk_object_base base;
    VkEvent internal_object;
};

struct gfxstream_vk_pipeline {
    struct vk_object_base base;
    VkPipeline internal_object;
};

struct gfxstream_vk_image {
    struct vk_image vk;
    VkImage internal_object;
};

struct gfxstream_vk_image_view {
    struct vk_image_view vk;
    VkImageView internal_object;
};

struct gfxstream_vk_buffer_view {
    struct vk_buffer_view vk;
    VkBufferView internal_object;
};

struct gfxstream_vk_framebuffer {
    struct vk_object_base base;
    VkFramebuffer internal_object;
};

struct gfxstream_vk_render_pass {
    struct vk_object_base base;
    VkRenderPass internal_object;
};

struct gfxstream_vk_fence {
    struct vk_fence vk;
    VkFence internal_object;
};

struct gfxstream_vk_semaphore {
    struct vk_semaphore vk;
    VkSemaphore internal_object;
};

struct gfxstream_vk_query_pool {
    struct vk_query_pool vk;
    VkQueryPool internal_object;
};

struct gfxstream_vk_shader_module {
    struct vk_object_base base;
    VkShaderModule internal_object;
};

struct gfxstream_vk_descriptor_update_template {
    struct vk_object_base base;
    VkDescriptorUpdateTemplate internal_object;
};

VK_DEFINE_HANDLE_CASTS(gfxstream_vk_command_buffer, vk.base, VkCommandBuffer,
                       VK_OBJECT_TYPE_COMMAND_BUFFER)
VK_DEFINE_HANDLE_CASTS(gfxstream_vk_device, vk.base, VkDevice, VK_OBJECT_TYPE_DEVICE)
VK_DEFINE_HANDLE_CASTS(gfxstream_vk_instance, vk.base, VkInstance, VK_OBJECT_TYPE_INSTANCE)
VK_DEFINE_HANDLE_CASTS(gfxstream_vk_physical_device, vk.base, VkPhysicalDevice,
                       VK_OBJECT_TYPE_PHYSICAL_DEVICE)
VK_DEFINE_HANDLE_CASTS(gfxstream_vk_queue, vk.base, VkQueue, VK_OBJECT_TYPE_QUEUE)

VK_DEFINE_NONDISP_HANDLE_CASTS(gfxstream_vk_command_pool, vk.base, VkCommandPool,
                               VK_OBJECT_TYPE_COMMAND_POOL)
VK_DEFINE_NONDISP_HANDLE_CASTS(gfxstream_vk_buffer, vk.base, VkBuffer, VK_OBJECT_TYPE_BUFFER)
VK_DEFINE_NONDISP_HANDLE_CASTS(gfxstream_vk_buffer_view, vk.base, VkBufferView,
                               VK_OBJECT_TYPE_BUFFER_VIEW)
VK_DEFINE_NONDISP_HANDLE_CASTS(gfxstream_vk_descriptor_pool, base, VkDescriptorPool,
                               VK_OBJECT_TYPE_DESCRIPTOR_POOL)
VK_DEFINE_NONDISP_HANDLE_CASTS(gfxstream_vk_descriptor_set_layout, base, VkDescriptorSetLayout,
                               VK_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT)
VK_DEFINE_NONDISP_HANDLE_CASTS(gfxstream_vk_device_memory, vk.base, VkDeviceMemory,
                               VK_OBJECT_TYPE_DEVICE_MEMORY)
VK_DEFINE_NONDISP_HANDLE_CASTS(gfxstream_vk_event, base, VkEvent, VK_OBJECT_TYPE_EVENT)
VK_DEFINE_NONDISP_HANDLE_CASTS(gfxstream_vk_framebuffer, base, VkFramebuffer,
                               VK_OBJECT_TYPE_FRAMEBUFFER)
VK_DEFINE_NONDISP_HANDLE_CASTS(gfxstream_vk_image, vk.base, VkImage, VK_OBJECT_TYPE_IMAGE)
VK_DEFINE_NONDISP_HANDLE_CASTS(gfxstream_vk_image_view, vk.base, VkImageView,
                               VK_OBJECT_TYPE_IMAGE_VIEW)
VK_DEFINE_NONDISP_HANDLE_CASTS(gfxstream_vk_pipeline_cache, base, VkPipelineCache,
                               VK_OBJECT_TYPE_PIPELINE_CACHE)
VK_DEFINE_NONDISP_HANDLE_CASTS(gfxstream_vk_pipeline, base, VkPipeline, VK_OBJECT_TYPE_PIPELINE)
VK_DEFINE_NONDISP_HANDLE_CASTS(gfxstream_vk_pipeline_layout, base, VkPipelineLayout,
                               VK_OBJECT_TYPE_PIPELINE_LAYOUT)
VK_DEFINE_NONDISP_HANDLE_CASTS(gfxstream_vk_render_pass, base, VkRenderPass,
                               VK_OBJECT_TYPE_RENDER_PASS)
VK_DEFINE_NONDISP_HANDLE_CASTS(gfxstream_vk_fence, vk.base, VkFence, VK_OBJECT_TYPE_FENCE)
VK_DEFINE_NONDISP_HANDLE_CASTS(gfxstream_vk_semaphore, vk.base, VkSemaphore,
                               VK_OBJECT_TYPE_SEMAPHORE)
VK_DEFINE_NONDISP_HANDLE_CASTS(gfxstream_vk_query_pool, vk.base, VkQueryPool,
                               VK_OBJECT_TYPE_QUERY_POOL)
VK_DEFINE_NONDISP_HANDLE_CASTS(gfxstream_vk_shader_module, base, VkShaderModule,
                               VK_OBJECT_TYPE_SHADER_MODULE)
VK_DEFINE_NONDISP_HANDLE_CASTS(gfxstream_vk_descriptor_update_template, base,
                               VkDescriptorUpdateTemplate,
                               VK_OBJECT_TYPE_DESCRIPTOR_UPDATE_TEMPLATE)

VkResult gfxstream_vk_wsi_init(struct gfxstream_vk_physical_device* physical_device);

void gfxstream_vk_wsi_finish(struct gfxstream_vk_physical_device* physical_device);

std::vector<VkSemaphore> transformVkSemaphoreList(const VkSemaphore* pSemaphores,
                                                  uint32_t semaphoreCount);

std::vector<VkSemaphoreSubmitInfo> transformVkSemaphoreSubmitInfoList(
    const VkSemaphoreSubmitInfo* pSemaphoreSubmitInfos, uint32_t semaphoreSubmitInfoCount);

#endif /* GFXSTREAM_VK_PRIVATE_H */
@@ -23,16 +23,16 @@

#ifdef __cplusplus

template <class T, typename F>
bool arrayany(const T* arr, uint32_t begin, uint32_t end, const F& func) {
    const T* e = arr + end;
    return std::find_if(arr + begin, e, func) != e;
}

#define DEFINE_ALIAS_FUNCTION(ORIGINAL_FN, ALIAS_FN)                                             \
    template <typename... Args>                                                                  \
    inline auto ALIAS_FN(Args&&... args) -> decltype(ORIGINAL_FN(std::forward<Args>(args)...)) { \
        return ORIGINAL_FN(std::forward<Args>(args)...);                                         \
    }

#endif
@@ -1,6 +1,17 @@
# Copyright 2022 Android Open Source Project
# SPDX-License-Identifier: MIT

gfxstream_vk_entrypoints = custom_target(
  'gfxstream_vk_entrypoints',
  input : [vk_entrypoints_gen, vk_api_xml],
  output : ['gfxstream_vk_entrypoints.h', 'gfxstream_vk_entrypoints.c'],
  command : [
    prog_python, '@INPUT0@', '--xml', '@INPUT1@', '--proto', '--weak',
    '--out-h', '@OUTPUT0@', '--out-c', '@OUTPUT1@', '--prefix', 'gfxstream_vk',
    '--beta', with_vulkan_beta.to_string()
  ],
)

files_lib_vulkan_enc = files(
  'CommandBufferStagingStream.cpp',
  'DescriptorSetVirtualization.cpp',

@@ -19,16 +30,5 @@ files_lib_vulkan_enc = files(
  'goldfish_vk_marshaling_guest.cpp',
  'goldfish_vk_reserved_marshaling_guest.cpp',
  'goldfish_vk_transform_guest.cpp',
  'gfxstream_vk_private.cpp',
)
@ -35,8 +35,8 @@ enum {
|
|||
HAL_PIXEL_FORMAT_YV12 = 842094169,
|
||||
};
|
||||
#endif
|
||||
#include <vulkan/vulkan.h>
|
||||
#include <vndk/hardware_buffer.h>
|
||||
#include <vulkan/vulkan.h>
|
||||
|
||||
namespace gfxstream {
|
||||
namespace vk {
|
||||
|
|
@ -55,165 +55,149 @@ namespace vk {
|
|||
// formats such as AHARDWAREBUFFER_FORMAT_Y8Cb8Cr8_420 could be
|
||||
// either VK_FORMAT_G8_B8_R8_3PLANE_420_UNORM or
|
||||
// VK_FORMAT_G8_B8R8_2PLANE_420_UNORM.
|
||||
static inline VkFormat
|
||||
vk_format_from_android(unsigned android_format)
|
||||
{
|
||||
switch (android_format) {
|
||||
case AHARDWAREBUFFER_FORMAT_R8G8B8A8_UNORM:
|
||||
return VK_FORMAT_R8G8B8A8_UNORM;
|
||||
case AHARDWAREBUFFER_FORMAT_R8G8B8X8_UNORM:
|
||||
return VK_FORMAT_R8G8B8A8_UNORM;
|
||||
case AHARDWAREBUFFER_FORMAT_R8G8B8_UNORM:
|
||||
return VK_FORMAT_R8G8B8_UNORM;
|
||||
case AHARDWAREBUFFER_FORMAT_R5G6B5_UNORM:
|
||||
return VK_FORMAT_R5G6B5_UNORM_PACK16;
|
||||
case AHARDWAREBUFFER_FORMAT_R16G16B16A16_FLOAT:
|
||||
return VK_FORMAT_R16G16B16A16_SFLOAT;
|
||||
case AHARDWAREBUFFER_FORMAT_R10G10B10A2_UNORM:
|
||||
return VK_FORMAT_A2B10G10R10_UNORM_PACK32;
|
||||
case HAL_PIXEL_FORMAT_NV12_Y_TILED_INTEL:
|
||||
case AHARDWAREBUFFER_FORMAT_Y8Cb8Cr8_420:
|
||||
return VK_FORMAT_G8_B8R8_2PLANE_420_UNORM;
|
||||
static inline VkFormat vk_format_from_android(unsigned android_format) {
|
||||
switch (android_format) {
|
||||
case AHARDWAREBUFFER_FORMAT_R8G8B8A8_UNORM:
|
||||
return VK_FORMAT_R8G8B8A8_UNORM;
|
||||
case AHARDWAREBUFFER_FORMAT_R8G8B8X8_UNORM:
|
||||
return VK_FORMAT_R8G8B8A8_UNORM;
|
||||
case AHARDWAREBUFFER_FORMAT_R8G8B8_UNORM:
|
||||
return VK_FORMAT_R8G8B8_UNORM;
|
||||
case AHARDWAREBUFFER_FORMAT_R5G6B5_UNORM:
|
||||
return VK_FORMAT_R5G6B5_UNORM_PACK16;
|
||||
case AHARDWAREBUFFER_FORMAT_R16G16B16A16_FLOAT:
|
||||
return VK_FORMAT_R16G16B16A16_SFLOAT;
|
||||
case AHARDWAREBUFFER_FORMAT_R10G10B10A2_UNORM:
|
||||
return VK_FORMAT_A2B10G10R10_UNORM_PACK32;
|
||||
case HAL_PIXEL_FORMAT_NV12_Y_TILED_INTEL:
|
||||
case AHARDWAREBUFFER_FORMAT_Y8Cb8Cr8_420:
|
||||
return VK_FORMAT_G8_B8R8_2PLANE_420_UNORM;
|
||||
#if __ANDROID_API__ >= 30
|
||||
case AHARDWAREBUFFER_FORMAT_YCbCr_P010:
|
||||
return VK_FORMAT_G10X6_B10X6R10X6_2PLANE_420_UNORM_3PACK16;
|
||||
case AHARDWAREBUFFER_FORMAT_YCbCr_P010:
|
||||
return VK_FORMAT_G10X6_B10X6R10X6_2PLANE_420_UNORM_3PACK16;
|
||||
#endif
|
||||
#ifdef VK_USE_PLATFORM_ANDROID_KHR
|
||||
case HAL_PIXEL_FORMAT_YV12:
|
||||
case OMX_COLOR_FormatYUV420Planar:
|
||||
return VK_FORMAT_G8_B8_R8_3PLANE_420_UNORM;
|
||||
case AHARDWAREBUFFER_FORMAT_BLOB:
|
||||
case HAL_PIXEL_FORMAT_YV12:
|
||||
case OMX_COLOR_FormatYUV420Planar:
|
||||
return VK_FORMAT_G8_B8_R8_3PLANE_420_UNORM;
|
||||
case AHARDWAREBUFFER_FORMAT_BLOB:
|
||||
#endif
|
||||
default:
|
||||
return VK_FORMAT_UNDEFINED;
|
||||
}
|
||||
default:
|
||||
return VK_FORMAT_UNDEFINED;
|
||||
}
|
||||
}
|
||||
|
||||
static inline unsigned
|
||||
android_format_from_vk(VkFormat vk_format)
|
||||
{
|
||||
switch (vk_format) {
|
||||
case VK_FORMAT_R8G8B8A8_UNORM:
|
||||
return AHARDWAREBUFFER_FORMAT_R8G8B8A8_UNORM;
|
||||
case VK_FORMAT_R8G8B8_UNORM:
|
||||
return AHARDWAREBUFFER_FORMAT_R8G8B8_UNORM;
|
||||
case VK_FORMAT_R5G6B5_UNORM_PACK16:
|
||||
return AHARDWAREBUFFER_FORMAT_R5G6B5_UNORM;
|
||||
case VK_FORMAT_R16G16B16A16_SFLOAT:
|
||||
return AHARDWAREBUFFER_FORMAT_R16G16B16A16_FLOAT;
|
||||
case VK_FORMAT_A2B10G10R10_UNORM_PACK32:
|
||||
return AHARDWAREBUFFER_FORMAT_R10G10B10A2_UNORM;
|
||||
case VK_FORMAT_G8_B8R8_2PLANE_420_UNORM:
|
||||
return HAL_PIXEL_FORMAT_NV12_Y_TILED_INTEL;
|
||||
case VK_FORMAT_G8_B8_R8_3PLANE_420_UNORM:
|
||||
return HAL_PIXEL_FORMAT_YV12;
|
||||
default:
|
||||
return AHARDWAREBUFFER_FORMAT_BLOB;
|
||||
}
|
||||
static inline unsigned android_format_from_vk(VkFormat vk_format) {
|
||||
switch (vk_format) {
|
||||
case VK_FORMAT_R8G8B8A8_UNORM:
|
||||
return AHARDWAREBUFFER_FORMAT_R8G8B8A8_UNORM;
|
||||
case VK_FORMAT_R8G8B8_UNORM:
|
||||
return AHARDWAREBUFFER_FORMAT_R8G8B8_UNORM;
|
||||
case VK_FORMAT_R5G6B5_UNORM_PACK16:
|
||||
return AHARDWAREBUFFER_FORMAT_R5G6B5_UNORM;
|
||||
case VK_FORMAT_R16G16B16A16_SFLOAT:
|
||||
return AHARDWAREBUFFER_FORMAT_R16G16B16A16_FLOAT;
|
||||
case VK_FORMAT_A2B10G10R10_UNORM_PACK32:
|
||||
return AHARDWAREBUFFER_FORMAT_R10G10B10A2_UNORM;
|
||||
case VK_FORMAT_G8_B8R8_2PLANE_420_UNORM:
|
||||
return HAL_PIXEL_FORMAT_NV12_Y_TILED_INTEL;
|
||||
case VK_FORMAT_G8_B8_R8_3PLANE_420_UNORM:
|
||||
return HAL_PIXEL_FORMAT_YV12;
|
||||
default:
|
||||
return AHARDWAREBUFFER_FORMAT_BLOB;
|
||||
}
|
||||
}
|
||||

static inline bool android_format_is_yuv(unsigned android_format) {
    switch (android_format) {
        case AHARDWAREBUFFER_FORMAT_BLOB:
        case AHARDWAREBUFFER_FORMAT_R8G8B8A8_UNORM:
        case AHARDWAREBUFFER_FORMAT_R8G8B8X8_UNORM:
        case AHARDWAREBUFFER_FORMAT_R8G8B8_UNORM:
        case AHARDWAREBUFFER_FORMAT_R5G6B5_UNORM:
        case AHARDWAREBUFFER_FORMAT_R16G16B16A16_FLOAT:
        case AHARDWAREBUFFER_FORMAT_R10G10B10A2_UNORM:
        case AHARDWAREBUFFER_FORMAT_D16_UNORM:
        case AHARDWAREBUFFER_FORMAT_D24_UNORM:
        case AHARDWAREBUFFER_FORMAT_D24_UNORM_S8_UINT:
        case AHARDWAREBUFFER_FORMAT_D32_FLOAT:
        case AHARDWAREBUFFER_FORMAT_D32_FLOAT_S8_UINT:
        case AHARDWAREBUFFER_FORMAT_S8_UINT:
            return false;
        case HAL_PIXEL_FORMAT_NV12_Y_TILED_INTEL:
        case OMX_COLOR_FormatYUV420Planar:
        case HAL_PIXEL_FORMAT_YV12:
#if __ANDROID_API__ >= 30
        case AHARDWAREBUFFER_FORMAT_YCbCr_P010:
#endif
        case AHARDWAREBUFFER_FORMAT_Y8Cb8Cr8_420:
            return true;
        default:
            ALOGE("%s: unhandled format: %d", __FUNCTION__, android_format);
            return false;
    }
}

static inline VkImageAspectFlags vk_format_aspects(VkFormat format) {
    switch (format) {
        case VK_FORMAT_UNDEFINED:
            return 0;

        case VK_FORMAT_S8_UINT:
            return VK_IMAGE_ASPECT_STENCIL_BIT;

        case VK_FORMAT_D16_UNORM_S8_UINT:
        case VK_FORMAT_D24_UNORM_S8_UINT:
        case VK_FORMAT_D32_SFLOAT_S8_UINT:
            return VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT;

        case VK_FORMAT_D16_UNORM:
        case VK_FORMAT_X8_D24_UNORM_PACK32:
        case VK_FORMAT_D32_SFLOAT:
            return VK_IMAGE_ASPECT_DEPTH_BIT;

        case VK_FORMAT_G8_B8_R8_3PLANE_420_UNORM:
        case VK_FORMAT_G8_B8_R8_3PLANE_422_UNORM:
        case VK_FORMAT_G8_B8_R8_3PLANE_444_UNORM:
        case VK_FORMAT_G10X6_B10X6_R10X6_3PLANE_420_UNORM_3PACK16:
        case VK_FORMAT_G10X6_B10X6_R10X6_3PLANE_422_UNORM_3PACK16:
        case VK_FORMAT_G10X6_B10X6_R10X6_3PLANE_444_UNORM_3PACK16:
        case VK_FORMAT_G12X4_B12X4_R12X4_3PLANE_420_UNORM_3PACK16:
        case VK_FORMAT_G12X4_B12X4_R12X4_3PLANE_422_UNORM_3PACK16:
        case VK_FORMAT_G12X4_B12X4_R12X4_3PLANE_444_UNORM_3PACK16:
        case VK_FORMAT_G16_B16_R16_3PLANE_420_UNORM:
        case VK_FORMAT_G16_B16_R16_3PLANE_422_UNORM:
        case VK_FORMAT_G16_B16_R16_3PLANE_444_UNORM:
            return (VK_IMAGE_ASPECT_PLANE_0_BIT | VK_IMAGE_ASPECT_PLANE_1_BIT |
                    VK_IMAGE_ASPECT_PLANE_2_BIT);

        case VK_FORMAT_G8_B8R8_2PLANE_420_UNORM:
        case VK_FORMAT_G8_B8R8_2PLANE_422_UNORM:
        case VK_FORMAT_G10X6_B10X6R10X6_2PLANE_420_UNORM_3PACK16:
        case VK_FORMAT_G10X6_B10X6R10X6_2PLANE_422_UNORM_3PACK16:
        case VK_FORMAT_G12X4_B12X4R12X4_2PLANE_420_UNORM_3PACK16:
        case VK_FORMAT_G12X4_B12X4R12X4_2PLANE_422_UNORM_3PACK16:
        case VK_FORMAT_G16_B16R16_2PLANE_420_UNORM:
        case VK_FORMAT_G16_B16R16_2PLANE_422_UNORM:
            return (VK_IMAGE_ASPECT_PLANE_0_BIT | VK_IMAGE_ASPECT_PLANE_1_BIT);

        default:
            return VK_IMAGE_ASPECT_COLOR_BIT;
    }
}

static inline bool vk_format_is_color(VkFormat format) {
    return vk_format_aspects(format) == VK_IMAGE_ASPECT_COLOR_BIT;
}

static inline bool vk_format_is_depth_or_stencil(VkFormat format) {
    const VkImageAspectFlags aspects = vk_format_aspects(format);
    return aspects & (VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT);
}

static inline bool vk_format_has_depth(VkFormat format) {
    const VkImageAspectFlags aspects = vk_format_aspects(format);
    return aspects & VK_IMAGE_ASPECT_DEPTH_BIT;
}

}  // namespace vk

@@ -18,13 +18,13 @@
#if VK_HEADER_VERSION < 76

typedef struct VkBaseOutStructure {
    VkStructureType sType;
    struct VkBaseOutStructure* pNext;
} VkBaseOutStructure;

typedef struct VkBaseInStructure {
    VkStructureType sType;
    const struct VkBaseInStructure* pNext;
} VkBaseInStructure;

#endif  // VK_HEADER_VERSION < 76

@@ -20,18 +20,25 @@
#include "vk_android_native_buffer_gfxstream.h"
#include "vulkan_gfxstream.h"

namespace gfxstream {
namespace vk {
namespace {  // anonymous

template <class T>
struct vk_get_vk_struct_id;

#define REGISTER_VK_STRUCT_ID(T, ID)              \
    template <>                                   \
    struct vk_get_vk_struct_id<T> {               \
        static constexpr VkStructureType id = ID; \
    }

#ifdef VK_USE_PLATFORM_ANDROID_KHR
REGISTER_VK_STRUCT_ID(VkAndroidHardwareBufferPropertiesANDROID,
                      VK_STRUCTURE_TYPE_ANDROID_HARDWARE_BUFFER_PROPERTIES_ANDROID);
REGISTER_VK_STRUCT_ID(VkAndroidHardwareBufferFormatPropertiesANDROID,
                      VK_STRUCTURE_TYPE_ANDROID_HARDWARE_BUFFER_FORMAT_PROPERTIES_ANDROID);
REGISTER_VK_STRUCT_ID(VkAndroidHardwareBufferUsageANDROID,
                      VK_STRUCTURE_TYPE_ANDROID_HARDWARE_BUFFER_USAGE_ANDROID);
#endif
REGISTER_VK_STRUCT_ID(VkBufferCreateInfo, VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO);
REGISTER_VK_STRUCT_ID(VkImageCreateInfo, VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO);

@@ -40,52 +47,79 @@ REGISTER_VK_STRUCT_ID(VkImageFormatProperties2, VK_STRUCTURE_TYPE_IMAGE_FORMAT_P
REGISTER_VK_STRUCT_ID(VkNativeBufferANDROID, VK_STRUCTURE_TYPE_NATIVE_BUFFER_ANDROID);
REGISTER_VK_STRUCT_ID(VkExternalFormatANDROID, VK_STRUCTURE_TYPE_EXTERNAL_FORMAT_ANDROID);
#endif
REGISTER_VK_STRUCT_ID(VkExternalMemoryBufferCreateInfo,
                      VK_STRUCTURE_TYPE_EXTERNAL_MEMORY_BUFFER_CREATE_INFO);
REGISTER_VK_STRUCT_ID(VkExternalMemoryImageCreateInfo,
                      VK_STRUCTURE_TYPE_EXTERNAL_MEMORY_IMAGE_CREATE_INFO);
REGISTER_VK_STRUCT_ID(VkMemoryAllocateInfo, VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO);
REGISTER_VK_STRUCT_ID(VkMemoryDedicatedAllocateInfo,
                      VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO);
REGISTER_VK_STRUCT_ID(VkMemoryDedicatedRequirements,
                      VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS);
#ifdef VK_USE_PLATFORM_ANDROID_KHR
REGISTER_VK_STRUCT_ID(VkImportAndroidHardwareBufferInfoANDROID,
                      VK_STRUCTURE_TYPE_IMPORT_ANDROID_HARDWARE_BUFFER_INFO_ANDROID);
#endif
REGISTER_VK_STRUCT_ID(VkImportMemoryFdInfoKHR, VK_STRUCTURE_TYPE_IMPORT_MEMORY_FD_INFO_KHR);
REGISTER_VK_STRUCT_ID(VkExportMemoryAllocateInfo, VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO);
REGISTER_VK_STRUCT_ID(VkMemoryRequirements2, VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2);
REGISTER_VK_STRUCT_ID(VkSemaphoreCreateInfo, VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO);
REGISTER_VK_STRUCT_ID(VkExportSemaphoreCreateInfoKHR,
                      VK_STRUCTURE_TYPE_EXPORT_SEMAPHORE_CREATE_INFO_KHR);
REGISTER_VK_STRUCT_ID(VkSamplerYcbcrConversionCreateInfo,
                      VK_STRUCTURE_TYPE_SAMPLER_YCBCR_CONVERSION_CREATE_INFO);
REGISTER_VK_STRUCT_ID(VkImportColorBufferGOOGLE, VK_STRUCTURE_TYPE_IMPORT_COLOR_BUFFER_GOOGLE);
REGISTER_VK_STRUCT_ID(VkImageViewCreateInfo, VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO);
#ifdef VK_USE_PLATFORM_FUCHSIA
REGISTER_VK_STRUCT_ID(VkImportMemoryBufferCollectionFUCHSIA,
                      VK_STRUCTURE_TYPE_IMPORT_MEMORY_BUFFER_COLLECTION_FUCHSIA);
REGISTER_VK_STRUCT_ID(VkImportMemoryZirconHandleInfoFUCHSIA,
                      VK_STRUCTURE_TYPE_IMPORT_MEMORY_ZIRCON_HANDLE_INFO_FUCHSIA);
REGISTER_VK_STRUCT_ID(VkBufferCollectionImageCreateInfoFUCHSIA,
                      VK_STRUCTURE_TYPE_BUFFER_COLLECTION_IMAGE_CREATE_INFO_FUCHSIA);
REGISTER_VK_STRUCT_ID(VkBufferCollectionBufferCreateInfoFUCHSIA,
                      VK_STRUCTURE_TYPE_BUFFER_COLLECTION_BUFFER_CREATE_INFO_FUCHSIA);
#endif  // VK_USE_PLATFORM_FUCHSIA
REGISTER_VK_STRUCT_ID(VkSamplerCreateInfo, VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO);
REGISTER_VK_STRUCT_ID(VkSamplerCustomBorderColorCreateInfoEXT,
                      VK_STRUCTURE_TYPE_SAMPLER_CUSTOM_BORDER_COLOR_CREATE_INFO_EXT);
REGISTER_VK_STRUCT_ID(VkSamplerYcbcrConversionInfo,
                      VK_STRUCTURE_TYPE_SAMPLER_YCBCR_CONVERSION_INFO);
REGISTER_VK_STRUCT_ID(VkFenceCreateInfo, VK_STRUCTURE_TYPE_FENCE_CREATE_INFO);
REGISTER_VK_STRUCT_ID(VkExportFenceCreateInfo, VK_STRUCTURE_TYPE_EXPORT_FENCE_CREATE_INFO);
REGISTER_VK_STRUCT_ID(VkImportBufferGOOGLE, VK_STRUCTURE_TYPE_IMPORT_BUFFER_GOOGLE);
REGISTER_VK_STRUCT_ID(VkCreateBlobGOOGLE, VK_STRUCTURE_TYPE_CREATE_BLOB_GOOGLE);
REGISTER_VK_STRUCT_ID(VkExternalImageFormatProperties,
                      VK_STRUCTURE_TYPE_EXTERNAL_IMAGE_FORMAT_PROPERTIES);
REGISTER_VK_STRUCT_ID(VkPhysicalDeviceImageFormatInfo2,
                      VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_IMAGE_FORMAT_INFO_2);
REGISTER_VK_STRUCT_ID(VkPhysicalDeviceExternalImageFormatInfo,
                      VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_EXTERNAL_IMAGE_FORMAT_INFO);
REGISTER_VK_STRUCT_ID(VkSemaphoreTypeCreateInfo, VK_STRUCTURE_TYPE_SEMAPHORE_TYPE_CREATE_INFO);
REGISTER_VK_STRUCT_ID(VkPhysicalDeviceFeatures2, VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2);
REGISTER_VK_STRUCT_ID(VkPhysicalDeviceProperties2, VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2);
REGISTER_VK_STRUCT_ID(VkPhysicalDeviceDeviceMemoryReportFeaturesEXT,
                      VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_DEVICE_MEMORY_REPORT_FEATURES_EXT);
REGISTER_VK_STRUCT_ID(VkMemoryAllocateFlagsInfo, VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_FLAGS_INFO);
REGISTER_VK_STRUCT_ID(VkMemoryOpaqueCaptureAddressAllocateInfo,
                      VK_STRUCTURE_TYPE_MEMORY_OPAQUE_CAPTURE_ADDRESS_ALLOCATE_INFO);
REGISTER_VK_STRUCT_ID(VkBindImageMemoryInfo, VK_STRUCTURE_TYPE_BIND_IMAGE_MEMORY_INFO);
REGISTER_VK_STRUCT_ID(VkBindImageMemorySwapchainInfoKHR,
                      VK_STRUCTURE_TYPE_BIND_IMAGE_MEMORY_SWAPCHAIN_INFO_KHR);
REGISTER_VK_STRUCT_ID(VkBufferOpaqueCaptureAddressCreateInfo,
                      VK_STRUCTURE_TYPE_BUFFER_OPAQUE_CAPTURE_ADDRESS_CREATE_INFO);
REGISTER_VK_STRUCT_ID(VkBufferDeviceAddressCreateInfoEXT,
                      VK_STRUCTURE_TYPE_BUFFER_DEVICE_ADDRESS_CREATE_INFO_EXT);
REGISTER_VK_STRUCT_ID(VkGraphicsPipelineCreateInfo,
                      VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO);
REGISTER_VK_STRUCT_ID(VkPipelineRenderingCreateInfo,
                      VK_STRUCTURE_TYPE_PIPELINE_RENDERING_CREATE_INFO);
REGISTER_VK_STRUCT_ID(VkPhysicalDeviceExternalSemaphoreInfo,
                      VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_EXTERNAL_SEMAPHORE_INFO);
REGISTER_VK_STRUCT_ID(VkRenderPassBeginInfo, VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO);
REGISTER_VK_STRUCT_ID(VkRenderPassAttachmentBeginInfo,
                      VK_STRUCTURE_TYPE_RENDER_PASS_ATTACHMENT_BEGIN_INFO);

#undef REGISTER_VK_STRUCT_ID

}  // namespace
}  // namespace vk
}  // namespace gfxstream

@@ -25,29 +25,29 @@

/* common inlines and macros for vulkan drivers */

#include <stdlib.h>
#include <vulkan/vulkan.h>

#include "vk_struct_id.h"

namespace gfxstream {
namespace vk {
namespace {  // anonymous

struct vk_struct_common {
    VkStructureType sType;
    struct vk_struct_common* pNext;
};

struct vk_struct_chain_iterator {
    vk_struct_common* value;
};

#define vk_foreach_struct(__iter, __start)                                               \
    for (struct vk_struct_common* __iter = (struct vk_struct_common*)(__start); __iter; \
         __iter = __iter->pNext)

#define vk_foreach_struct_const(__iter, __start)                                            \
    for (const struct vk_struct_common* __iter = (const struct vk_struct_common*)(__start); \
         __iter; __iter = __iter->pNext)

/**
 * A wrapper for a Vulkan output array. A Vulkan output array is one that

@@ -79,92 +79,84 @@ struct vk_struct_chain_iterator {
 * }
 */
struct __vk_outarray {
    /** May be null. */
    void* data;

    /**
     * Capacity, in number of elements. Capacity is unlimited (UINT32_MAX) if
     * data is null.
     */
    uint32_t cap;

    /**
     * Count of elements successfully written to the array. Every write is
     * considered successful if data is null.
     */
    uint32_t* filled_len;

    /**
     * Count of elements that would have been written to the array if its
     * capacity were sufficient. Vulkan functions often return VK_INCOMPLETE
     * when `*filled_len < wanted_len`.
     */
    uint32_t wanted_len;
};

static inline void __vk_outarray_init(struct __vk_outarray* a, void* data, uint32_t* len) {
    a->data = data;
    a->cap = *len;
    a->filled_len = len;
    *a->filled_len = 0;
    a->wanted_len = 0;

    if (a->data == NULL) a->cap = UINT32_MAX;
}

static inline VkResult __vk_outarray_status(const struct __vk_outarray* a) {
    if (*a->filled_len < a->wanted_len)
        return VK_INCOMPLETE;
    else
        return VK_SUCCESS;
}

static inline void* __vk_outarray_next(struct __vk_outarray* a, size_t elem_size) {
    void* p = NULL;

    a->wanted_len += 1;

    if (*a->filled_len >= a->cap) return NULL;

    if (a->data != NULL) p = ((uint8_t*)a->data) + (*a->filled_len) * elem_size;

    *a->filled_len += 1;

    return p;
}

#define vk_outarray(elem_t)        \
    struct {                       \
        struct __vk_outarray base; \
        elem_t meta[];             \
    }

#define vk_outarray_typeof_elem(a) __typeof__((a)->meta[0])
#define vk_outarray_sizeof_elem(a) sizeof((a)->meta[0])

#define vk_outarray_init(a, data, len) __vk_outarray_init(&(a)->base, (data), (len))

#define VK_OUTARRAY_MAKE(name, data, len)    \
    vk_outarray(__typeof__((data)[0])) name; \
    vk_outarray_init(&name, (data), (len))

#define VK_OUTARRAY_MAKE_TYPED(type, name, data, len) \
    vk_outarray(type) name;                           \
    vk_outarray_init(&name, (data), (len))

#define vk_outarray_status(a) __vk_outarray_status(&(a)->base)

#define vk_outarray_next(a) vk_outarray_next_typed(vk_outarray_typeof_elem(a), a)
#define vk_outarray_next_typed(type, a) \
    ((type*)__vk_outarray_next(&(a)->base, vk_outarray_sizeof_elem(a)))

/**
 * Append to a Vulkan output array.

@@ -186,31 +178,30 @@ __vk_outarray_next(struct __vk_outarray *a, size_t elem_size)
 * points to the newly appended element.
 */
#define vk_outarray_append(a, elem) \
    for (vk_outarray_typeof_elem(a)* elem = vk_outarray_next(a); elem != NULL; elem = NULL)

#define vk_outarray_append_typed(type, a, elem) \
    for (type* elem = vk_outarray_next_typed(type, a); elem != NULL; elem = NULL)

static inline void* __vk_find_struct(void* start, VkStructureType sType) {
    vk_foreach_struct(s, start) {
        if (s->sType == sType) return s;
    }

    return NULL;
}

template <class T, class H>
T* vk_find_struct(H* head) {
    (void)vk_get_vk_struct_id<H>::id;
    return static_cast<T*>(__vk_find_struct(static_cast<void*>(head), vk_get_vk_struct_id<T>::id));
}

template <class T, class H>
const T* vk_find_struct(const H* head) {
    (void)vk_get_vk_struct_id<H>::id;
    return static_cast<const T*>(__vk_find_struct(const_cast<void*>(static_cast<const void*>(head)),
                                                  vk_get_vk_struct_id<T>::id));
}

uint32_t vk_get_driver_version(void);

@@ -219,25 +210,25 @@ uint32_t vk_get_version_override(void);

#define VK_EXT_OFFSET (1000000000UL)
#define VK_ENUM_EXTENSION(__enum) \
    ((__enum) >= VK_EXT_OFFSET ? ((((__enum)-VK_EXT_OFFSET) / 1000UL) + 1) : 0)
#define VK_ENUM_OFFSET(__enum) ((__enum) >= VK_EXT_OFFSET ? ((__enum) % 1000) : (__enum))

template <class T>
T vk_make_orphan_copy(const T& vk_struct) {
    T copy = vk_struct;
    copy.pNext = NULL;
    return copy;
}

template <class T>
vk_struct_chain_iterator vk_make_chain_iterator(T* vk_struct) {
    (void)vk_get_vk_struct_id<T>::id;
    vk_struct_chain_iterator result = {reinterpret_cast<vk_struct_common*>(vk_struct)};
    return result;
}

template <class T>
void vk_append_struct(vk_struct_chain_iterator* i, T* vk_struct) {
    (void)vk_get_vk_struct_id<T>::id;

    vk_struct_common* p = i->value;

@@ -245,13 +236,12 @@ template <class T> void vk_append_struct(vk_struct_chain_iterator* i, T* vk_stru
        ::abort();
    }

    p->pNext = reinterpret_cast<vk_struct_common*>(vk_struct);
    vk_struct->pNext = NULL;

    *i = vk_make_chain_iterator(vk_struct);
}

}  // namespace
}  // namespace vk
}  // namespace gfxstream

#endif /* VK_UTIL_H */