We run into a lot of successive subpasses with the exact same framebuffer configuration; we now exploit this to avoid creating a new subpass and the overhead involved in doing so. This provides significant performance boosts in certain cases, due to the magnitude of difference in the number of subpasses created, while providing next to no benefit in others.
The check for the fence cycle being the same as the current cycle was inverted, checking the opposite of what it should have, which led to bugs.
The responsibility for synchronizing a texture and locking it is now on the `PresentationEngine` rather than the API user, as this allows more fine-grained locking and delays waiting until it is necessary.
As we require a relaxed version of the Vulkan render pass compatibility clause for caching multi-subpass render passes, we now use a quirk to determine if this is supported; it is on Nvidia/Adreno, while on AMD/Mali, where it isn't, we force single-subpass render passes.
We found that certain vendors such as Nvidia have a limitation on the global priority of a queue, and requesting `VK_QUEUE_GLOBAL_PRIORITY_HIGH_EXT` would result in `VK_ERROR_NOT_PERMITTED_EXT`. A quirk has been introduced to supply the maximum supported global priority, which is currently set on a per-vendor basis, to avoid future crashes.
Implements a cache for storing `VkPipeline` objects, which are fairly expensive to create; doing so on a per-frame basis was rather wasteful and consumed a significant part of frametime. It should be noted that this is **not** compliant with the Vulkan specification and **will** break unless the driver supports a relaxed version of the Vulkan specification's Render Pass Compatibility clause.
We can use inline push descriptors for descriptor writes rather than allocating a descriptor set for a one-time write and freeing it, which is rather inefficient, while an inline push descriptor generally ends up being a direct `memcpy` on the driver side designed for this use-case.
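As a rough illustration of the difference, here is a minimal sketch (using vulkan-hpp; the function name and parameters are hypothetical) of writing a single UBO binding through a push descriptor rather than allocating a transient descriptor set. It assumes `VK_KHR_push_descriptor` is enabled and that set #0 of the layout was created with `VK_DESCRIPTOR_SET_LAYOUT_CREATE_PUSH_DESCRIPTOR_BIT_KHR`:

```cpp
#include <vulkan/vulkan.hpp>

// Hypothetical sketch: pushing a single uniform-buffer binding inline instead of
// allocating and later freeing a one-shot VkDescriptorSet
void PushUniformBuffer(vk::CommandBuffer commandBuffer, vk::PipelineLayout layout,
                       vk::Buffer buffer, vk::DeviceSize offset, vk::DeviceSize size,
                       const vk::DispatchLoaderDynamic &dispatch) {
    vk::DescriptorBufferInfo bufferInfo{buffer, offset, size};
    vk::WriteDescriptorSet write{
        {},                                 // dstSet is ignored for push descriptors
        0,                                  // dstBinding
        0,                                  // dstArrayElement
        1,                                  // descriptorCount
        vk::DescriptorType::eUniformBuffer,
        nullptr,                            // pImageInfo
        &bufferInfo,                        // pBufferInfo
    };
    commandBuffer.pushDescriptorSetKHR(vk::PipelineBindPoint::eGraphics, layout, 0, write, dispatch);
}
```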
We want Skyline to have the most favorable GPU scheduling possible due to its low-latency and high-throughput requirements, so we request high-priority scheduling.
This implements all Maxwell3D registers and HLE Vulkan state for Tessellation including invalidation of the TCS (Tessellation Control Shader) state during state changes.
Previously, constant buffer updates were handled on the CPU and only the end result was synced to the GPU before execute. This caused issues: if the constant buffer contents were changed between each draw in a renderpass (e.g. text rendering), the draws themselves would only see the final resulting constant buffer.
We had earlier tried to fix this by using `vkCmdUpdateBuffer`, however this caused significant performance loss due to an oversight in Adreno drivers. We could have worked around this simply by using `vkCmdCopyBuffer`, however there would still be a performance loss due to renderpasses being split up with copies in between.
To avoid this we introduce 'megabuffers', a new technique not used in any other Switch emulator. Rather than replaying the copies in sequence on the GPU, we take advantage of the fact that these buffers are generally small in order to replay the buffers themselves on the GPU instead. Each write and subsequent usage of a buffer causes a copy of the buffer, with that write and all prior writes applied, to be pushed into the megabuffer; this way, at the start of execute the megabuffer holds all used states of the buffer simultaneously. Draws then reference these individual states in sequence, allowing everything to work without any copies. In order to support this, buffers have been moved to an immediate sync model, with synchronisation being done at usage-time rather than at execute (in order to keep contents properly sequenced), and GPU-side writes now need to be explicitly marked (since they prevent megabuffering). It should also be noted that a fallback path using `vkCmdCopyBuffer` exists for cases where buffers are too large or GPU dirty.
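A heavily simplified sketch of the megabuffer idea described above (the class and member names here are illustrative, not Skyline's actual implementation):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Illustrative sketch: every time a guest buffer is written and then used by a draw,
// a snapshot of its current contents is appended to one large host buffer and the
// draw records that snapshot's offset; all states of the buffer therefore coexist in
// the megabuffer for the duration of an execution, so no copies need to be replayed.
class Megabuffer {
  public:
    // Appends a snapshot of `data`/`size` and returns its offset for the draw to bind
    std::size_t PushSnapshot(const std::uint8_t *data, std::size_t size) {
        constexpr std::size_t Alignment{0x100}; // Example UBO offset alignment
        std::size_t offset{(backing.size() + Alignment - 1) & ~(Alignment - 1)};
        backing.resize(offset + size);
        std::memcpy(backing.data() + offset, data, size);
        return offset;
    }

    void Reset() { backing.clear(); } // Called at the start of every execution

  private:
    std::vector<std::uint8_t> backing; // Stands in for the GPU-visible allocation
};
```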
As bindings weren't correctly handled due to the fact that `EmitSPIRV` would change them, the shader module cache would not function correctly and had no cache hits in `find`, instead getting them in `try_emplace`, which negated any performance benefit. This has now been fixed by retaining the initial cache key for insertion into the cache while also storing the post-emit bindings and restoring them during a cache hit.
Implements caching of the compiled shader module (`VkShaderModule`) in an associative map based on the supplied IR, bindings and runtime state to avoid constant recompilation of shaders. This doesn't entirely address shader compilation as an issue since host shader compilation is tied to Vulkan pipeline objects rather than Vulkan shader modules; those need to be cached as well to prevent costly host shader recompilation.
This implements the first step of a full shader cache: any IR is cached by treating its shared pointer as a handle and key for an associative map, alongside hashing the Maxwell shader bytecode. It supports both single shader program and dual vertex program caching.
We desire the ability to hash and check equality of data across spans so that associative containers such as `std::unordered_map` can be used with spans; the implemented functions provide an easy way to do that.
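A minimal sketch of what such helpers can look like (FNV-1a is used purely for illustration; the names and hash function in the actual codebase may differ):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <span>
#include <unordered_map>

// Hash the bytes referenced by a span so spans can key hash-based containers;
// note that span keys don't own their data, so it must outlive the container
struct SpanHash {
    std::size_t operator()(std::span<const std::uint8_t> data) const noexcept {
        std::uint64_t hash{0xCBF29CE484222325ULL}; // FNV-1a offset basis
        for (auto byte : data) {
            hash ^= byte;
            hash *= 0x100000001B3ULL; // FNV-1a prime
        }
        return static_cast<std::size_t>(hash);
    }
};

// Element-wise equality of the referenced contents rather than pointer identity
struct SpanEqual {
    bool operator()(std::span<const std::uint8_t> lhs, std::span<const std::uint8_t> rhs) const noexcept {
        return lhs.size() == rhs.size() && std::equal(lhs.begin(), lhs.end(), rhs.begin());
    }
};

// Usage: an associative container keyed directly on the contents of a span
using BytecodeMap = std::unordered_map<std::span<const std::uint8_t>, int, SpanHash, SpanEqual>;
```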
Mostly based off of yuzu's implementation, this will need to be extended in the future to open up a UI for configuring controllers according to the application's requirements.
As there was no check for the lack of a `GuestTexture`/`GuestBuffer`, it would lead to UB: when a texture/buffer that had no guest, such as the `zeroTexture` from `GraphicsContext`, was marked as dirty, it would cause a call to `NCE::RetrapRegions` with a `nullptr` handle that would be dereferenced and cause a segmentation fault.
In certain situations such as constant buffer updates, we desire to use the guest buffer as a shadow buffer, forwarding all writes directly to it while we update the host using inline buffer updates so they happen in-sequence. This requires special behavior as we cannot let any synchronization operations take place since they would break the shadow buffer; as a result, an external synchronization flag has been added to prevent this from happening.
It should be noted that this flag is not respected for buffer recreation, which will lead to UB; this can and will break updates in certain cases, and this change isn't complete without buffer manager support.
The offset of the view wasn't added in the `vkCmdUpdateBuffer` call, causing the offset to be incorrect when the buffer was a view into a larger buffer that didn't start at its beginning. This commit fixes that by adding the offset of the view to the buffer update.
We didn't call `MarkGpuDirty` on textures/buffers prior to GPU usage, which would cause them to not be R/W protected when they should be and provide outdated copies if there were any read accesses from the CPU (which are not possible at the moment since we assume all accesses are writes). This has now been fixed by calling it after synchronizing the resource.
The terminology "Non-Graphics pass" was deemed fairly inaccurate since it simply covered all Vulkan commands (not "passes") outside the render-pass scope; these may be graphical operations such as blits, so it is more accurate to use the new terminology "Outside-RenderPass command", which lacks that implication while being consistent with the Vulkan specification.
Previously, constant buffer updates were handled on the CPU and only the end result was synced to the GPU before execute. This caused issues: if the constant buffer contents were changed between each draw in a renderpass (e.g. text rendering), the draws themselves would only see the final resulting constant buffer. Fix this by updating cbufs on the GPU/CPU separately, only ever syncing them back at the start or after a guest-side CPU write; at the moment only a single word is updated at a time, however this can be optimised in the future to batch all consecutive updates into one large one.
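One plausible way to record such a single-word GPU-side update, assuming the target range lives in a host `VkBuffer` (the function here is illustrative, not the project's actual code):

```cpp
#include <cstdint>
#include <vulkan/vulkan.hpp>

// Illustrative sketch: syncing one constant-buffer word to the GPU with an inline
// buffer update; vkCmdUpdateBuffer must be recorded outside a render pass, and
// consecutive words could later be batched into a single larger update.
void UpdateConstantBufferWord(vk::CommandBuffer commandBuffer, vk::Buffer backing,
                              vk::DeviceSize wordOffset, std::uint32_t value) {
    commandBuffer.updateBuffer(backing, wordOffset, sizeof(value), &value);
}
```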
We require certain buffers to exist only on the host while being accessible through the same abstractions as a guest buffer, since they must be interchangeable in usage.
We needed to block stack frame lookups past JNI code, as Java doesn't follow the ARMv8 frame pointer ABI, which leads to invalid pointer dereferences. Any JNI function that throws or handles exceptions must now do this or it may lead to a `SIGSEGV`.
Some games may pass empty TICs as inputs to shaders while not actually using them within the shader. We now create an empty texture and pass this in instead when we hit this case; the `nullDescriptor` feature could have been used, but it's not supported by all devices so we chose to do it this way instead.
Skyline's `exception` class now stores a list of all stack frames at the point the exception is thrown. These can later be parsed by the exception handler to generate a human-readable stack trace. To assist with more complete stack traces, `-fno-omit-frame-pointer` is now passed on debug builds, which forces the inclusion of frame records on function calls.
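A rough sketch of how frames can be collected under the ARMv8 frame-pointer ABI that `-fno-omit-frame-pointer` preserves (validation of the chased pointers is omitted, and the function name is illustrative):

```cpp
#include <vector>

// Under the AArch64 frame-pointer ABI each frame record is a pair of
// {previous frame pointer (x29), return address (x30)}, so return addresses can be
// collected by chasing x29 until it reaches null; bounds-checking the pointers
// against the stack is omitted here for brevity.
std::vector<void *> CaptureStackTrace() {
    struct FrameRecord {
        FrameRecord *next;    // Saved x29 of the caller
        void *returnAddress;  // Saved x30 (LR)
    };

    std::vector<void *> frames;
    auto frame{reinterpret_cast<FrameRecord *>(__builtin_frame_address(0))};
    while (frame && frame->returnAddress) {
        frames.push_back(frame->returnAddress);
        frame = frame->next;
    }
    return frames;
}
```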
NCE is implicitly depended on by the `GPU` class due to the NCE Memory Trapping API, so its destruction must take place after that of the `GPU` class. Additionally, to prevent bugs, the NCE destructor must set `staticNce` to `nullptr`, as the signal handler could otherwise access a destroyed instance of NCE.
Without this, sRGB textures would be interpreted as RGB, leading to colours being slightly off. The sRGB flag isn't stored as part of the format word, so we reuse its _pad_ field to store the flag for the switch case.
We don't want to actually exit the process, as it'll automatically be restarted gracefully due to a timeout after being unable to exit within a fixed duration, so we just sleep forever during termination. This should fix issues where exiting any game would cause the app to force close after some time because exception signal handling would fail in the background; the app should now stay open and automatically restart itself when another game is loaded.
A lot of logs were incomplete due to being unable to flush inside the signal handler; we now flush after any exception so that it is guaranteed to be logged, as this is crucial for proper debugging.
B5G6R5 isn't generally supported by the swapchain, and the format is used for R5G6B5 with swapped R/B channels to avoid aliasing, so we reverse that by using R5G6B5 as the underlying Vulkan format for the swapchain. Copies from B5G6R5 textures should be handled automatically by the driver, and as the data representation is the same as B5G6R5 with swapped R/B channels, not reporting the correct `texture::Format` should be fine.
The DMA engine is used to perform DMA buffer/texture copies directly on the GPU. It can deswizzle arbitrary regions of input textures, perform component remapping and swizzle into output textures.
This implementation only supports 1D buffer copies; 2D ones will come later.
If we have an Nx1x1 image, determining the type from its dimensions will result in a 1D image being created, preventing us from creating a 2D view. By using the image view type we can avoid this for textures from TICs, since we know in advance how they will be used.
This enforces that the depth RT outlives the draw; without this, the depth RT could be freed while in active use by the command executor, leading to UAFs and crashes.
This was erroneously included while migrating from older code, where stack creation was handled entirely with host constructs such as `mmap` directly, to using `KPrivateMemory` to manage it: we would create a guard page with `mprotect` that the guest was unaware of, which would cause a segfault when the guest accessed the extents of the stack as reported to it.
A partial implementation of the `GetThreadContext3` SVC; we cannot return the whole thread context as the kernel only stores the registers we need according to the ARMv8 ABI convention. So far, usages of this SVC do not require the unavailable registers, but all future usage must be monitored and may require extending the set of saved registers.
The vibration device had to be set manually before, which led to it generally not being set at all even though a user might want vibration; this commit fixes that by making controller #0 use the built-in vibrator by default.
Any Skyline files that should be user-accessible were moved from `/data/data/skyline.emu/files` to `/sdcard/Android/data/skyline.emu/files`, as the former directory is entirely private and cannot be accessed without either adb or root. This made retrieving certain data such as saves, or loading custom driver shared objects, extremely hard to do, whereas it can now be done trivially.
In some games such as SMO, thousands of constant buffers are bound per frame, which caused an unreasonable number of lookups in both the VMM and the buffer manager. Work around this by introducing a simple hashmap-based cache; eviction is currently unsupported but not really necessary yet due to the small size of the buffers in the cache.
We cannot ignore accesses from the host to a region protected by the NCE Memory Trapping API; there are often accesses to regions which unintentionally overlap with a protected region, and those accesses need to be handled correctly rather than leading to a crash. This is done by implementing an additional signal handler, `NCE::HostSignalHandler`, to look up any potential traps on a `SIGSEGV` and handle them correctly, or, when there isn't a corresponding trap, raise a `SIGTRAP` when a debugger is connected or delegate to `signal::ExceptionalSignalHandler` when it isn't.
To cut down memory usage we now page out memory that is RW-trapped via the NCE memory trapping API; the callbacks are expected to page the memory back in. This behavior is backed by texture/buffer syncing, which reads the host copies of data and writes them to the guest, so paging out the corresponding guest data avoids redundant memory usage.
The `FileDescriptor` class is a RAII wrapper over FDs which handles their lifetime alongside other C++ semantics such as moving and copying. It has been used in `skyline::kernel::MemoryManager` to handle the lifetime of the ashmem FD correctly; it wasn't being destroyed earlier, which could result in leaking FDs across runs.
Initially this commit was only intended to update LLVM, but the latest LLVM libcxx no longer transitively includes the C++ stdlib header `<algorithm>` (as of https://reviews.llvm.org/D119667), which caused compilation errors. This required changes in Skyline and Oboe; the latter were done in https://github.com/google/oboe/pull/1521 and the submodule has been updated to include them.
These are mostly used in 3D games like SMO; support is still quite basic and synchronising block-linear 3D textures will crash in most cases due to them being unimplemented.
Some games crash due to requiring an `audren` version greater than 7. The `audren` version can be increased without any issues as `audren` is stubbed and therefore the reported version doesn't matter.
Older Adreno proprietary drivers (5xx and below) will segfault while destroying the renderpass and associated objects if more than 64 subpasses are within a renderpass, due to internal driver implementation details. This commit introduces checks to automatically break up a renderpass when that limit is hit.
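A trivial sketch of the check (names are hypothetical); once the limit is reached, the current render pass is finished so the next subpass begins in a fresh one:

```cpp
#include <cstddef>
#include <functional>

// Older Adreno drivers (5xx and below) crash past 64 subpasses in one render pass,
// so the render pass is split once that limit is reached
constexpr std::size_t AdrenoSubpassLimit{64};

void AddSubpass(std::size_t &subpassCount, const std::function<void()> &finishRenderPass) {
    if (subpassCount >= AdrenoSubpassLimit) {
        finishRenderPass(); // Ends the render pass so subsequent subpasses land in a new one
        subpassCount = 0;
    }
    subpassCount++;
}
```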
We now support overlapping buffers, which allows us to merge a lot of smaller buffers located on a single page into a single larger buffer, allowing for better performance. It additionally ensures that all host buffers match the alignment guarantees of the guest and adequately fulfill host alignment requirements.
This commit encapsulates a complex sequence of cascading changes in the process of supporting overlaps for buffers:
* We determined that it is impossible to resolve overlaps with multiple intervals per buffer within the constraints of each overlap being a contiguous view, so support for multiple intervals was dropped. The older buffer manager code was entirely reworked to be simpler since it only handles one interval per buffer, with code now based off `IntervalMap` but tailored specifically for buffers.
* During overlap resolution, the problem arose of how existing views into a buffer being recreated would be updated: the buffer had to be replaced with a larger one that could contain all overlaps, and all existing views needed to be repointed to it. This was addressed by having a buffer own all views into itself, so we could automatically recalculate the offset of all views and update them for the new buffer.
* We still needed to update usages of existing views, which was done by handling all access (such as inside a recorded draw) to buffer view properties via `BufferView::RegisterUsage`, which dispatches a callback with the view and the corresponding backing buffer. This callback can be stored and called during overlap resolution with the new buffer.
* We had issues with the lifetime of the buffer given the handle-like semantics of `BufferView` introduced in the last buffer-related commit: if we updated the view to be owned by a new buffer, we'd need to extend the lifetime of the new buffer rather than the older one, and the only way to do this was a proxy owner object, `BufferDelegate`, which holds a shared pointer to the real `Buffer`, which in turn holds a pointer to all `BufferDelegate` objects so they can be updated on repointing. A `BufferView` is effectively just a wrapper around `std::shared_ptr<BufferDelegate>` with more favorable semantics, generally just forwarding calls (see the sketch after this list).
It should additionally be noted that to support usage of `RegisterUsage`, the code around buffers in `GraphicsContext` was refactored to defer truly binding them until the recording phase.
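A heavily simplified sketch of the ownership scheme described in the list above (the types are stripped down to the relationships only and do not mirror Skyline's actual definitions):

```cpp
#include <cstddef>
#include <memory>
#include <vector>

struct Buffer;

// Proxy owner object: holds the real backing buffer alive and the view's offset into it
struct BufferDelegate {
    std::shared_ptr<Buffer> buffer;
    std::size_t offset;
};

struct Buffer {
    std::size_t guestBase{};                 // Start of the guest mapping backing this buffer
    std::vector<BufferDelegate *> delegates; // Every view delegate pointing at this buffer

    // During overlap resolution every delegate is repointed to the replacement buffer
    // and its offset recalculated from the difference in guest base addresses
    static void Repoint(const std::shared_ptr<Buffer> &oldBuffer, const std::shared_ptr<Buffer> &newBuffer) {
        for (auto *delegate : oldBuffer->delegates) {
            delegate->offset += oldBuffer->guestBase - newBuffer->guestBase;
            delegate->buffer = newBuffer;
            newBuffer->delegates.push_back(delegate);
        }
        oldBuffer->delegates.clear();
    }
};

// Handle-like view: extends the lifetime of whatever buffer currently backs the delegate
struct BufferView {
    std::shared_ptr<BufferDelegate> delegate;
};
```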
Due to an oversight, we weren't clearing the list of buffers that needed to be synced after every execution, which led to them building up. Because buffer synchronization is relatively cheap and only done on faults, this wasn't caught until now; it did depress the framerate significantly over time as the list grew to be in the range of 100k buffer views, depending on the title.
The Kepler compute engine is used to run compute jobs encapsulated into QMDs on the GPU. This commit doesn't implement compute itself but adds the register and QMD structs that will be needed for it in the future.
We wanted views to extend the lifetime of the underlying buffers and, at the same time, preserve all views until the destruction of the buffer to prevent recreation, which might be costly in the future when we need `VkBufferView`s of the buffer; we also require a centralized list of all views for recreation of the buffer. This also removes the inconsistency of `BufferView*` being returned from `GetXView` in `GraphicsContext`.
Aliased descriptor sets are incorrectly interpreted by the shader compiler, causing it to mangle LLVM function argument types and crash.
Co-authored-by: PixelyIon <pixelyion@protonmail.com>
This controls the depth range used by the shader; hades already supports the necessary patching, so we only need to pass the current mode over to it and it'll do the rest.
Using `eB5G6R5UnormPack16` (with a swizzle for `R5G6B5Unorm`) removes the need for `VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT` when those formats are aliased which happens in Sonic Mania among other titles.
Adreno GPUs suffer significant performance penalties from usage of `VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT`, which requires disabling UBWC and, on Turnip, forces linear tiling. As a result, it's been made an optional quirk which doesn't supply the flag in `VkImageCreateInfo` and logs a warning if a view with a different Vulkan format from the original image is created.
We often need to alias the underlying data as multiple Vulkan formats which requires the `eMutableFormat` bit to be set in `VkImageCreateInfo`, without doing this there'll be validation layer errors and potentially GPU bugs.
As we no longer set the layout to `eGeneral` inside the `Texture` constructor, yet we need it to be set prior to the image being used as an attachment, we now transition the layout to `eGeneral` after creation of the texture object.
Any `RecyclerView` with an app bar in a `CoordinatorLayout` would end up going off-screen because the layout behavior implements its offset using a transform, which does not correctly handle focusing on off-screen items. This has now been fixed by manually clipping the height to what is visible on the screen.
We collapse the app bar when focus is on the app list, which only occurs while using a controller; this is required as the app bar would never be collapsed otherwise. This also removes the older code that worked around the limitation of `View.FOCUS_DOWN` by collapsing only when the end of the list was reached.
Removes card elevation as it visually conflicts with the scrim; this also makes the scrim a bit darker to emphasize the text and slightly reduces the border radius.
The entire layout is now selectable for grid items rather than just the card; this greatly increases the visibility of the selection when not in touch mode, as the contrast of a darkening effect on the icon can be minimal depending on how dark the icon already is.
The `InputStream` would not be closed after reading the key file in `KeyReader#import`; it's now wrapped with `use { }`, which handles closing the stream after usage.
Setting the refresh rate via the Display API's `preferredDisplayModeId` is an outdated method on Android 11 and above; we now use `Surface#setFrameRate` alongside it to suggest a refresh rate for the display.
We incorrectly determined an Adreno driver bug to require padding between binding slots, but the real issue was the driver not supporting consecutive binding writes for `VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER`; the padding slot fixed it only by unintentionally forcing individual writes. The quirk has now been corrected to explicitly describe this bug, and the solution is more apt.
Lookups using `GetAlignedRecursiveRange` incorrectly added intervals in the exclusive interval entry lookups, as the condition for adding them was the reverse of what it should have been due to a last-minute refactor; this led to graphical glitches and crashes. This has been fixed and the lookups should now return the correct results.
On certain devices, accesses to a protected memory region can return `si_code` values other than `SEGV_ACCERR`; since we only passed access violations to the trap handler, such accesses were not handled on those devices and instead ended up in the crash handler.
A large number of texture/buffer views would expire before reuse could occur in `Texture::GetView`/`Buffer::GetView`. This could lead to substantial memory usage given enough time; expired views are now deleted during the lookup while iterating over all entries.
It should be noted that there are a lot of duplicate views that don't live long enough to be reused and the ultimate solution here is to make those views live long enough to be reused.
Similar to the constant redundant synchronization of textures, there is a lot of redundant synchronization of buffers. Although buffer synchronization is far cheaper than texture synchronization, it still has associated costs, which have now been reduced by only synchronizing on access.
There was a lot of redundant synchronization of textures to and from the host, as we were not aware of guest memory accesses; this has now been averted by tracking any memory accesses to the texture memory using the NCE Memory Trapping API and synchronizing only when required.
An API for trapping accesses to guest memory and performing callbacks based on those accesses alongside managing protection of the memory. This is a fundamental building block for avoiding redundant synchronization of resources from the guest and host.
Note: All accesses are treated as write accesses at the moment, support for picking up read accesses will be implemented later
An interval map is a crucial piece of infrastructure required for memory faulting, to track any regions that have an associated callback and their protection. Additionally, efficient page-aligned lookups with semantics optimal for memory faulting are a requirement, as is the ability to associate multiple regions with a single callback/protection entry rather than doing so on a per-region basis, since we deal with split-mapping resources.
This is a prerequisite to memory trapping, as we need to write to the mirror to avoid a race condition with external threads writing to a texture/buffer while we do so ourselves during sync on a read/write; it also avoids an additional `mprotect` to `-WX`/`RWX` on a read access.
An additional advantage, especially for textures, is that we now support split-mapping textures by laying them out in a contiguous mirror, without requiring costly algorithmic changes. Buffers should also benefit from not needing to iterate over every region when they are split into multiple mappings.
`CreateMirror` is limited to creating a mirror of a single contiguous region, which does not work when creating a contiguous mirror of multiple non-contiguous regions. To support this functionality, `CreateMirrors` has been added; it expects a list of page-aligned regions and maps them into a contiguous mirror.
We want to create arbitrary mirrors of the guest address space; to make this possible, we map the entire address space as a shared memory file. A mirror is then mapped by calling `mmap` with the offset into the guest address space.
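A minimal sketch of the approach, assuming the backing file descriptor has already been created (e.g. via `memfd_create`/`ASharedMemory_create`) and sized to cover the guest address space; error handling is mostly elided and the function name is illustrative:

```cpp
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <vector>

struct Region {
    std::size_t offset; // Page-aligned offset into the guest address space
    std::size_t size;   // Page-aligned size of the region
};

// Maps several non-contiguous guest regions back-to-back into one contiguous host
// mirror by re-mapping the backing shared memory file at their respective offsets
void *CreateMirrors(int backingFd, const std::vector<Region> &regions) {
    std::size_t totalSize{};
    for (const auto &region : regions)
        totalSize += region.size;

    // Reserve a contiguous block of host address space for the mirror
    auto mirror{mmap(nullptr, totalSize, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0)};
    if (mirror == MAP_FAILED)
        throw std::runtime_error("Failed to reserve mirror");

    // Map every guest region into the reservation at consecutive offsets
    std::size_t mirrorOffset{};
    for (const auto &region : regions) {
        auto mapping{mmap(static_cast<std::uint8_t *>(mirror) + mirrorOffset, region.size,
                          PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, backingFd,
                          static_cast<off_t>(region.offset))};
        if (mapping == MAP_FAILED)
            throw std::runtime_error("Failed to map mirror region");
        mirrorOffset += region.size;
    }
    return mirror;
}
```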
Previously, for methods with count > 1, the subchannel and engine would be looked up for each part of the method rather than only at the start, and each call also had to be checked to see if it touched a macro or GPFIFO method. Fix this by doing the checks outside of the main dispatch loop with templated helper lambdas to avoid repeating a lot of code. Maxwell3D is the only subchannel with a fast path for now, but more can be added later if needed.
Almost every Maxwell format now directly corresponds to a Vulkan format. This allows formats to be passed through and the swizzle from the guest to be used directly (with some extra swizzle handling for edge cases), saving the need to explicitly support each swizzle combination, which adds a lot of code bloat. The format header is additionally reordered with line breaks to separate formats by their bits-per-block.
We always submitted pipeline vertex input divisor descriptions regardless of whether the binding input rate was per-vertex rather than per-instance. This is invalid behavior and has been fixed by only submitting divisor descriptions when the input rate is per-instance.
Adreno proprietary drivers suffer from a bug where `VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER` requires two descriptor slots rather than one; we add a padding slot to fix this issue. `QuirkManager` was introduced to handle per-vendor/per-device errata and allows enabling this on Adreno proprietary drivers specifically, so as to not affect the performance of other devices.
The 'quirk' terminology was deemed inappropriate for describing the features/extensions of a device. It has been replaced with 'traits', which is far more fitting, while 'quirks' will remain the terminology for errata in devices.
The texture handle offset calculation involved an unnecessary shift by the descriptor size, which resulted in an invalid handle with the wrong TIC/TSC index and caused broken rendering.
`nodes` and `syncTextures` were cleared after waiting on the `CommandExecutor` fence rather than before, wasting execution time after the wait on work that could have been done prior to it.
We now attempt to enable `VK_KHR_uniform_buffer_standard_layout` when present, as lax UBO layout significantly reduces complexity. If a device doesn't support this extension, we still assume that it supports the behavior implicitly, as this has proven true across all major mobile GPU vendors regardless of driver version, but enabling the extension prevents validation layer errors.
We depend on past commands having completed execution when in a renderpass; a subpass dependency on all graphics stages from `VK_SUBPASS_EXTERNAL` to subpass #0 is used to enforce this. Nvidia and Adreno proprietary drivers do this implicitly, but Turnip and Mali drivers require the explicit dependency or they execute out of order.
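For illustration, the dependency roughly corresponds to the following (the exact access masks used by Skyline may differ):

```cpp
#include <vulkan/vulkan.hpp>

// External subpass dependency: all graphics stages of commands submitted before the
// render pass must complete before subpass #0 touches its attachments, which is what
// Turnip/Mali need spelt out explicitly
vk::SubpassDependency MakeExternalDependency() {
    return vk::SubpassDependency{
        VK_SUBPASS_EXTERNAL,                     // srcSubpass
        0,                                       // dstSubpass
        vk::PipelineStageFlagBits::eAllGraphics, // srcStageMask
        vk::PipelineStageFlagBits::eAllGraphics, // dstStageMask
        vk::AccessFlagBits::eMemoryWrite,        // srcAccessMask
        vk::AccessFlagBits::eMemoryRead | vk::AccessFlagBits::eMemoryWrite, // dstAccessMask
        {},                                      // dependencyFlags
    };
}
```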
Blocklinear texture decoding was broken for padding blocks and would decode them incorrectly, resulting in major texture corruption for any textures with widths not aligned to 64 bytes. This has now been fixed with neater code which avoids redundant repetition by using lambdas and functions where necessary.
Stencil operations can be configured to be the same for both faces or to have independent state per face; this is controlled via the previously unimplemented `stencilTwoSideEnable`.
Fermi2D supports macros in addition to Maxwell3D, and the two share macro code memory. To support this, we rework the macro interpreter to take a target engine and abstract the communication out into an interface that can be implemented by applicable engines.
```
GPFIFO <-> MME <-> Maxwell3D
^ ^---> Fermi2D
X------------> I2M
X------------> MaxwellComputeB
X--Flush-----> MaxwellDMA
```
Shader programs allocate instructions and blocks within an `ObjectPool`; previously there was a global pool that was never reaped aside from on destruction. This led to a leak where the pool would retain resources from shader programs that had been deleted; to avert this, the pools are now tied to shader programs.
The size of blocklinear textures did not previously take alignment to block/ROB boundaries into account; it is now aligned to them. The incorrect sizes led to textures not being aliased correctly due to differing size calculations for GraphicBufferProducer surfaces and Maxwell3D color RTs.
`erase` invalidated `it`, leading to a potential segfault if the GPU was very far behind; we now bail out early to avoid that, since there can only be one occurrence at most in the buffer anyway.
Implements the entirety of Maxwell3D depth/stencil state for both faces, including compare/write masks and the reference value. The Maxwell3D register `stencilTwoSideEnable` is ignored as its behavior is unknown; it could mean the same behavior for both stencils or the back-facing stencil being disabled, so it is left unimplemented.
We don't respect the host subresource layout when synchronizing linear textures from the guest to the host while they're mapped to memory directly, which leads to texture corruption. The real fix would involve respecting the host subresource layout, but this has been deferred so the real-world performance advantages/disadvantages of such a change can be observed more carefully to determine if it's worth it.
Color RTs are disabled by setting their format to `None`; this case was removed while transitioning to macros and resulted in a missing-format exception. It has been re-added, as several applications depend on this behavior.
Using `std::vector` for shader bytecode led to a lot of reallocation due to constant resizing; switching over to a static vector allows a single allocation of the maximum possible guest shader size (1 MiB) per stage, resulting in a 6 MiB preallocation which is unnoticeable given the total memory overhead of running a Switch application.
The `OneMinusSourceAlpha` blending factor was converted to `eOneMinusSrcColor` rather than `eOneMinusSrcAlpha`, leading to incorrect blending behavior in certain titles. There was a similar issue with the order of `MinimumGL`/`MaximumGL` and `SubtractGL`/`ReverseSubtractGL` being the opposite of what it should have been; both of these issues have been fixed.
`NextSubpassNode` didn't increment `subpassIndex`, which caused commands to run with the wrong subpass index, resulting in them accessing invalid attachments or hitting other bugs that arise from using the wrong subpass.
All Maxwell3D state was passed by reference to the draw command lambda, which would break if there was more than one pass or the state was changed in any way before execution. All state is now serialized by value into the draw command lambda capture, retaining it regardless of mutations of the class state.
Any usage of a resource in a command now requires attaching that resource explicitly; it will not be implicitly attached on usage. This makes attaching of resources consistent and allows for more lax locking requirements, as resources can be locked while attaching and don't need to be for any commands; it also avoids redundantly attaching a resource in certain cases.
If an object was attached to a `FenceCycle` twice, `FenceCycleDependency::next` would be overwritten, leading to destruction of dependencies prior to the fence being signaled and therefore usage of deleted resources. This commit fixes this by tracking which fence cycle a dependency is currently attached to and not reattaching it if it's already attached to the current fence cycle.
An assumption was hardcoded into `Shader::Profile` that devices support demotion of shader invocations to helpers. This assumption wasn't backed by enabling the `VK_EXT_shader_demote_to_helper_invocation` extension via a quirk, leading to assertions when it was used by the shader compiler; a quirk has now been added for the extension and is supplied to the shader compiler accordingly.
If the controller type was changed from a type with a larger number of buttons/axes to one with fewer, a crash would occur: the transition animation retained those elements as children yet returned `NO_POSITION` from `getChildAdapterPosition` in `DividerItemDecoration`, which was an unhandled case and led to an OOB array access.
A bug caused by not passing the index argument to `ControllerActivity` led to all preferences opening the activity pertaining to Controller #1. This was fixed by passing the `index` argument in the activity launch intent.
Fixes texture corruption due to incorrect synchronization: the barrier would not enforce waiting until the texture was entirely rendered, causing an incomplete texture to be downloaded, which led to rendering bugs on certain GPUs including ARM's Mali GPUs.
A bug caused an assertion if both `VK_EXT_custom_border_color` and `VK_EXT_vertex_attribute_divisor` were unsupported, due to mistakenly unlinking `PhysicalDeviceVertexAttributeDivisorFeaturesEXT` instead of `PhysicalDeviceCustomBorderColorFeaturesEXT` when `VK_EXT_custom_border_color` isn't supported, which could lead to unlinking the same structure twice and triggering the assertion.
Implements inline constant buffer updates that are written to the CPU copy of the buffer rather than generating an actual inline buffer write. This works for TIC/TSC index updates but won't work when the buffer is expected to actually be updated inline, in sequence, rather than just as a buffer upload prior to rendering.
GPU-side constant buffer updates will be implemented later, with optimizations for updating an entire range by handling GPFIFO `Inc`/`NonInc` directly and submitting it as a host inline buffer update.
There should only ever be a single instance of an `ActiveDescriptorSet` that tracks the lifetime of a descriptor set, as the destructor is responsible for freeing the descriptor set.
There are cases where a new object inheriting the descriptor set needs to be created; in those cases we need move semantics that make the destructor of the prior object inert, allowing a move to the new object without any side effects. If the copy constructor were used instead, the older object would free the set on its destruction, leaving the set invalid on existing instances, which is incorrect behavior and would likely lead to driver crashes.
The descriptor sets should now contain a combined image and sampler handle for any sampled textures in the guest shader from the supplied offset into the texture constant buffer.
Note: Games tend to rely on inline constant buffer updates for writing the texture constant buffer, and as that is not implemented yet, the value will be read as 0, which is incorrect.
We want read semantics inside the constant buffer object via the mappings to avoid a pointless GPU VMM mapping lookup; it is a fairly frequent operation so this is necessary. The ability to write directly will be added in the future as well.
Implements parsing for the Maxwell 3D TIC pool and conversion of a TIC into a `GuestTexture`. Support is limited to pitch-linear RGB565/A8R8G8B8 textures at the moment but will be extended as games utilize more formats and layouts. Support for 1D buffers is also omitted for now since they need special handling, effectively being treated as buffers in Vulkan rather than images.
The pitch of the texture should always be supplied in terms of bytes, as it denotes alignment on a byte boundary rather than a pixel one; it is also always used in terms of bytes rather than pixels, so this avoids an unnecessary conversion.
Note: The GBP stride unit was assumed to be pixels earlier but is likely bytes, which is why the supplied value there is unchanged; if this turns out not to be the case, it'll be fixed in the future.
Maxwell3D `TextureSamplerControl` (TSC) are fully converted into Vulkan samplers with extension backing for all aspects that require them (border color/reduction mode) and approximations where Vulkan doesn't support certain functionality (sampler address mode) alongside cases where extensions may not be present (border color).
Code involving caching of mappings was copied from `RenderTarget` without much consideration for its applicability to buffers. The reason for caching mappings in RTs is that the view may be invalidated by more than the IOVA/size changing, but this doesn't generally hold true for buffers, so invalidation can only happen at the view level, with the mappings being looked up every time since an invalidation would likely change them.
`std::hash` doesn't have a generic template that can be utilized for arbitrary trivial objects, and implementing one might result in conflicts with other types. To fix this, a generic templated hash is now provided as a utility structure that can be used directly in hash-based containers such as `unordered_map`.
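A minimal sketch of such a utility (FNV-1a over the object's bytes, purely for illustration; the usage example assumes C++20 for the defaulted comparison and the names are hypothetical):

```cpp
#include <cstddef>
#include <cstdint>
#include <type_traits>
#include <unordered_map>

// Generic hash over the raw bytes of a trivially-copyable object; note that padding
// bytes are hashed too, so keys should be zero-initialized for consistent results
template <typename T>
struct ObjectHash {
    static_assert(std::is_trivially_copyable_v<T>, "Hashed objects must be trivially copyable");

    std::size_t operator()(const T &object) const noexcept {
        const auto *bytes{reinterpret_cast<const unsigned char *>(&object)};
        std::uint64_t hash{0xCBF29CE484222325ULL}; // FNV-1a offset basis
        for (std::size_t i{}; i < sizeof(T); i++) {
            hash ^= bytes[i];
            hash *= 0x100000001B3ULL; // FNV-1a prime
        }
        return static_cast<std::size_t>(hash);
    }
};

// Usage: a container keyed on an arbitrary trivial struct without specializing std::hash
struct SamplerKey {
    int minFilter, magFilter;
    bool operator==(const SamplerKey &) const = default;
};
using SamplerMap = std::unordered_map<SamplerKey, int, ObjectHash<SamplerKey>>;
```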
Nullability allows for optional semantics, where a span may be explicitly invalidated with `nullptr` as a sentinel value, and a boolean operator allows trivially checking whether the span is valid.
Adds support for index buffers, including U8 index buffers via the `VK_EXT_index_type_uint8` extension, which has been added as an optional quirk; an exception will be thrown if the guest uses it but the host doesn't support it.
Adds support for parsing and combining `VertexA` and `VertexB` programs into a single vertex pipeline program prior to compilation. Atomic reparsing and combining is supported, so only the stage that was modified is reparsed and recombination happens at most once within a single pipeline compilation.
Pipeline stages are now atomically invalidated as the runtime information pertaining to them changes, rather than either never recompiling pipelines on runtime information updates (resulting in out-of-date pipelines) or recompiling all pipelines on any runtime information update.
Shader compilation is now broken into shader program parsing and pipeline shader compilation, which will allow for supporting dual vertex shaders and more atomic invalidation depending on runtime state, limiting the amount of work that is redone.
Bindings are now properly handled, allowing bound UBOs to be converted to the appropriate host UBO as designated by the shader compiler, by creating Vulkan descriptor sets that match it.
We need this to make the distinction between a shader and a pipeline stage, as shader programs are bound at a different rate than pipeline stage resources such as UBOs.
An instance of `Shader::Backend::Bindings` must be retained across all stages for correct emission of bindings, which is now done inside `GraphicsContext::GetShaderStages`.
The vertex attribute types supplied prior were just the default, which is `Float`; this works for some cases but entirely breaks if the attribute type isn't a float. The attribute types are now set correctly.
Only copying a single aspect was supported by `CopyIntoStagingBuffer` earlier due to not supplying a `VkBufferImageCopy` for each aspect separately; this has now been fixed, with the Color/Depth/Stencil aspects each having their own `VkBufferImageCopy` for the `vkCmdCopyImageToBuffer` command.
The definition of the `TextureView` class was spread across `texture.cpp` and has now been moved to the top of the file above the other half of the definition.
A buffer with 0 as its start/end IOVA should be invalid, as there shouldn't be any mappings at 0 in the GPU VA space; titles such as Puyo Puyo Tetris configure the vertex buffer with 0 IOVAs, which leads to a segmentation fault without this exception.
The lifetime of a texture and buffer view is now bound by the `FenceCycle` in `CommandExecutor`, this ensures that a `VkImageView` isn't destroyed prior to usage leading to UB.
The lifetime of all textures bound to a renderpass, alongside the syncing of textures, is already handled by `CommandExecutor` and doesn't need to be redundantly handled by `RenderPassNode`; that handling has been removed as a result.
Adds the depth/stencil RT as an attachment for the draw, but with `VkPipelineDepthStencilStateCreateInfo` stubbed out; it won't function correctly and the contents will not be what the guest expects them to be.
Support for clearing the depth/stencil RT has been added as its own function, via either optimized `VkAttachmentLoadOp`-based clears or `vkCmdClearAttachments`. A bit of cleanup has also been done for color RT clears, with the lambda for the slow path purely calling the command rather than creating the parameter structures.
Implements `AddClearDepthStencilSubpass` in `CommandExecutor`, which is similar to `ClearColorAttachment` in that it uses `VK_ATTACHMENT_LOAD_OP_CLEAR` for the clear, which is far more efficient than using `VK_ATTACHMENT_LOAD_OP_LOAD` and then doing the clear.
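For illustration, a load-op based clear boils down to an attachment description along these lines, with the clear value itself supplied through `VkRenderPassBeginInfo::pClearValues` (the layouts shown here are assumptions, not necessarily what the executor uses):

```cpp
#include <vulkan/vulkan.hpp>

// VK_ATTACHMENT_LOAD_OP_CLEAR clears the attachment as part of beginning the render
// pass, avoiding a load of the old contents followed by a separate clear command
vk::AttachmentDescription MakeClearedDepthStencilAttachment(vk::Format format) {
    return vk::AttachmentDescription{
        {},                             // flags
        format,
        vk::SampleCountFlagBits::e1,
        vk::AttachmentLoadOp::eClear,   // Depth load op: clear instead of load
        vk::AttachmentStoreOp::eStore,
        vk::AttachmentLoadOp::eClear,   // Stencil load op
        vk::AttachmentStoreOp::eStore,
        vk::ImageLayout::eGeneral,      // initialLayout (assumed)
        vk::ImageLayout::eGeneral,      // finalLayout (assumed)
    };
}
```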
The stage/access masks for `VkSubpassDependency` were hardcoded to only be valid for color attachments; this has now been fixed by branching based on the format's aspect.
Sets `VkImageUsageFlags` correctly rather than hardcoding it for color attachments, and adds multiple `VkBufferImageCopy` entries to `vkCmdCopyBufferToImage` for the Color/Depth/Stencil aspects of an image.
Supports the Maxwell3D depth RT for Z-buffering; this just creates an equivalent `RenderTarget` object with no support on the API-user side (i.e. `Draw` and `ClearBuffers`).
This prefixes all RT functions that deal with color RTs with `Color` and abstracts out common functions that will be used for both color and depth RTs. All common Maxwell3D structures are also moved out of the `ColorRenderTarget` (previously `RenderTarget`) structure.
To allow for caching of pipelines on the host, a `VkPipelineCache` has been added; it is entirely in-memory and is not flushed to disk, which will be done in the future alongside caching guest shaders to further avoid translation where possible.
Uses all Maxwell3D state converted into Vulkan state to do an equivalent draw on the host GPU; it sets up RT/vertex buffer/vertex attribute/shader state and creates a stubbed-out `VkPipelineLayout` for the draw. Descriptor state isn't currently handled and is yet to be implemented, and no Vulkan pipeline cache is supplied yet, which will be implemented subsequently.
We require a handle to the current renderpass and the index of the subpass in certain cases; this is now tracked by the `CommandExecutor` and passed in as a parameter to `NextSubpassFunctionNode` and the newly-introduced `SubpassFunctionNode`.
Switch from `SubmitWithCycle` to manually allocating the active command buffer so dependencies can be tagged with the `FenceCycle` that prevents them from being mutated prior to execution. This new paradigm could also allow eager recording of commands with only submission being deferred.
`CommandScheduler` API users can now directly allocate an active command buffer that they manage alongside its fence. This can allow for more efficient recording, as the buffer doesn't need to be submitted immediately afterwards, and it also allows attaching objects to a `FenceCycle` prior to submission, which can be useful for locking resources.
Compiles shaders supplied by the guest with caching and automatic invalidation; the size of the shader is also automatically determined by looking for self-branching `BRA $` instructions, which cause an infinite loop. It should be noted that we have a maximum shader bytecode size; any shader above this size will not be supported.
Graphics shaders can now be compiled using the shader compiler, emitting SPIR-V that can be used on the host. Binding state isn't currently handled, and neither is support for constant buffers and textures in `GraphicsEnvironment` yet.
The operands of the subtraction in the X/Y translation calculation were the wrong way around, leading to negative translations that would move the viewport off the screen.
The default color write mask should mask no channels, writing all of them, and should only be mutated to mask out certain channels as required by the guest.
We cannot statically construct the vertex buffer/attribute arrays for Vulkan, since inactive attributes or buffers aren't possible in Vulkan, and we also cannot just change the count dynamically as there might be disabled buffers or attributes in the middle. Instead, we have a `static_array` which is dynamically filled in with buffer binding/attribute Vulkan structures before submission.
Buffers generally don't have formats that are fundamentally associated with them unless they're texel buffers, if that is the case it can be manually set in `BufferView`.