Commit Graph

1391 Commits

Author SHA1 Message Date
lynxnb
39f398f76b Update Kotlin (1.7.10), NDK (25.0.8775105), AGP (7.2.2) and Kotlin deps 2022-08-17 12:28:31 +02:00
lynxnb
e9618d9e2c Use pragma pack directions for tightly packing structs containing u128
Using `__attribute__((packed))` doesn't work in new NDKs when a struct contains 128-bit integer members, likely because of an NDK/compiler bug. We now enclose the affected structs in `#pragma pack` directives to tightly pack them (sketched below).
2022-08-17 12:22:11 +02:00
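A minimal illustration of the technique described above, with a hypothetical `GpuSemaphore` struct standing in for the real packed structures:

```cpp
#include <cstdint>

// Hypothetical example: a tightly packed struct holding a 128-bit member.
// With newer NDK clang, __attribute__((packed)) may be ignored for __int128
// members, so the struct is wrapped in #pragma pack directives instead.
#pragma pack(push, 1)
struct GpuSemaphore {
    std::uint32_t flags;       // 4 bytes
    unsigned __int128 payload; // 16 bytes, no padding inserted before it
};
#pragma pack(pop)

static_assert(sizeof(GpuSemaphore) == 20, "struct must be tightly packed");
```
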
lynxnb
c4bf92a49f Fix Kotlin compilation errors from incorrect overloading of null-safe types 2022-08-17 12:16:26 +02:00
Billy Laws
bf491f71f9 Simplify blit helper shader vertex order 2022-08-10 15:43:16 +01:00
Billy Laws
c32bec071c Adjust blit src{X,Y} to account for centred sampling before calling into helper shader
Since the blit engine itself samples from pixel corners and the helper shader from pixel centres, the src coordinates need to be adjusted to avoid the helper shader wrapping around on the final column.
2022-08-10 15:39:37 +01:00
Billy Laws
08f36aac33 Enable hades vertex position input workaround for Adreno
This caused crashes in any games using geometry shaders, as by default hades uses the position builtin directly.
2022-08-08 18:09:00 +01:00
Billy Laws
04e7b684d2 Enable vertexPipelineStoresAndAtomics, fragmentStoresAndAtomics and shaderStorageImageWriteWithoutFormat Vulkan features
These features are used by Xenoblade Chronicles DE (a sketch of enabling them follows below).
2022-08-08 17:43:18 +01:00
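For context, a hedged sketch of how such features can be checked and requested through the plain Vulkan C API; the function name `RequestStorageFeatures` is illustrative, not Skyline's actual code (which uses Vulkan-Hpp):

```cpp
#include <vulkan/vulkan.h>

// Check that the features named in the commit are supported before requesting
// them at device creation time; the result would be passed via
// VkDeviceCreateInfo::pEnabledFeatures.
VkPhysicalDeviceFeatures RequestStorageFeatures(VkPhysicalDevice physicalDevice) {
    VkPhysicalDeviceFeatures supported{};
    vkGetPhysicalDeviceFeatures(physicalDevice, &supported);

    VkPhysicalDeviceFeatures enabled{}; // only enable what is actually needed
    enabled.vertexPipelineStoresAndAtomics = supported.vertexPipelineStoresAndAtomics;
    enabled.fragmentStoresAndAtomics = supported.fragmentStoresAndAtomics;
    enabled.shaderStorageImageWriteWithoutFormat = supported.shaderStorageImageWriteWithoutFormat;
    return enabled;
}
```
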
Billy Laws
390558c802 Add partial support for legacy attribute conversion
We previously missed the hades pass for attribute conversion, leading to crashes when games would attempt to use such an attribute. The hades pass isn't a proper fix, however, as it modifies the IR directly and will break if any of the previous stages in the pipeline change. Enable it anyway to give games using legacy attributes at least a chance of working. In the long term the pass will be reworked on the hades side to avoid modifying the IR in a way that can't be undone.
2022-08-08 17:43:18 +01:00
Billy Laws
540437b547 Fixup index buffer view caching
We forgot to set the view size, which would end up forcing a view to be recreated with every call
2022-08-08 17:43:18 +01:00
Billy Laws
c966cd3b26 Prevent runtimeInfo vertex state from leaking into wrong shaders
This vertex state must only be present for the last pipeline stage that touches vertices; if it is present for other stages it could result in incorrect behaviour, like performing TFB in the fragment shader or flipping device coordinates twice.
2022-08-08 17:43:13 +01:00
Billy Laws
c52d3195cf Ensure shader stage enable state matches pipeline stage enable state
As the code was before, if a shader was disabled and then enabled again without being invalidated, the pipeline stage would stay disabled and break rendering.
2022-08-08 17:40:35 +01:00
Billy Laws
b1c669ba14 Always keep the VertexB shader stage enabled
HW doesn't allow disabling the VertexB stage, so enforce this in code.
2022-08-08 17:40:35 +01:00
lynxnb
d5174175d1 Implement indexed quads support
We previously only supported non-indexed quads. Support for indexed quads is implemented by converting the index buffer at record time and pushing the result into the megabuffer, which is then used as the index buffer in the final draw command (see the conversion sketch below).
2022-08-08 17:40:35 +01:00
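A hedged sketch of the quad-to-triangle index conversion described above; the function name and the use of `std::vector` are illustrative rather than the actual record-time implementation:

```cpp
#include <cstdint>
#include <vector>

// Each quad (v0 v1 v2 v3) in the guest index buffer is expanded to two
// triangles (v0 v1 v2) and (v0 v2 v3); the resulting indices are what would
// get pushed into the megabuffer for the final draw.
std::vector<std::uint32_t> ConvertQuadIndices(const std::uint32_t *quadIndices, std::size_t quadCount) {
    std::vector<std::uint32_t> triIndices;
    triIndices.reserve(quadCount * 6);
    for (std::size_t i{}; i < quadCount; i++) {
        const std::uint32_t *quad{quadIndices + i * 4};
        triIndices.insert(triIndices.end(), {quad[0], quad[1], quad[2],
                                             quad[0], quad[2], quad[3]});
    }
    return triIndices;
}
```
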
lynxnb
e6741642ba Split out megabuffer allocation from pushing data
The `Allocate` method allocates the given amount of space in a megabuffer chunk, returning a descriptor of the allocated region. This is useful for situations where you want to write directly to the megabuffer, avoiding the need for an intermediate buffer (sketched below).
2022-08-08 17:40:35 +01:00
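A simplified sketch of the allocate/push split under assumed names (`MegaBufferChunkSketch`, `Allocation`); the real megabuffer handles alignment, chunk exhaustion and GPU synchronisation, all omitted here:

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Allocate() only reserves space and returns a descriptor so callers can write
// into the megabuffer directly, while Push() is just Allocate() plus a memcpy.
struct MegaBufferChunkSketch {
    struct Allocation {
        std::size_t offset; // offset of the allocation within the chunk
        std::byte *pointer; // host pointer the caller may write into directly
    };

    explicit MegaBufferChunkSketch(std::size_t size = 1 << 20) : backing(size) {}

    Allocation Allocate(std::size_t size) {
        Allocation allocation{freeOffset, backing.data() + freeOffset};
        freeOffset += size; // bounds/alignment handling omitted for brevity
        return allocation;
    }

    std::size_t Push(const void *data, std::size_t size) {
        Allocation allocation{Allocate(size)};
        std::memcpy(allocation.pointer, data, size);
        return allocation.offset;
    }

    std::vector<std::byte> backing;
    std::size_t freeOffset{};
};
```
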
Billy Laws
cdc6a4628a Enable VK uint8 indices feature when supported 2022-08-08 17:40:35 +01:00
Billy Laws
dccc86ea97 Implement transform feedback with VK_EXT_transform_feedback
Tested to work in Xenoblade Chronicles DE; the code handles both hades varying input and buffer setup.
2022-08-08 17:40:35 +01:00
Billy Laws
06053d3caf Rewrite Fermi 2D engine to use the blit helper shader
Entirely rewrites the engine and interconnect code to take advantage of the subpixel and OOB blit support offered by the blit helper shader. The interconnect code is also cleaned up significantly, with the 'context' naming being dropped due to potential conflicts with the 'context' from the context lock.
2022-08-08 17:40:35 +01:00
Billy Laws
395f665a13 Implement a system for helper shaders together with a simple blit shader
It is desirable for us to use a shader for blits to allow easily emulating out-of-bounds blits and blits between different swizzled colour formats. The helper shader infrastructure is designed to be generic so it can be reused by any other helper shaders that we may need in the future.
2022-08-08 17:40:35 +01:00
Billy Laws
1da1698f90 Disable unused Vulkan HPP setters and smart handles 2022-08-08 14:57:44 +01:00
Billy Laws
f4e58a9238 Remove redundant synchost creating a new buffer 2022-08-08 14:57:44 +01:00
Billy Laws
11a8feb037 Correct nvdrv DMA copy class ID
The ID was wrongly copy-pasted.
2022-08-08 14:57:44 +01:00
Billy Laws
13e7b54c61 Ensure failed IOCTLs are logged as a warning log 2022-08-08 14:57:44 +01:00
Billy Laws
eeb86a4f8a Calculate renderArea from min(attachments.dimensions...)
Vulkan doesn't support a renderArea larger than that of the smallest attachment (see the sketch below).
2022-08-08 14:57:44 +01:00
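A small sketch of the clamping described above, assuming the attachment dimensions are already available as `VkExtent2D`s (the function name is illustrative):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>
#include <vulkan/vulkan.h>

// The render area must not exceed the smallest attachment, so clamp it to the
// component-wise minimum of all attachment dimensions.
VkExtent2D CalculateRenderArea(const std::vector<VkExtent2D> &attachmentDimensions) {
    VkExtent2D renderArea{UINT32_MAX, UINT32_MAX};
    for (const auto &dimensions : attachmentDimensions) {
        renderArea.width = std::min(renderArea.width, dimensions.width);
        renderArea.height = std::min(renderArea.height, dimensions.height);
    }
    return renderArea;
}
```
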
Billy Laws
9ea658d0ed Don't throw on unsupported TIC formats
These sometimes spuriously occur in games during transitions; to avoid crashing, just use the null texture when they occur and log an error.
2022-08-08 14:57:44 +01:00
Billy Laws
856818c8eb Emulate the 'None' mipfilter by adjusting LOD
This technique is borrowed from yuzu, since Vulkan has no direct equivalent (a sketch follows below).
2022-08-08 14:57:44 +01:00
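A hedged sketch of the LOD-clamping idea: the Vulkan specification notes that non-mipmapped GL filtering can be emulated by clamping the LOD range to [0, 0.25] with a nearest mipmap mode, which restricts sampling to the base level. The sampler setup below is illustrative, not Skyline's actual sampler code:

```cpp
#include <vulkan/vulkan.h>

// Vulkan has no 'no mipmapping' mode, but clamping LOD to [0, 0.25] with a
// NEAREST mipmap mode keeps sampling on the base mip level.
VkSamplerCreateInfo MakeNoMipSampler() {
    VkSamplerCreateInfo createInfo{};
    createInfo.sType = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO;
    createInfo.magFilter = VK_FILTER_LINEAR;
    createInfo.minFilter = VK_FILTER_LINEAR;
    createInfo.mipmapMode = VK_SAMPLER_MIPMAP_MODE_NEAREST;
    createInfo.minLod = 0.0f;
    createInfo.maxLod = 0.25f; // clamps sampling to the base mip level
    return createInfo;
}
```
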
Billy Laws
9d50b6d0f7 Avoid locking presentation mutex in GetTransformHint
This caused slowdown in Pokemon as it was being called every frame
2022-08-08 14:57:44 +01:00
Billy Laws
460e6c9c84 Use raw pointers to hold constant buffer views
The constant destruction and creation of `BufferView`s in cbuf-heavy games showed up as a large chunk of profiler time. Fix this by taking advantage of the fact that constant buffer `BufferView`s are never deleted and are always kept around in the cache, and just return a pointer to the cached entry.
2022-08-08 14:54:57 +01:00
Billy Laws
6b2e84712b Avoid race in nvdrv debug prints
Looking up the device name without locking it could race with map insertions or deletions, so lock it to avoid that
2022-08-08 13:24:23 +01:00
Billy Laws
683cd594ad Use a linear allocator for most per-execution GPU allocations
Currently we heavily thrash the heap each draw, with malloc/free taking up about 10% of GPFIFO's execution time. Using a linear allocator for the main offenders, buffer usage callbacks and index/vertex state, helps reduce this to about 4% (see the sketch below).
2022-08-08 13:24:21 +01:00
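A minimal sketch of a bump/linear allocator of the kind described, under assumed names; the real allocator presumably chains additional slabs rather than failing when the slab is exhausted:

```cpp
#include <cstddef>
#include <vector>

// Allocations are a pointer bump into a preallocated slab and everything is
// released at once when the execution ends, avoiding per-draw malloc/free.
class LinearAllocatorSketch {
  public:
    explicit LinearAllocatorSketch(std::size_t size) : slab(size) {}

    void *Allocate(std::size_t size, std::size_t alignment = alignof(std::max_align_t)) {
        offset = (offset + alignment - 1) & ~(alignment - 1); // align the bump pointer
        if (offset + size > slab.size())
            return nullptr; // a real allocator would chain another slab here
        void *result{slab.data() + offset};
        offset += size;
        return result;
    }

    void Reset() { offset = 0; } // called at the end of every execution

  private:
    std::vector<std::byte> slab;
    std::size_t offset{};
};
```
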
Billy Laws
70eec5a414 Store delegate attached state within the delegate itself
Avoids a costly map lookup for every AttachBuffer call; this was a serious bottleneck in SMO.
2022-08-08 13:23:26 +01:00
Billy Laws
0268e1d5a0 Force a submit before any i2m engine writes
We need traps to be in place so we don't end up overwriting a resource that's being actively used by the current context without setting it to dirty.
2022-08-08 13:22:37 +01:00
Billy Laws
cb0b132486 Allow supplying push constants to GetPipeline 2022-08-08 13:22:37 +01:00
Billy Laws
1c8863ec3b Use const references for holding pipeline state in pipeline cache
Allows constexpr structs to be passed in as state directly.
2022-08-08 13:22:37 +01:00
Billy Laws
b6b04fa6c5 Use small_vector for VMM TranslateRange results
This was the source of a lot of heap allocations; moving to small_vector helps avoid most of them (see the sketch below).
2022-08-08 13:22:37 +01:00
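A hedged sketch of the change, assuming Boost.Container's `small_vector` as the implementation and an illustrative `Span` type; the inline capacity of 4 is an assumption:

```cpp
#include <boost/container/small_vector.hpp>
#include <cstddef>
#include <cstdint>

// TranslateRange-style results usually contain only a handful of spans, so a
// small_vector keeps them inline on the stack and only falls back to the heap
// for unusually fragmented mappings.
struct Span {
    std::uint8_t *pointer;
    std::size_t size;
};
using TranslatedRanges = boost::container::small_vector<Span, 4>; // 4 inline spans is an assumed capacity
```
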
Billy Laws
1fe6d92970 Wait on Swapchain Image copy to complete
Certain titles could display frames out of order due to not waiting on the copy from the final RT to the swapchain image to occur. Although `PresentFrame` does wait on the syncpoint, that isn't enough to ensure the source texture is up to date since we signal syncpoints early.

By waiting on the swapchain texture after the copy is submitted, we now implicitly wait on the source texture's cycle to be signalled, and thus on the frame to be done, which fixes the issue.
2022-08-07 03:12:27 +05:30
PixelyIon
5b7572a8b3 Introduce chunked MegaBuffer allocation
After the introduction of workahead, a system to hold a single large megabuffer per submission was implemented. This worked fine for most cases; however, when many submissions were in flight at the same time, memory usage would increase dramatically due to the number of megabuffers needed. Since only one megabuffer was allowed per execution, the buffer had to be fairly large in order to accommodate the upper bound, further increasing memory usage.

This commit fixes the memory usage issue described above by allowing multiple megabuffers to be allocated per execution, as well as reused across executions. Allocations now go through a global allocator object which chooses the chunk to allocate into on a per-allocation basis; if all chunks are in use by the GPU, another chunk is allocated, which can then be reused for future allocations too (see the sketch below). This reduces Hollow Knight's megabuffer memory usage by a factor of 4 and SMO's by even more.
2022-08-07 03:12:27 +05:30
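A heavily simplified sketch of chunked allocation with reuse; the names, the chunk size, and the `gpuInUse` flag (standing in for a `FenceCycle` check) are all illustrative assumptions:

```cpp
#include <algorithm>
#include <cstddef>
#include <list>
#include <vector>

// Chunks are pooled globally and reused across executions; an allocation goes
// into the first chunk the GPU is no longer using and that has space,
// otherwise a fresh chunk is added to the pool.
class MegaBufferAllocatorSketch {
  public:
    struct Chunk {
        std::vector<std::byte> backing;
        std::size_t freeOffset{};
        bool gpuInUse{}; // stands in for checking the chunk's fence/cycle

        explicit Chunk(std::size_t size) : backing(size) {}
    };

    std::byte *Allocate(std::size_t size) {
        for (auto &chunk : chunks)
            if (!chunk.gpuInUse && chunk.freeOffset + size <= chunk.backing.size()) {
                std::byte *result{chunk.backing.data() + chunk.freeOffset};
                chunk.freeOffset += size;
                return result;
            }
        Chunk &chunk{chunks.emplace_back(std::max(size, ChunkSize))};
        chunk.freeOffset = size;
        return chunk.backing.data();
    }

  private:
    static constexpr std::size_t ChunkSize{4 * 1024 * 1024}; // assumed chunk size
    std::list<Chunk> chunks;
};
```
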
Billy Laws
99b5fc35c6 Change SegmentTable semantics to respect unset entries
Accesses to unset entries are now clearly defined as returning a zeroed-out value. The prior behavior was to optimize sets for border segments to use L2 atomicity when the specific segment had no L1 entries set. This would lead to future lookups of offsets within the same L2 segment but a different L1 entry incorrectly returning an inaccurate value, as the only prior guarantee was that lookups after setting a segment would return the same value as was set; there was no guarantee that unset segments would also consistently return unset values.

This could lead to issues in practical usage, such as `BufferManager` lookups falsely reporting the existence of a `Buffer` at a location even though the segment was never set to that value; this was problematic as raw pointers were utilized and the resulting accesses would lead to a segmentation fault.

This commit fixes the issue by introducing this guarantee and refactoring the class accordingly. It also deletes the `Set` method for setting a single entry, as its meaning was ambiguous and its functionality was more akin to the past guarantee, which no longer makes sense.

Co-authored-by: PixelyIon <pixelyion@protonmail.com>
2022-08-06 22:20:54 +05:30
PixelyIon
36b8d3c445 Account for SegmentTable insertions entirely within an L2 entry
We would always write all L1 entries that correspond to an L2 entry, even if the input range being set ended before that. This would effectively reduce the atomicity of the segment table to that of the L2 range and break API guarantees by returning entirely wrong segment values for a lookup covering a region that was overwritten.
2022-08-06 22:20:54 +05:30
PixelyIon
c72316d9f6 Rename RangeTable to SegmentTable
It was determined that `RangeTable` was too ambiguous a name, as it could be interpreted as holding ranges rather than looking them up; to avoid confusion, the terminology has been changed from `range` to `segment`. "Segment table" is clearer in describing that it is a table comprised of descriptors regarding segments, and it avoids any overlap with terminology concerning "pages", which would be overly specific for this data structure, or the ambiguous "ranges".
2022-08-06 22:20:54 +05:30
Billy Laws
5398eff045 Fix KProcess::MutexUnlock PI CAS
The PI CAS in `MutexUnlock` ends up loading `basePriority` rather than `priority`, which could lead to an infinite CAS loop when `basePriority` doesn't equal `priority` and the `highestPriorityThread`'s priority is lower than `basePriority` (see the loop-shape sketch below).
2022-08-06 22:20:54 +05:30
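For illustration, the general shape of a terminating priority CAS loop, where the expected value is loaded from the same atomic it is exchanged against; this is a generic sketch, not Skyline's exact `MutexUnlock` logic:

```cpp
#include <algorithm>
#include <atomic>
#include <cstdint>

// `expected` comes from (and is kept in sync with) the atomic being exchanged,
// so the loop terminates; using a different field (e.g. basePriority) as the
// expected value can make the compare-exchange spin forever.
void LowerPriority(std::atomic<std::int8_t> &priority, std::int8_t waiterPriority) {
    std::int8_t expected{priority.load()};
    std::int8_t desired;
    do {
        desired = std::min(expected, waiterPriority); // lower value == higher priority
    } while (!priority.compare_exchange_weak(expected, desired));
}
```
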
PixelyIon
850c0f4092 Make Texture::SynchronizeGuest Blocking
It was determined that `Texture::SynchronizeGuest`'s `TextureBufferCopy` had races that were exposed by the introduction of the cycle waiter thread: the synchronization did not take place under a locked context, so the texture could be mutated at any point, in addition to the destructor not being run during `FenceCycle::Wait` due to `shouldDestroy` being `false`.

This commit fixes the issue by making `SynchronizeGuest` entirely blocking; all usages of the function required blocking semantics regardless, so it would be pointless to retain its async nature while solving the races that arise from it being async.

Co-authored-by: Billy Laws <blaws05@gmail.com>
2022-08-06 22:20:54 +05:30
Billy Laws
77d15b02a3 Ensure backing continuity when recreating GPU dirty buffers
Since we don't call `SynchronizeHost` on source buffers which are GPU dirty, their mirrors will be out of date, and the contents of the source buffer's region in the new buffer's backing would be incorrect. By copying from the backing directly, we ensure that no writes are lost and that, if the newly created buffer needs to turn GPU dirty during recreation, no copies need to be done, since the backing is at minimum as up to date as the mirror.
2022-08-06 22:20:54 +05:30
Billy Laws
c1bf5a804a Extend stateMutex scope inside Buffer::SynchronizeHost
The code is much simpler to reason about, as it no longer requires evaluating all the potential edge cases of trap handlers in different states. It should be noted that this should not change behavior in any meaningful way; at most it can prevent a minor race where the protection could be upgraded after being downgraded by the signal handler, leading to a redundant trap.
2022-08-06 22:20:54 +05:30
PixelyIon
c3cf79cb39 Rework KThread::waiterMutex Locking
Two issues exist with locking of `KThread::waiterMutex`:
* It was not always locked when accessing waiter members such as `waitThread`, `waitKey` and `waitTag`, which would lead to a race that could end in a deadlock or, most notably, a segfault inside `UpdatePriorityInheritance`
* There could be a deadlock where `UpdatePriorityInheritance` locks a thread's `waiterMutex` and waits to get the owner's `waiterMutex`, while on another thread `MutexUnlock` holds the owner's `waiterMutex` and waits to lock the `waiterMutex` held by `UpdatePriorityInheritance`

This commit fixes both issues by adding appropriate locking to all locations where waiter members are accessed, in addition to adding a fallback mechanism inside `UpdatePriorityInheritance` that unlocks `waiterMutex` on contention to avoid a deadlock (a sketch of the fallback pattern follows below).
2022-08-06 22:20:54 +05:30
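A hedged sketch of the lock-then-try-lock fallback pattern described above; the function and parameter names are illustrative, not the actual kernel code:

```cpp
#include <mutex>

// If the owner's mutex can't be taken while our own waiterMutex is held, drop
// ours and retry instead of blocking, which breaks the lock-order inversion
// that could otherwise deadlock.
void LockBothWaiterMutexes(std::mutex &ownWaiterMutex, std::mutex &ownerWaiterMutex) {
    while (true) {
        std::unique_lock ownLock{ownWaiterMutex};
        std::unique_lock ownerLock{ownerWaiterMutex, std::try_to_lock};
        if (ownerLock.owns_lock()) {
            // ... perform the priority-inheritance update with both locks held ...
            return;
        }
        // Contention: both locks are released here (via destructors) and we retry
    }
}
```
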
PixelyIon
68615703c1 Fix KProcess/SetThreadPriority PI CAS
The condition for exiting the CAS loops was incorrect in several places, which leads to additional loops; while this doesn't make the behavior incorrect, it does lead to redundant iterations.

Co-authored-by: Billy Laws <blaws05@gmail.com>
2022-08-06 22:20:54 +05:30
PixelyIon
8fc3cc7a16 Rework Descriptor Set Allocation/Updates
A substantial amount of time would be spent on creation/destruction of `VkDescriptorSet`s, which scales up on titles doing a substantial number of draws with bindings. This leads to poor performance on those titles, as the frametime is dragged down by performing these tasks while repeatedly creating descriptor sets of the same layouts.

This commit fixes it by pooling descriptor sets per layout in a dynamically resizable pool and keeping them around rather than destroying them after usage, which means the vast majority of cases don't require a new descriptor set to even be created (see the sketch below). This significantly improves performance that would otherwise be spent on redundant destruction/recreation or push descriptor updates, which took a substantial amount of time themselves.

Additionally, the `BaseDescriptorSizes` were not kept up to date with all of the descriptor types; this led to no crashes on Adreno/Mali, as they are purely used for size calculations on either driver, but it has been corrected to avoid any future issues.
2022-08-06 22:20:54 +05:30
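A simplified sketch of per-layout descriptor set pooling using the plain Vulkan C API; the names are illustrative, and growth of the underlying `VkDescriptorPool` as well as error handling are omitted:

```cpp
#include <unordered_map>
#include <vector>
#include <vulkan/vulkan.h>

// A set is taken from the layout's free list if one exists, otherwise
// allocated; Release() returns it to the free list once the GPU has finished
// with it, instead of freeing/destroying it.
class DescriptorSetPoolSketch {
  public:
    DescriptorSetPoolSketch(VkDevice device, VkDescriptorPool pool) : device{device}, pool{pool} {}

    VkDescriptorSet Acquire(VkDescriptorSetLayout layout) {
        auto &freeList{freeSets[layout]};
        if (!freeList.empty()) {
            VkDescriptorSet set{freeList.back()};
            freeList.pop_back();
            return set;
        }

        VkDescriptorSetAllocateInfo allocateInfo{};
        allocateInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO;
        allocateInfo.descriptorPool = pool;
        allocateInfo.descriptorSetCount = 1;
        allocateInfo.pSetLayouts = &layout;

        VkDescriptorSet set = VK_NULL_HANDLE;
        vkAllocateDescriptorSets(device, &allocateInfo, &set); // error/pool-growth handling omitted
        return set;
    }

    void Release(VkDescriptorSetLayout layout, VkDescriptorSet set) {
        freeSets[layout].push_back(set); // called once the associated fence/cycle has signalled
    }

  private:
    VkDevice device;
    VkDescriptorPool pool;
    std::unordered_map<VkDescriptorSetLayout, std::vector<VkDescriptorSet>> freeSets;
};
```
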
PixelyIon
e1a4325137 Introduce FenceCycle Waiter Thread
A substantial amount of time is spent destroying dependencies by any threads waiting on or polling `FenceCycle`s; this is not optimal, as it blocks them from moving on to other tasks while destruction is a fundamentally async task that can be delayed.

This commit solves this by introducing a thread that is dedicated to waiting on every `FenceCycle`, then signalling and destroying all dependencies, which entirely fixes the issue of destruction blocking more important threads (see the sketch below).
2022-08-06 22:20:54 +05:30
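A minimal sketch of the waiter-thread idea: submitters enqueue a callback that waits on the fence and releases dependencies, and a dedicated thread drains the queue. The names and the lack of shutdown handling are illustrative simplifications:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Waiting and dependency destruction happen on a dedicated thread so that
// submitting threads never block on either.
class FenceWaiterThreadSketch {
  public:
    void Queue(std::function<void()> waitThenRelease) {
        {
            std::scoped_lock lock{mutex};
            pending.push(std::move(waitThenRelease));
        }
        condition.notify_one();
    }

  private:
    void Run() {
        while (true) { // shutdown handling omitted for brevity
            std::unique_lock lock{mutex};
            condition.wait(lock, [this] { return !pending.empty(); });
            auto task{std::move(pending.front())};
            pending.pop();
            lock.unlock();
            task(); // waits on the fence, then signals and destroys dependencies
        }
    }

    std::mutex mutex;
    std::condition_variable condition;
    std::queue<std::function<void()>> pending;
    std::thread thread{&FenceWaiterThreadSketch::Run, this};
};
```
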
PixelyIon
5f8619f791 Optimize Buffer Lookups using Range Tables
Buffer lookups are a fairly expensive operation; we currently spend `O(log n)` even on the simplest and most frequent case, a direct match, and lookups are frequent enough that this may be insufficient. This commit optimizes that case to `O(1)` by utilizing a `RangeTable`, at the cost of slightly higher insertion/deletion costs for setting ranges of values, but these are minimal in frequency compared to lookups.
2022-08-06 22:20:54 +05:30
PixelyIon
578ae86cca Implement Multi-Level Range Table
A data structure that can represent the same value for a range of addresses (pages) is required for fast lookups in certain cases. This commit implements a near-optimal data structure for mass insertion and O(1) lookup of range-based data; this is achieved using the host MMU and implementing multiple levels of atomicity for the ranges.

It should be noted that the table is limited to two levels but can be extended to a variable number of levels in the future; it was determined that additional levels of ranges can be beneficial for performance depending on the specific use case (a simplified sketch follows below).
2022-08-06 22:20:54 +05:30
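A heavily simplified, purely illustrative sketch of a two-level table with per-level atomicity; the real implementation relies on the host MMU and lazy allocation, which this ignores, and all names and sizes are assumptions:

```cpp
#include <array>
#include <cstddef>
#include <optional>

// An L2 entry either stores one value for its whole span (coarse atomicity) or
// defers to its finer-grained L1 entries; unset entries read back as a
// value-initialised (zeroed) Value.
template<typename Value, std::size_t L1Bits = 8, std::size_t L2Count = 64>
struct SegmentTableSketch {
    static constexpr std::size_t L1Count{std::size_t{1} << L1Bits};

    struct L2Entry {
        std::optional<Value> whole;      // set when the entire span shares one value
        std::array<Value, L1Count> l1{}; // value-initialised fine-grained entries
    };

    std::array<L2Entry, L2Count> l2{};

    Value operator[](std::size_t index) const { // bounds checking omitted
        const L2Entry &entry{l2[index >> L1Bits]};
        return entry.whole ? *entry.whole : entry.l1[index & (L1Count - 1)];
    }
};
```
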
Billy Laws
38eab80ed8 Disable Vulkan Push Descriptors on Adreno
Adreno drivers have certain errata which cause Vulkan push descriptors to be broken in certain cases, leading to a descriptor set update being swallowed. This has been worked around by disabling push descriptors on Adreno drivers; this may reduce performance on certain titles which frequently bind new descriptors.
2022-08-06 22:20:54 +05:30