B5G6R5 isn't generally supported as a swapchain format; in Skyline it's used to represent R5G6B5 with swapped R/B channels to avoid format aliasing. We reverse that by using R5G6B5 as the underlying Vulkan format for the swapchain, which the driver should handle automatically for any copies from B5G6R5 textures. Since the data representation is the same as B5G6R5 with swapped R/B channels, not reporting the correct `texture::Format` should be fine.
The DMA engine is used to perform DMA buffer/texture copies directly on the GPU. It can deswizzle arbitrary regions of input textures, perform component remapping, and swizzle into output textures.
This implementation only supports 1D buffer copies; 2D copies will come later.
If we have an Nx1x1 image, determining the type from its dimensions will result in a 1D image being created, preventing us from creating a 2D view. By using the image view type instead, we can avoid this for textures from TICs since we know in advance how they will be used.
This enforces that the depth RT outlives the draw; without this, the depth RT could be freed while in active use by the command executor, leading to UAFs and crashes.
This was erroneously included while migrating from older code, where stack creation was handled entirely with host constructs such as `mmap` directly, to using `KPrivateMemory` to manage it. We would create a guard page with `mprotect` that the guest was unaware of, which would cause a segfault when the guest accessed the extents of the stack as reported to it.
A partial implementation of the `GetThreadContext3` SVC; we cannot return the whole thread context as the kernel only stores the registers we need according to the ARMv8 ABI convention. Usages of this SVC so far do not require the unavailable registers, but all future usage must be monitored and may require extending the set of saved registers.
The vibration device previously had to be set manually, which led to it generally not being set at all even when a user might want vibration. This commit fixes that by making controller #0 use the built-in vibrator by default.
Any Skyline files that should have been user-accessible were moved from `/data/data/skyline.emu/files` to `/sdcard/Android/data/skyline.emu/files`, as the former directory is entirely private and cannot be accessed without either adb or root. This made retrieving certain data, such as saves, or loading custom driver shared objects extremely hard; it can now be done trivially.
In some games such as SMO, thousands of constant buffers are bound per frame, causing an unreasonable number of lookups in both the VMM and the buffer manager. Work around this by introducing a simple hashmap-based cache; eviction is currently unsupported but not really necessary yet due to the small size of the buffers in the cache.
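
A minimal illustration of the kind of lookup cache described above, assuming hypothetical `ConstantBufferView`/`Key` types and a caller-supplied lookup callback standing in for the VMM and buffer manager; the real cache in Skyline may be keyed and structured differently:

```cpp
#include <cstdint>
#include <functional>
#include <unordered_map>

// Hypothetical stand-in for the real constant buffer view handle
struct ConstantBufferView {};

class ConstantBufferCache {
  private:
    struct Key {
        std::uint64_t iova; //!< Guest GPU virtual address of the buffer
        std::uint64_t size;

        bool operator==(const Key &) const = default;
    };

    struct KeyHash {
        std::size_t operator()(const Key &key) const {
            return std::hash<std::uint64_t>{}(key.iova) ^ (std::hash<std::uint64_t>{}(key.size) << 1);
        }
    };

    std::unordered_map<Key, ConstantBufferView, KeyHash> cache; //!< No eviction, the cached buffers are small

  public:
    // 'lookup' performs the expensive VMM + buffer manager resolution on a miss
    ConstantBufferView Get(std::uint64_t iova, std::uint64_t size,
                           const std::function<ConstantBufferView(std::uint64_t, std::uint64_t)> &lookup) {
        Key key{iova, size};
        if (auto it{cache.find(key)}; it != cache.end())
            return it->second; // Hit: skip the expensive lookups entirely

        auto view{lookup(iova, size)};
        cache.emplace(key, view);
        return view;
    }
};
```
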
We cannot ignore host accesses to a region protected by the NCE Memory Trapping API; accesses often unintentionally overlap a protected region, and those accesses need to be handled correctly rather than leading to a crash. This is done by implementing an additional signal handler, `NCE::HostSignalHandler`, to look up any potential traps on a `SIGSEGV` and handle them correctly; when there isn't a corresponding trap, it raises a `SIGTRAP` if a debugger is connected or delegates to `signal::ExceptionalSignalHandler` when one isn't.
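
A rough sketch of the dispatch flow described above, with placeholder helpers (`TryHandleTrap`, `IsDebuggerConnected`) standing in for the real trap bookkeeping and debugger detection, and the final fallback simplified to re-raising the signal instead of the actual `signal::ExceptionalSignalHandler`:

```cpp
#include <signal.h>

namespace nce {
    // Placeholder: would consult the memory-trapping API's region map and run the
    // associated callbacks, returning true if the fault was handled there
    bool TryHandleTrap(void *faultAddress) {
        (void)faultAddress;
        return false;
    }

    // Placeholder: the real check detects whether a debugger is attached
    bool IsDebuggerConnected() {
        return false;
    }

    void HostSignalHandler(int signum, siginfo_t *info, void *context) {
        (void)context;

        // First see if the faulting address falls inside a trapped region and let the
        // trap callbacks handle it (e.g. synchronize a texture/buffer and fix protections)
        if (TryHandleTrap(info->si_addr))
            return; // The access can now be retried successfully

        if (IsDebuggerConnected()) {
            raise(SIGTRAP); // Surface the genuine fault to the attached debugger
        } else {
            // The real handler delegates to signal::ExceptionalSignalHandler; as a
            // stand-in, restore the default action and re-raise to terminate
            signal(signum, SIG_DFL);
            raise(signum);
        }
    }
}
```
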
To cut down memory usage, we now page out memory that is RW-trapped via the NCE Memory Trapping API; the callbacks are expected to page the memory back in. This behavior is backed up by Texture/Buffer syncing, which reads the host copies of the data and writes them to the guest; by paging out the corresponding guest data we avoid redundant memory usage.
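
One plausible way to page out such a region, not necessarily the exact mechanism used here: `madvise` on the trapped range, with the trap callback responsible for restoring (paging in) the data before the guest reads it:

```cpp
#include <cstddef>
#include <sys/mman.h>

// For a private anonymous mapping MADV_DONTNEED drops the backing pages (later reads
// return zeroes); a shared, file-backed mapping would instead need the hole punched in
// the backing file (e.g. MADV_REMOVE) to actually release memory
void PageOutTrappedRegion(void *ptr, std::size_t size) {
    madvise(ptr, size, MADV_DONTNEED);
}
```
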
The `FileDescriptor` class is a RAII wrapper over FDs which handles their lifetimes alongside other C++ semantics such as moving and copying. It is used in `skyline::kernel::MemoryManager` to handle the lifetime of the ashmem FD correctly; it wasn't being destroyed earlier, which could result in leaking FDs across runs.
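
A simplified sketch of what such an FD-owning RAII wrapper can look like; the real `skyline::FileDescriptor` interface may differ in details:

```cpp
#include <unistd.h>
#include <utility>

class FileDescriptor {
  private:
    int fd{-1};

  public:
    FileDescriptor() = default;

    explicit FileDescriptor(int fd) : fd{fd} {}

    // Copying duplicates the underlying FD so both objects own an independent handle
    FileDescriptor(const FileDescriptor &other) : fd{other.fd >= 0 ? dup(other.fd) : -1} {}

    // Moving transfers ownership and leaves the source empty
    FileDescriptor(FileDescriptor &&other) noexcept : fd{std::exchange(other.fd, -1)} {}

    FileDescriptor &operator=(FileDescriptor &&other) noexcept {
        if (this != &other) {
            Reset();
            fd = std::exchange(other.fd, -1);
        }
        return *this;
    }

    ~FileDescriptor() {
        Reset();
    }

    void Reset() {
        if (fd >= 0) {
            close(fd); // Closing here is what prevents FDs from leaking across runs
            fd = -1;
        }
    }

    int Get() const {
        return fd;
    }
};
```
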
Initially this commit was only intended to update LLVM, but the latest LLVM libcxx no longer transitively includes the C++ stdlib header `<algorithm>` (as of https://reviews.llvm.org/D119667), which caused compilation errors. This required changes in Skyline and Oboe; the Oboe changes were made in https://github.com/google/oboe/pull/1521 and the submodule has been updated to include them.
These are mostly used in 3D games like SMO; support is still quite basic and synchronizing block-linear 3D textures will crash in most cases as it is unimplemented.
Some games crash due to requiring an `audren` version greater than 7. The `audren` version can be increased without any issues as `audren` is stubbed and therefore the reported version doesn't matter.
Older Adreno proprietary drivers (5xx and below) will segfault while destroying the renderpass and associated objects if more than 64 subpasses are within a renderpass due to internal driver implementation details. This commit introduces checks to automatically break up a renderpass when that limit is hit.
We now support overlapping buffers, which allows us to merge many smaller buffers located on a single page into a single larger buffer for better performance. It additionally ensures that all host buffers match the alignment guarantees of the guest and adequately fulfill host alignment requirements.
This commit encapsulates a complex sequence of cascading changes in the process of supporting overlaps for buffers:
* We determined that it is impossible to resolve overlaps with multiple intervals per buffer within the constraint that each overlap is a contiguous view, so support for multiple intervals was dropped. The older buffer manager code was entirely reworked to be simpler since it only handles one interval per buffer, with code now based on `IntervalMap` but tailored specifically for buffers.
* During overlap resolution, the problem arose of how existing views into a buffer being recreated would be updated: the buffer had to be replaced with a larger buffer that could contain all overlaps, and all existing views needed to be repointed to it. This was addressed by having a buffer own all views into itself, so we can automatically recalculate the offsets of all views and update them to point at the new buffer.
* We still needed to update usage of existing views, which is done by routing all access to buffer view properties (such as inside a recorded draw) through `BufferView::RegisterUsage`, which dispatches a callback with the view and the corresponding backing buffer. This callback can be stored and called during overlap resolution with the new buffer.
* We had issues with buffer lifetime under the handle-like semantics of `BufferView` introduced in the last buffer-related commit: if a view is repointed to a new buffer, we need to extend the lifetime of the new buffer rather than the older one. The only way to do this was a proxy owner object, `BufferDelegate`, which holds a shared pointer to the real `Buffer`, which in turn holds pointers to all `BufferDelegate` objects so it can update them on repointing. A `BufferView` is effectively just a wrapper around `std::shared_ptr<BufferDelegate>` with more favorable semantics, generally just forwarding calls (see the sketch below).
It should additionally be noted that, to support usage of `RegisterUsage`, the code around buffers in `GraphicsContext` was refactored to defer actually binding until the recording phase.
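
A heavily simplified sketch of the ownership scheme these notes describe, with illustrative member names and all locking, usage callbacks and GPU state elided:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

struct Buffer;

struct BufferDelegate {
    std::shared_ptr<Buffer> buffer; //!< Keeps the (possibly repointed) backing buffer alive
    std::size_t offset;             //!< Offset of the view within the current backing buffer
    std::size_t size;
};

// Note: a Buffer must itself be owned by a std::shared_ptr for shared_from_this() to work
struct Buffer : std::enable_shared_from_this<Buffer> {
    std::vector<BufferDelegate *> delegates; //!< Every view delegate currently pointing at this buffer

    // During overlap resolution the new, larger buffer adopts all delegates of the
    // buffers it replaces and fixes up their offsets relative to itself
    void AdoptViewsFrom(Buffer &old, std::size_t oldOffsetInNew) {
        for (auto *delegate : old.delegates) {
            delegate->buffer = shared_from_this();
            delegate->offset += oldOffsetInNew;
            delegates.push_back(delegate);
        }
        old.delegates.clear();
    }
};

// A BufferView is effectively a cheap handle around a shared delegate which
// transparently follows any repointing done by AdoptViewsFrom
struct BufferView {
    std::shared_ptr<BufferDelegate> delegate;
};
```
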
Due to an oversight, we weren't clearing the list of buffers that needed to be synced after every execution, which led to them building up. Because buffer synchronization is relatively cheap and only done on faults, this wasn't caught until now, but it does depress the framerate significantly over time as the list grows into the range of 100k buffer views depending on the title.
The Kepler compute engine is used to run compute jobs encapsulated in QMDs on the GPU; this commit doesn't implement compute itself but adds the register and QMD structs that will be needed for it in the future.
We wanted views to extend the lifetime of the underlying buffers while also preserving all views until the destruction of the buffer, preventing recreation that might be costly in the future when we need `VkBufferView`s of the buffer; we also require a centralized list of all views for recreation of the buffer. This additionally removes the inconsistency of `BufferView*` being returned by `GetXView` in `GraphicsContext`.
Aliased descriptor sets are incorrectly interpreted by the shader compiler, causing it to mangle LLVM function argument types and crash.
Co-authored-by: PixelyIon <pixelyion@protonmail.com>
This controls the depth range used by the shader; hades already supports the necessary patching, so we only need to pass the current mode over to it and it will do the necessary work.
Using `eB5G6R5UnormPack16` (with a swizzle for `R5G6B5Unorm`) removes the need for `VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT` when those formats are aliased, which happens in Sonic Mania among other titles.
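
A sketch of what such a view could look like with vulkan.hpp: the view keeps the image's `eB5G6R5UnormPack16` format and swaps the R/B channels via the component mapping, which works because both formats share the 5-6-5 bit layout; the helper name is illustrative:

```cpp
#include <vulkan/vulkan.hpp>

// The view keeps the image's own format and remaps channels instead of aliasing a
// second format, so VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT isn't needed on the image
vk::ImageViewCreateInfo MakeR5G6B5View(vk::Image image) {
    return vk::ImageViewCreateInfo{}
        .setImage(image)
        .setViewType(vk::ImageViewType::e2D)
        .setFormat(vk::Format::eB5G6R5UnormPack16) // Same Vulkan format as the image itself
        // Swapping R and B is exact since both formats share the 5-6-5 channel widths
        .setComponents({vk::ComponentSwizzle::eB, vk::ComponentSwizzle::eG,
                        vk::ComponentSwizzle::eR, vk::ComponentSwizzle::eOne})
        .setSubresourceRange({vk::ImageAspectFlagBits::eColor, 0, 1, 0, 1});
}
```
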
Adreno GPUs suffer significant performance penalties from the usage of `VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT`, which requires disabling UBWC and, on Turnip, forces linear tiling. As a result, it's been made an optional quirk which doesn't supply the flag in `VkImageCreateInfo` and logs a warning if a view with a different Vulkan format from the original image is created.
We often need to alias the underlying data as multiple Vulkan formats, which requires the `eMutableFormat` bit to be set in `VkImageCreateInfo`; without it there will be validation layer errors and potentially GPU bugs.
As we no longer set the layout to `eGeneral` inside the Texture constructor, yet we need it to be set prior to the image being used as an attachment, we transition the layout to `eGeneral` after creation of the texture object.
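
A minimal sketch of such a transition recorded via an image memory barrier; the stage/access masks are intentionally broad and the real code may record this differently:

```cpp
#include <vulkan/vulkan.hpp>

void TransitionToGeneral(vk::CommandBuffer commandBuffer, vk::Image image, vk::ImageSubresourceRange range) {
    auto barrier{vk::ImageMemoryBarrier{}
                     .setSrcAccessMask({})
                     .setDstAccessMask(vk::AccessFlagBits::eMemoryRead | vk::AccessFlagBits::eMemoryWrite)
                     .setOldLayout(vk::ImageLayout::eUndefined) // Contents are irrelevant right after creation
                     .setNewLayout(vk::ImageLayout::eGeneral)
                     .setSrcQueueFamilyIndex(VK_QUEUE_FAMILY_IGNORED)
                     .setDstQueueFamilyIndex(VK_QUEUE_FAMILY_IGNORED)
                     .setImage(image)
                     .setSubresourceRange(range)};

    commandBuffer.pipelineBarrier(vk::PipelineStageFlagBits::eTopOfPipe, vk::PipelineStageFlagBits::eAllCommands,
                                  {}, {}, {}, barrier);
}
```
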
Any `RecyclerView` with an app bar in a `CoordinatorLayout` would end up going off-screen because the layout behavior implements its offset with a transform, which doesn't correctly handle focusing on off-screen items. This has now been fixed by manually adjusting the height so it's clipped to what is visible on the screen.
We collapse the app bar when focus moves to the app list, which only occurs while using a controller; this is required as the app bar would never be collapsed otherwise. This also removes the older workaround for the `View.FOCUS_DOWN` limitation, which collapsed the bar only when the end of the list was reached.
Removes card elevation as it visually conflicts with the scrim; this also makes the scrim a bit darker to emphasize the text and slightly reduces the border radius.
The entire layout is now selectable for grid items rather than just the card; this greatly increases the visibility of the selection when not in touch mode, as the contrast of the darkening effect on the icon can be minimal depending on how dark the icon already is.
The `InputStream` would not be closed after reading the key file in `KeyReader#import`; it's now wrapped with `use { }`, which handles closing the stream after usage.
Setting the refresh rate via the Display API's `preferredDisplayModeId` is an outdated method on Android 11 and above; we now use `Surface#setFrameRate` alongside it to suggest a refresh rate for the display.
We incorrectly determined an Adreno driver bug to require padding between binding slots, but the real issue was a lack of support for consecutive binding writes of `VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER`, which the padding slot had unintentionally fixed by forcing individual writes. The quirk has now been corrected to explicitly describe this bug, and the solution is more apt.
Any lookups done using `GetAlignedRecursiveRange` incorrectly added intervals in the exclusive interval entry lookups, as the condition for adding them was the reverse of what it should have been due to a last-minute refactor; this led to graphical glitches and crashes. It has been fixed and the lookups now return the correct results.
On certain devices, accesses to a protected memory region can return an `si_code` other than `SEGV_ACCERR`. As we only pass access violations to the trap handler, those accesses would not be trapped on such devices and would instead end up in the crash handler.
A large number of Texture/Buffer views would expire before they could be reused in `Texture::GetView`/`Buffer::GetView`. These can lead to substantial memory usage given enough time; they are now deleted during the lookup while iterating over all entries.
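
An illustrative sketch of pruning expired entries during the lookup itself, assuming the views are tracked as `std::weak_ptr`s; the types and matching predicate are stand-ins:

```cpp
#include <list>
#include <memory>

// Hypothetical stand-ins for the real view type and its creation parameters
struct ViewParameters { /* format, range, etc. */ };
struct View { ViewParameters parameters; };

std::shared_ptr<View> GetView(std::list<std::weak_ptr<View>> &views, const ViewParameters &parameters,
                              bool (*matches)(const View &, const ViewParameters &)) {
    for (auto it{views.begin()}; it != views.end();) {
        if (auto view{it->lock()}) {
            if (matches(*view, parameters))
                return view; // Reuse a still-live view with matching parameters
            ++it;
        } else {
            it = views.erase(it); // The view expired before reuse, drop its entry here
        }
    }
    return nullptr; // The caller creates a new view and appends a weak_ptr to 'views'
}
```
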
It should be noted that there are a lot of duplicate views that don't live long enough to be reused and the ultimate solution here is to make those views live long enough to be reused.
Similar to the constant redundant synchronization of textures, there is a lot of redundant synchronization of buffers. While buffer synchronization is far cheaper than texture synchronization, it still has associated costs, which have now been reduced by only synchronizing on access.
There was a lot of redundant synchronization of textures to and from the host, as we were not aware of guest memory accesses. This has now been averted by tracking memory accesses to texture memory using the NCE Memory Trapping API and synchronizing only when required.
An API for trapping accesses to guest memory and performing callbacks based on those accesses alongside managing protection of the memory. This is a fundamental building block for avoiding redundant synchronization of resources from the guest and host.
Note: All accesses are treated as write accesses at the moment, support for picking up read accesses will be implemented later
An interval map is a crucial piece of infrastructure required for memory faulting to track any regions that have an associated callback and their protection. Additionally, efficient page-aligned lookups with semantics suited to memory faulting are required, as is the ability to associate multiple regions with a single callback/protection entry rather than doing so on a per-region basis, since we deal with split-mapped resources.
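
A heavily simplified sketch of the shape such an interval map can take: one entry covers several disjoint regions (for split-mapped resources) and lookups are widened to page granularity. The real `IntervalMap` keeps its entries indexed for efficient lookups rather than scanning linearly as this sketch does:

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

constexpr std::size_t PageSize{0x1000};

// One entry may cover several disjoint regions that share a callback and protection
struct TrapRegion {
    std::uintptr_t start;
    std::uintptr_t end;
};

struct TrapEntry {
    std::vector<TrapRegion> regions;
    int protection;                 //!< The protection currently applied to all regions
    std::function<void()> callback; //!< Invoked when any of the regions is accessed
};

// Returns every entry overlapping the page-aligned widening of [address, address + size)
std::vector<TrapEntry *> LookupPageAligned(std::vector<TrapEntry> &entries, std::uintptr_t address, std::size_t size) {
    auto alignedStart{address & ~(PageSize - 1)};
    auto alignedEnd{(address + size + PageSize - 1) & ~(PageSize - 1)};

    std::vector<TrapEntry *> result;
    for (auto &entry : entries) {
        for (const auto &region : entry.regions) {
            if (region.start < alignedEnd && region.end > alignedStart) {
                result.push_back(&entry);
                break;
            }
        }
    }
    return result;
}
```
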
This is a prerequisite to memory trapping, as we need to write to the mirror to avoid a race condition with external threads writing to a texture/buffer while we do so ourselves during a sync on read/write; it also avoids an additional `mprotect` to `-WX`/`RWX` on a read access.
An additional advantage, especially for textures, is that we now support split-mapped textures by laying them out in a contiguous mirror, without requiring costly algorithmic changes. Buffers should also benefit from not needing to iterate over every region when they are split into multiple mappings.
`CreateMirror` is limited to creating a mirror of a single contiguous region, which doesn't work when we need a contiguous mirror of multiple non-contiguous regions. To support this, `CreateMirrors` was added; it expects a list of page-aligned regions and maps them into a contiguous mirror.
We want to create arbitrary mirrors in the guest address space; to make this possible, we map the entire address space as a shared memory file. A mirror is then mapped by calling `mmap` with the offset into the guest address space.
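
A hedged sketch of how a `CreateMirrors`-style routine could map several page-aligned guest regions into one contiguous mirror, assuming the guest address space is backed by a shared-memory file as described above; `backingFd`, `base` and `GuestRegion` are illustrative names and error paths are simplified:

```cpp
#include <cstddef>
#include <cstdint>
#include <sys/mman.h>
#include <vector>

struct GuestRegion {
    std::uint8_t *ptr; //!< Address of the region inside the guest address space mapping
    std::size_t size;  //!< Page-aligned size of the region
};

// 'backingFd' is the shared-memory file backing the whole guest address space and
// 'base' is the address it's mapped at; error handling is reduced to returning nullptr
std::uint8_t *CreateMirrors(int backingFd, std::uint8_t *base, const std::vector<GuestRegion> &regions) {
    std::size_t totalSize{};
    for (const auto &region : regions)
        totalSize += region.size;

    // Reserve one contiguous block of address space to hold the entire mirror
    auto mirror{static_cast<std::uint8_t *>(mmap(nullptr, totalSize, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0))};
    if (mirror == MAP_FAILED)
        return nullptr;

    std::size_t offset{};
    for (const auto &region : regions) {
        // Map the same backing pages again inside the mirror, the file offset being the
        // region's offset into the guest address space
        if (mmap(mirror + offset, region.size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, backingFd,
                 static_cast<off_t>(region.ptr - base)) == MAP_FAILED)
            return nullptr;
        offset += region.size;
    }
    return mirror;
}
```
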
Previously, for methods with count > 1, the subchannel and engine would be looked up for each part of the method rather than only at the start, and each call also had to be checked to see if it touched a macro or GPFIFO method. Fix this by doing those checks outside of the main dispatch loop, with templated helper lambdas to avoid repeating lots of code. Maxwell3D is the only subchannel with a fast path for now, but more can be added later if needed.
Almost every Maxwell format now directly corresponds to a Vulkan format. This allows formats to be passed through and the swizzle from the guest to be used directly (with some extra swizzle handling for edge cases), saving the need to explicitly support each swizzle combination, which adds a lot of code bloat. The format header is additionally reordered with line breaks to separate formats by their bits-per-block.
We always submitted pipeline vertex input divisor descriptions regardless of whether the binding input rate was per-vertex rather than per-instance. This is invalid behavior and has been fixed by only submitting divisor descriptions for bindings whose input rate is per-instance.
Adreno proprietary drivers suffer from a bug where `VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER` requires 2 descriptor slots rather than one; we add a padding slot to work around this. `QuirkManager` was introduced to handle per-vendor/per-device errata and allows enabling this on Adreno proprietary drivers specifically, so as to not affect the performance of other devices.
The "quirk" terminology was deemed inappropriate for describing the features/extensions of a device. It has been replaced with "traits", which is far more fitting, while "quirks" remains the terminology for errata in devices.
The texture handle offset calculation involved an incorrect shift by the descriptor size, which was found to be unnecessary and would result in an invalid handle with the wrong TIC/TSC index, causing broken rendering.
`nodes` and `syncTextures` were cleared after waiting on the `CommandExecutor` fence rather than before, wasting execution time after the wait on work that could have been done prior to it.
We now attempt to enable `VK_KHR_uniform_buffer_standard_layout` when present, as lax UBO layout significantly reduces complexity. If a device doesn't support the extension, we still assume it supports the behavior implicitly, as this has proven true across all major mobile GPU vendors regardless of driver version, but enabling the extension prevents validation layer errors.
We depend on past commands having completed execution within a renderpass; a subpass dependency on all graphics stages from `VK_SUBPASS_EXTERNAL` to subpass #0 is used to enforce this. Nvidia and Adreno proprietary drivers do this implicitly, but Turnip and Mali drivers require the dependency or they execute out of order.
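
A sketch of such a dependency, making previously submitted graphics work complete before subpass #0 executes; the access masks here are deliberately coarse:

```cpp
#include <vulkan/vulkan.hpp>

// The returned structure is supplied via VkRenderPassCreateInfo::pDependencies
vk::SubpassDependency MakeExternalDependency() {
    return vk::SubpassDependency{}
        .setSrcSubpass(VK_SUBPASS_EXTERNAL) // Everything recorded prior to the renderpass
        .setDstSubpass(0)                   // The first subpass within the renderpass
        .setSrcStageMask(vk::PipelineStageFlagBits::eAllGraphics)
        .setDstStageMask(vk::PipelineStageFlagBits::eAllGraphics)
        .setSrcAccessMask(vk::AccessFlagBits::eMemoryWrite)
        .setDstAccessMask(vk::AccessFlagBits::eMemoryRead | vk::AccessFlagBits::eMemoryWrite);
}
```
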
Blocklinear texture decoding was broken for padding blocks and would decode them incorrectly, resulting in major texture corruption for any textures with widths not aligned to 64 bytes. This has now been fixed with neater code which avoids redundant repetition by using lambdas and functions where necessary.
Stencil operations can be configured to be the same for both faces or to have independent stencil state per face. This is controlled via the previously unimplemented `stencilTwoSideEnable` register.
Fermi2D supports macros in addition to Maxwell3D, and the two share code memory. To support this, we rework the macro interpreter to take a target engine and abstract the communication out into an interface that applicable engines can implement (a sketch of such an interface follows the diagram below).
```
GPFIFO <-> MME <-> Maxwell3D
^ ^---> Fermi2D
X------------> I2M
X------------> MaxwellComputeB
X--Flush-----> MaxwellDMA
```
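
An illustrative sketch of the kind of interface this abstraction implies, with hypothetical names; both Maxwell3D and Fermi2D would implement it while sharing the same macro code memory:

```cpp
#include <cstdint>

namespace macro {
    // The macro interpreter only sees this interface, so the shared macro code memory
    // can target whichever engine an invocation was dispatched for
    struct EngineInterface {
        virtual ~EngineInterface() = default;

        // Called whenever a macro writes to a method of the target engine
        virtual void CallMethodFromMacro(std::uint32_t method, std::uint32_t argument) = 0;

        // Called when a macro reads back the value of a method/register of the target engine
        virtual std::uint32_t ReadMethodFromMacro(std::uint32_t method) = 0;
    };

    // Executes the macro at 'position' in the shared code memory against 'target'
    void Execute(EngineInterface &target, std::uint32_t position, std::uint32_t parameter);
}
```
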
Shader programs allocate instructions and blocks within an `ObjectPool`; previously there was a global pool that was never reaped except on destruction. This led to a leak where the pool would retain resources from shader programs that had been deleted; to avert this, the pools are now tied to shader programs.
The size of blocklinear textures did not consider alignment to Block/ROB boundaries before; it is aligned to them now. The incorrect sizes led to textures not being aliased correctly due to differing size calculations for GraphicBufferProducer surfaces and Maxwell3D color RTs.
`erase` invalidated `it`, leading to a potential segfault if the GPU was very far behind; bail out early to avoid that, since there can only be one occurrence at most in the buffer anyway.
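
The pattern behind the fix, shown on a stand-in container: stop iterating immediately after `erase` instead of continuing with the invalidated iterator:

```cpp
#include <cstdint>
#include <deque>

// Removes the single matching element (if any) and bails out immediately since
// continuing the loop would dereference an invalidated iterator
void RemoveBuffer(std::deque<std::uint64_t> &queuedBuffers, std::uint64_t target) {
    for (auto it{queuedBuffers.begin()}; it != queuedBuffers.end(); ++it) {
        if (*it == target) {
            queuedBuffers.erase(it);
            break; // At most one occurrence can exist, so no need to keep iterating
        }
    }
}
```
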
Implements the entirety of Maxwell3D depth/stencil state for both faces, including compare/write masks and the reference value. The Maxwell3D register `stencilTwoSideEnable` is ignored as its behavior is unknown; it could mean the same behavior for both stencils or the back-facing stencil being disabled, so it is unimplemented as a result.
We don't respect the host subresource layout when synchronizing linear textures from the guest to the host while they're mapped to memory directly, which leads to texture corruption. The real fix would involve respecting the host subresource layout, but this has been deferred so the real-world performance advantages/disadvantages of such a change can be observed more carefully to determine if it's worth it.
Color RTs are disabled by setting their format to `None`; handling for this was removed while transitioning to macros and resulted in a missing-format exception. It has been re-added as several applications depend on this behavior.
Using `std::vector` for shader bytecode led to a lot of reallocation due to constant resizing. Switching to a static vector allows a single static allocation of the maximum possible guest shader size (1 MiB) per stage, resulting in a 6 MiB preallocation, which is unnoticeable given the total memory overhead of running a Switch application.
The `OneMinusSourceAlpha` blending factor was converted to `eOneMinusSrcColor` rather than `eOneMinusSrcAlpha`, leading to incorrect blending behavior in certain titles. There was a similar issue with the order of `MinimumGL`/`MaximumGL` and `SubtractGL`/`ReverseSubtractGL` being the opposite of what it should have been; both of these issues have been fixed.
`NextSubpassNode` didn't increment `subpassIndex`, which ran commands with the wrong subpass index, resulting in them accessing invalid attachments or hitting other bugs that arise from using the wrong subpass.
All Maxwell3D state was passed by reference into the draw command lambda, which would break if there was more than one pass or the state changed in any way before execution. All state is now serialized by value into the draw command lambda capture, retaining it regardless of mutations of the class state.
Any usage of a resource in a command now requires attaching that resource explicitly; it will not be implicitly attached on usage. This makes attaching of resources consistent and allows for more lax locking requirements, as resources can be locked while attaching and don't need to be for any commands; it also avoids redundantly attaching a resource in certain cases.
If an object was attached to a `FenceCycle` twice, `FenceCycleDependency::next` would be overwritten, leading to destruction of dependencies prior to the fence being signaled and thus usage of deleted resources. This commit fixes that by tracking which fence cycle a dependency is currently attached to and not reattaching it if it's already attached to the current cycle.
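
A simplified sketch of the guard described above, with the intrusive dependency chain reduced to its essentials:

```cpp
#include <memory>

struct FenceCycle;

struct FenceCycleDependency {
    std::shared_ptr<FenceCycleDependency> next; //!< Next dependency, kept alive until the fence signals
    FenceCycle *cycle{};                        //!< The cycle this dependency is currently attached to
};

struct FenceCycle {
    std::shared_ptr<FenceCycleDependency> list; //!< Head of the dependency chain

    void AttachObject(const std::shared_ptr<FenceCycleDependency> &dependency) {
        if (dependency->cycle == this)
            return; // Already attached to this cycle, don't overwrite 'next' and break the chain

        dependency->next = list; // Chain in front of the current head
        dependency->cycle = this;
        list = dependency;
    }
};
```
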
An assumption was hardcoded into `Shader::Profile` that devices support demotion of shader invocations to helpers. This assumption wasn't backed by enabling the `VK_EXT_shader_demote_to_helper_invocation` extension via a quirk, leading to assertions when the shader compiler used it; a quirk has now been added for the extension and is supplied to the shader compiler accordingly.
If the controller type was changed from a type with a larger number of buttons/axes to one with fewer, a crash would occur: the transition animation retained those elements as children, yet `getChildAdapterPosition` returned `NO_POSITION` in `DividerItemDecoration`, which was an unhandled case and led to an OOB array access.
A bug caused by not passing the index argument to `ControllerActivity` led to all preferences opening the activity pertaining to Controller #1. This was fixed by passing the `index` argument in the activity launch intent.
Fixes texture corruption due to incorrect synchronization: the barrier would not enforce waiting until the texture was entirely rendered, causing an incomplete texture to be downloaded, which led to rendering bugs on certain GPUs including ARM's Mali GPUs.
A bug caused an assertion when both `VK_EXT_custom_border_color` and `VK_EXT_vertex_attribute_divisor` were unsupported, due to mistakenly unlinking `PhysicalDeviceVertexAttributeDivisorFeaturesEXT` instead of `PhysicalDeviceCustomBorderColorFeaturesEXT` when `VK_EXT_custom_border_color` isn't supported, which could lead to unlinking the same structure twice and triggering the assertion.
Implements inline constant buffer updates that are written to the CPU copy of the buffer rather than generating an actual inline buffer write. This works for TIC/TSC index updates but won't work when the buffer is expected to actually be updated inline with regard to sequencing, rather than just as a buffer upload prior to rendering.
GPU-side constant buffer updates will be implemented later, with optimizations for updating an entire range by handling GPFIFO `Inc`/`NonInc` directly and submitting it as a host inline buffer update.