Force Textures to be Optimal on Host GPU

We don't respect the host subresource layout when synchronizing linear textures from the guest to the host while they are mapped directly into memory, which leads to texture corruption. The real fix would involve respecting the host subresource layout, but that has been deferred for later so that the real-world performance advantages/disadvantages associated with this change can be observed more carefully to determine whether it's worth it.
PixelyIon 2022-01-12 01:58:46 +05:30
parent ab4962c4e4
commit 9f5c3d8ecd
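
For context, the deferred "real fix" would mean querying the layout the host driver actually gave the linear image and copying with the reported rowPitch, rather than assuming the host rows are packed the same way as the guest's. Below is a minimal sketch of that idea using Vulkan-Hpp; the function name and parameters (CopyLinearGuestToHost, guestData, bytesPerRow, rowCount) are made up for illustration and are not part of skyline's Texture class:

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vulkan/vulkan.hpp>

// Hypothetical sketch: copy a guest linear texture into a mapped host
// vk::ImageTiling::eLinear image while honouring the host subresource
// layout (offset + rowPitch) instead of assuming tightly packed rows.
void CopyLinearGuestToHost(vk::Device device, vk::Image hostImage,
                           std::uint8_t *hostMapping, // Mapped host image memory
                           const std::uint8_t *guestData,
                           std::size_t bytesPerRow, std::size_t rowCount) {
    // Ask the driver where the rows of the linear image actually live
    vk::SubresourceLayout layout{device.getImageSubresourceLayout(
        hostImage, vk::ImageSubresource{vk::ImageAspectFlagBits::eColor, 0, 0})};

    if (layout.rowPitch == bytesPerRow) {
        // Host rows are tightly packed, a single copy suffices
        std::memcpy(hostMapping + layout.offset, guestData, bytesPerRow * rowCount);
    } else {
        // Host rows are padded, copy row by row respecting rowPitch
        for (std::size_t row{}; row < rowCount; row++)
            std::memcpy(hostMapping + layout.offset + row * layout.rowPitch,
                        guestData + row * bytesPerRow, bytesPerRow);
    }
}

The commit instead forces vk::ImageTiling::eOptimal, so the direct-mapped linear synchronization path, which doesn't honour rowPitch, is never taken.
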


@@ -254,7 +254,7 @@ namespace skyline::gpu {
 dimensions(guest->dimensions),
 format(guest->format),
 layout(vk::ImageLayout::eUndefined),
-tiling((guest->tileConfig.mode == texture::TileMode::Block) ? vk::ImageTiling::eOptimal : vk::ImageTiling::eLinear),
+tiling(vk::ImageTiling::eOptimal), // Force Optimal due to not adhering to host subresource layout during Linear synchronization
 mipLevels(1),
 layerCount(guest->layerCount),
 sampleCount(vk::SampleCountFlagBits::e1) {
@@ -287,7 +287,7 @@ namespace skyline::gpu {
 dimensions(dimensions),
 format(format),
 layout(initialLayout == vk::ImageLayout::ePreinitialized ? vk::ImageLayout::ePreinitialized : vk::ImageLayout::eUndefined),
-tiling(tiling),
+tiling(vk::ImageTiling::eOptimal), // Same as above
 mipLevels(mipLevels),
 layerCount(layerCount),
 sampleCount(sampleCount) {