cemu_graphic_packs/Mods/BreathOfTheWild_HUDRemover/b88c6020a8b17332_0000000000000000_vs.txt

#version 420
#extension GL_ARB_texture_gather : enable
#ifdef VULKAN
#define ATTR_LAYOUT(__vkSet, __location) layout(set = __vkSet, location = __location)
#define UNIFORM_BUFFER_LAYOUT(__glLocation, __vkSet, __vkLocation) layout(set = __vkSet, binding = __vkLocation, std140)
#define TEXTURE_LAYOUT(__glLocation, __vkSet, __vkLocation) layout(set = __vkSet, binding = __vkLocation)
#define SET_POSITION(_v) gl_Position = _v; gl_Position.z = (gl_Position.z + gl_Position.w) / 2.0
#define GET_FRAGCOORD() vec4(gl_FragCoord.xy*uf_fragCoordScale.xy,gl_FragCoord.zw)
#define gl_VertexID gl_VertexIndex
#define gl_InstanceID gl_InstanceIndex
#else
#define ATTR_LAYOUT(__vkSet, __location) layout(location = __location)
#define UNIFORM_BUFFER_LAYOUT(__glLocation, __vkSet, __vkLocation) layout(binding = __glLocation, std140)
#define TEXTURE_LAYOUT(__glLocation, __vkSet, __vkLocation) layout(binding = __glLocation)
#define SET_POSITION(_v) gl_Position = _v
#define GET_FRAGCOORD() vec4(gl_FragCoord.xy*uf_fragCoordScale,gl_FragCoord.zw)
#endif
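// Note: these macros abstract the differences between Cemu's two backends.
// Vulkan uses descriptor sets and a [0,1] clip-space depth range (hence the
// (z + w) / 2 remap in SET_POSITION, and gl_VertexID/gl_InstanceID mapping to
// gl_VertexIndex/gl_InstanceIndex), while the OpenGL path keeps plain bindings
// and the default [-1,1] depth range.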
// This shader was auto-converted from OpenGL to be compatible with Cemu's Vulkan backend.
// shader b88c6020a8b17332
// PRO+ hud v2
#ifdef VULKAN
layout(set = 0, binding = 0) uniform ufBlock
{
uniform ivec4 uf_remappedVS[17];
// uniform vec2 uf_windowSpaceToClipSpaceTransform; // Cemu optimized this uf_variable away in Cemu 1.15.7
};
#else
uniform ivec4 uf_remappedVS[17];
// uniform vec2 uf_windowSpaceToClipSpaceTransform; // Cemu optimized this uf_variable away in Cemu 1.15.7
#endif
// uf_windowSpaceToClipSpaceTransform was moved to the ufBlock
ATTR_LAYOUT(0, 0) in uvec4 attrDataSem0;
out gl_PerVertex
{
vec4 gl_Position;
float gl_PointSize;
};
layout(location = 0) out vec4 passParameterSem0;
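// Helper emitted by Cemu's shader decompiler: saturates a float stored as raw
// integer bits to [0,1], with the 0x7FFFFFFF and 0xFFFFFFFF bit patterns
// special-cased to 1.0 and 0.0 respectively.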
int clampFI32(int v)
{
if( v == 0x7FFFFFFF )
return floatBitsToInt(1.0);
else if( v == 0xFFFFFFFF )
return floatBitsToInt(0.0);
return floatBitsToInt(clamp(intBitsToFloat(v), 0.0, 1.0));
}
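// Non-IEEE multiply: returns exactly 0.0 whenever either operand is 0.0,
// mirroring the Wii U GPU's non-IEEE multiply behaviour (0 * Inf/NaN is 0).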
float mul_nonIEEE(float a, float b){ if( a == 0.0 || b == 0.0 ) return 0.0; return a*b; }
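// Pack-specific helper: this mod treats uf_remappedVS[0].xy as the HUD
// element's width/height and matches it against a target size with a small
// tolerance so float rounding does not break the comparison.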
bool isCurrentSizeEqualTo(vec2 param) {
float result = distance(param, intBitsToFloat(uf_remappedVS[0]).xy);
return (result <= 0.001);
}
void main()
{
ivec4 R0i = ivec4(0);
ivec4 R1i = ivec4(0);
ivec4 R2i = ivec4(0);
ivec4 R122i = ivec4(0);
ivec4 R123i = ivec4(0);
ivec4 R124i = ivec4(0);
ivec4 R125i = ivec4(0);
ivec4 R126i = ivec4(0);
ivec4 R127i = ivec4(0);
uvec4 attrDecoder;
int backupReg0i, backupReg1i, backupReg2i, backupReg3i, backupReg4i;
ivec4 PV0i = ivec4(0), PV1i = ivec4(0);
int PS0i = 0, PS1i = 0;
ivec4 tempi = ivec4(0);
float tempResultf;
int tempResulti;
ivec4 ARi = ivec4(0);
bool predResult = true;
bool activeMaskStack[4];
bool activeMaskStackC[5];
activeMaskStack[0] = false;
activeMaskStack[1] = false;
activeMaskStack[2] = false;
activeMaskStackC[0] = false;
activeMaskStackC[1] = false;
activeMaskStackC[2] = false;
activeMaskStackC[3] = false;
activeMaskStack[0] = true;
activeMaskStackC[0] = true;
activeMaskStackC[1] = true;
vec3 cubeMapSTM;
int cubeMapFaceId;
R0i = ivec4(gl_VertexID, 0, 0, gl_InstanceID);
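// Byte-swap the raw big-endian (Wii U) vertex attribute words into host order
// before their bits are reinterpreted as floats further down.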
attrDecoder.xy = attrDataSem0.xy;
attrDecoder.xy = (attrDecoder.xy>>24)|((attrDecoder.xy>>8)&0xFF00)|((attrDecoder.xy<<8)&0xFF0000)|((attrDecoder.xy<<24));
attrDecoder.z = 0;
attrDecoder.w = 0;
R1i = ivec4(int(attrDecoder.x), int(attrDecoder.y), floatBitsToInt(0.0), floatBitsToInt(1.0));
if( activeMaskStackC[1] == true ) {
activeMaskStack[1] = activeMaskStack[0];
activeMaskStackC[2] = activeMaskStackC[1];
// 0
PV0i.x = floatBitsToInt(mul_nonIEEE(intBitsToFloat(R1i.x), intBitsToFloat(uf_remappedVS[0].x)));
R0i.yzw = ivec3(floatBitsToInt(-(intBitsToFloat(R1i.y))),0,0x3f800000);
PV0i.y = R0i.y;
R127i.w = floatBitsToInt(1.0);
PS0i = R127i.w;
// 1
R0i.x = floatBitsToInt(intBitsToFloat(PV0i.x) + intBitsToFloat(uf_remappedVS[0].z));
PV1i.z = floatBitsToInt(mul_nonIEEE(intBitsToFloat(PV0i.y), intBitsToFloat(uf_remappedVS[0].y)));
// 2
R0i.y = floatBitsToInt(intBitsToFloat(PV1i.z) + intBitsToFloat(uf_remappedVS[0].w));
PV0i.y = R0i.y;
R1i.w = uf_remappedVS[1].x & 0x40000000;
// 3
backupReg0i = R0i.z;
backupReg1i = R0i.w;
R127i.x = floatBitsToInt(dot(vec4(intBitsToFloat(R0i.x),intBitsToFloat(PV0i.y),intBitsToFloat(backupReg0i),intBitsToFloat(backupReg1i)),vec4(intBitsToFloat(uf_remappedVS[2].x),intBitsToFloat(uf_remappedVS[2].y),intBitsToFloat(uf_remappedVS[2].z),intBitsToFloat(uf_remappedVS[2].w))));
PV1i.x = R127i.x;
PV1i.y = R127i.x;
PV1i.z = R127i.x;
PV1i.w = R127i.x;
// 4
backupReg0i = R0i.x;
backupReg1i = R0i.z;
backupReg2i = R0i.w;
tempi.x = floatBitsToInt(dot(vec4(intBitsToFloat(backupReg0i),intBitsToFloat(R0i.y),intBitsToFloat(backupReg1i),intBitsToFloat(backupReg2i)),vec4(intBitsToFloat(uf_remappedVS[3].x),intBitsToFloat(uf_remappedVS[3].y),intBitsToFloat(uf_remappedVS[3].z),intBitsToFloat(uf_remappedVS[3].w))));
PV0i.x = tempi.x;
PV0i.y = tempi.x;
PV0i.z = tempi.x;
PV0i.w = tempi.x;
R127i.y = tempi.x;
// 5
backupReg0i = R0i.x;
backupReg1i = R0i.y;
backupReg2i = R0i.w;
tempi.x = floatBitsToInt(dot(vec4(intBitsToFloat(backupReg0i),intBitsToFloat(backupReg1i),intBitsToFloat(R0i.z),intBitsToFloat(backupReg2i)),vec4(intBitsToFloat(uf_remappedVS[4].x),intBitsToFloat(uf_remappedVS[4].y),intBitsToFloat(uf_remappedVS[4].z),intBitsToFloat(uf_remappedVS[4].w))));
PV1i.x = tempi.x;
PV1i.y = tempi.x;
PV1i.z = tempi.x;
PV1i.w = tempi.x;
R127i.z = tempi.x;
// 6
R2i.x = floatBitsToInt(dot(vec4(intBitsToFloat(R127i.x),intBitsToFloat(R127i.y),intBitsToFloat(PV1i.x),intBitsToFloat(R127i.w)),vec4(intBitsToFloat(uf_remappedVS[5].x),intBitsToFloat(uf_remappedVS[5].y),intBitsToFloat(uf_remappedVS[5].z),intBitsToFloat(uf_remappedVS[5].w))));
PV0i.x = R2i.x;
PV0i.y = R2i.x;
PV0i.z = R2i.x;
PV0i.w = R2i.x;
// 7
tempi.x = floatBitsToInt(dot(vec4(intBitsToFloat(R127i.x),intBitsToFloat(R127i.y),intBitsToFloat(R127i.z),intBitsToFloat(R127i.w)),vec4(intBitsToFloat(uf_remappedVS[6].x),intBitsToFloat(uf_remappedVS[6].y),intBitsToFloat(uf_remappedVS[6].z),intBitsToFloat(uf_remappedVS[6].w))));
PV1i.x = tempi.x;
PV1i.y = tempi.x;
PV1i.z = tempi.x;
PV1i.w = tempi.x;
R2i.y = tempi.x;
// 8
tempi.x = floatBitsToInt(dot(vec4(intBitsToFloat(R127i.x),intBitsToFloat(R127i.y),intBitsToFloat(R127i.z),intBitsToFloat(R127i.w)),vec4(intBitsToFloat(uf_remappedVS[7].x),intBitsToFloat(uf_remappedVS[7].y),intBitsToFloat(uf_remappedVS[7].z),intBitsToFloat(uf_remappedVS[7].w))));
PV0i.x = tempi.x;
PV0i.y = tempi.x;
PV0i.z = tempi.x;
PV0i.w = tempi.x;
R2i.z = tempi.x;
// 9
tempi.x = floatBitsToInt(dot(vec4(intBitsToFloat(R127i.x),intBitsToFloat(R127i.y),intBitsToFloat(R127i.z),intBitsToFloat(R127i.w)),vec4(intBitsToFloat(uf_remappedVS[8].x),intBitsToFloat(uf_remappedVS[8].y),intBitsToFloat(uf_remappedVS[8].z),intBitsToFloat(uf_remappedVS[8].w))));
PV1i.x = tempi.x;
PV1i.y = tempi.x;
PV1i.z = tempi.x;
PV1i.w = tempi.x;
R2i.w = tempi.x;
// 10
predResult = (0 != R1i.w);
activeMaskStack[1] = predResult;
activeMaskStackC[2] = predResult == true && activeMaskStackC[1] == true;
}
else {
activeMaskStack[1] = false;
activeMaskStackC[2] = false;
}
if( activeMaskStackC[2] == true ) {
activeMaskStack[2] = activeMaskStack[1];
activeMaskStackC[3] = activeMaskStackC[2];
// 0
R1i.z = uf_remappedVS[1].x & 0x00000002;
// 1
predResult = (0 != R1i.z);
activeMaskStack[2] = predResult;
activeMaskStackC[3] = predResult == true && activeMaskStackC[2] == true;
}
else {
activeMaskStack[2] = false;
activeMaskStackC[3] = false;
}
if( activeMaskStackC[3] == true ) {
// 0
backupReg0i = R1i.y;
R1i.yzw = ivec3(backupReg0i,0,0x3f800000);
}
activeMaskStack[2] = activeMaskStack[2] == false;
activeMaskStackC[3] = activeMaskStack[2] == true && activeMaskStackC[2] == true;
if( activeMaskStackC[3] == true ) {
// 0
PS0i = int(intBitsToFloat(R1i.y));
// 1
PV1i.x = PS0i << int(1);
PS1i = int(intBitsToFloat(R1i.x));
// 2
R126i.z = PV1i.x + PS1i;
PV0i.z = R126i.z;
// 3
R127i.x = (PV0i.z == 0x00000002)?int(0xFFFFFFFF):int(0x0);
PV1i.x = R127i.x;
R127i.y = (PV0i.z == int(1))?int(0xFFFFFFFF):int(0x0);
// 4
R127i.z = ((PV1i.x == 0)?(uf_remappedVS[9].y):(uf_remappedVS[10].y));
R127i.w = ((PV1i.x == 0)?(uf_remappedVS[9].x):(uf_remappedVS[10].x));
// 5
R123i.x = ((R127i.x == 0)?(uf_remappedVS[9].w):(uf_remappedVS[10].w));
PV1i.x = R123i.x;
R123i.y = ((R127i.x == 0)?(uf_remappedVS[9].z):(uf_remappedVS[10].z));
PV1i.y = R123i.y;
// 6
R123i.x = ((R127i.y == 0)?(PV1i.x):(uf_remappedVS[11].w));
PV0i.x = R123i.x;
R123i.y = ((R127i.y == 0)?(PV1i.y):(uf_remappedVS[11].z));
PV0i.y = R123i.y;
R123i.z = ((R127i.y == 0)?(R127i.z):(uf_remappedVS[11].y));
PV0i.z = R123i.z;
R123i.w = ((R127i.y == 0)?(R127i.w):(uf_remappedVS[11].x));
PV0i.w = R123i.w;
// 7
R1i.x = ((R126i.z == 0)?(uf_remappedVS[12].x):(PV0i.w));
R1i.y = ((R126i.z == 0)?(uf_remappedVS[12].y):(PV0i.z));
R1i.z = ((R126i.z == 0)?(uf_remappedVS[12].z):(PV0i.y));
R1i.w = ((R126i.z == 0)?(uf_remappedVS[12].w):(PV0i.x));
}
activeMaskStackC[2] = activeMaskStack[1] == true && activeMaskStackC[1] == true;
activeMaskStack[1] = activeMaskStack[1] == false;
activeMaskStackC[2] = activeMaskStack[1] == true && activeMaskStackC[1] == true;
if( activeMaskStackC[2] == true ) {
activeMaskStack[2] = activeMaskStack[1];
activeMaskStackC[3] = activeMaskStackC[2];
// 0
R1i.w = uf_remappedVS[1].x & 0x00000002;
// 1
predResult = (0 != R1i.w);
activeMaskStack[2] = predResult;
activeMaskStackC[3] = predResult == true && activeMaskStackC[2] == true;
}
else {
activeMaskStack[2] = false;
activeMaskStackC[3] = false;
}
if( activeMaskStackC[3] == true ) {
// 0
backupReg0i = R1i.y;
R1i.yzw = ivec3(backupReg0i,0,0x3f800000);
}
activeMaskStack[2] = activeMaskStack[2] == false;
activeMaskStackC[3] = activeMaskStack[2] == true && activeMaskStackC[2] == true;
if( activeMaskStackC[3] == true ) {
// 0
PS0i = int(intBitsToFloat(R1i.y));
// 1
PV1i.x = PS0i << int(1);
PS1i = int(intBitsToFloat(R1i.x));
// 2
R126i.z = PV1i.x + PS1i;
PV0i.z = R126i.z;
// 3
R127i.x = (PV0i.z == 0x00000002)?int(0xFFFFFFFF):int(0x0);
PV1i.x = R127i.x;
R127i.y = (PV0i.z == int(1))?int(0xFFFFFFFF):int(0x0);
// 4
R127i.z = ((PV1i.x == 0)?(uf_remappedVS[9].y):(uf_remappedVS[10].y));
R127i.w = ((PV1i.x == 0)?(uf_remappedVS[9].x):(uf_remappedVS[10].x));
// 5
R123i.x = ((R127i.x == 0)?(uf_remappedVS[9].w):(uf_remappedVS[10].w));
PV1i.x = R123i.x;
R123i.y = ((R127i.x == 0)?(uf_remappedVS[9].z):(uf_remappedVS[10].z));
PV1i.y = R123i.y;
// 6
R123i.x = ((R127i.y == 0)?(PV1i.x):(uf_remappedVS[11].w));
PV0i.x = R123i.x;
R123i.y = ((R127i.y == 0)?(PV1i.y):(uf_remappedVS[11].z));
PV0i.y = R123i.y;
R123i.z = ((R127i.y == 0)?(R127i.z):(uf_remappedVS[11].y));
PV0i.z = R123i.z;
R123i.w = ((R127i.y == 0)?(R127i.w):(uf_remappedVS[11].x));
PV0i.w = R123i.w;
// 7
R1i.x = ((R126i.z == 0)?(uf_remappedVS[12].x):(PV0i.w));
R1i.y = ((R126i.z == 0)?(uf_remappedVS[12].y):(PV0i.z));
R1i.z = ((R126i.z == 0)?(uf_remappedVS[12].z):(PV0i.y));
R1i.w = ((R126i.z == 0)?(uf_remappedVS[12].w):(PV0i.x));
}
activeMaskStackC[2] = activeMaskStack[1] == true && activeMaskStackC[1] == true;
activeMaskStackC[1] = activeMaskStack[0] == true && activeMaskStackC[0] == true;
if( activeMaskStackC[1] == true ) {
activeMaskStack[1] = activeMaskStack[0];
activeMaskStackC[2] = activeMaskStackC[1];
// 0
predResult = (0 != uf_remappedVS[13].x);
activeMaskStack[1] = predResult;
activeMaskStackC[2] = predResult == true && activeMaskStackC[1] == true;
}
else {
activeMaskStack[1] = false;
activeMaskStackC[2] = false;
}
if( activeMaskStackC[2] == true ) {
// 0
backupReg0i = R0i.y;
backupReg1i = R0i.z;
backupReg2i = R0i.w;
R1i.x = floatBitsToInt(dot(vec4(intBitsToFloat(R0i.x),intBitsToFloat(backupReg0i),intBitsToFloat(backupReg1i),intBitsToFloat(backupReg2i)),vec4(intBitsToFloat(uf_remappedVS[12].x),intBitsToFloat(uf_remappedVS[12].y),intBitsToFloat(uf_remappedVS[12].z),intBitsToFloat(uf_remappedVS[12].w))));
PV0i.x = R1i.x;
PV0i.y = R1i.x;
PV0i.z = R1i.x;
PV0i.w = R1i.x;
// 1
backupReg0i = R0i.x;
backupReg1i = R0i.z;
backupReg2i = R0i.w;
tempi.x = floatBitsToInt(dot(vec4(intBitsToFloat(backupReg0i),intBitsToFloat(R0i.y),intBitsToFloat(backupReg1i),intBitsToFloat(backupReg2i)),vec4(intBitsToFloat(uf_remappedVS[11].x),intBitsToFloat(uf_remappedVS[11].y),intBitsToFloat(uf_remappedVS[11].z),intBitsToFloat(uf_remappedVS[11].w))));
PV1i.x = tempi.x;
PV1i.y = tempi.x;
PV1i.z = tempi.x;
PV1i.w = tempi.x;
R1i.y = tempi.x;
// 2
backupReg0i = R0i.x;
backupReg1i = R0i.y;
backupReg2i = R0i.w;
tempi.x = floatBitsToInt(dot(vec4(intBitsToFloat(backupReg0i),intBitsToFloat(backupReg1i),intBitsToFloat(R0i.z),intBitsToFloat(backupReg2i)),vec4(intBitsToFloat(uf_remappedVS[10].x),intBitsToFloat(uf_remappedVS[10].y),intBitsToFloat(uf_remappedVS[10].z),intBitsToFloat(uf_remappedVS[10].w))));
PV0i.x = tempi.x;
PV0i.y = tempi.x;
PV0i.z = tempi.x;
PV0i.w = tempi.x;
R1i.z = tempi.x;
// 3
backupReg0i = R0i.x;
backupReg1i = R0i.y;
backupReg2i = R0i.z;
tempi.x = floatBitsToInt(dot(vec4(intBitsToFloat(backupReg0i),intBitsToFloat(backupReg1i),intBitsToFloat(backupReg2i),intBitsToFloat(R0i.w)),vec4(intBitsToFloat(uf_remappedVS[9].x),intBitsToFloat(uf_remappedVS[9].y),intBitsToFloat(uf_remappedVS[9].z),intBitsToFloat(uf_remappedVS[9].w))));
PV1i.x = tempi.x;
PV1i.y = tempi.x;
PV1i.z = tempi.x;
PV1i.w = tempi.x;
R1i.w = tempi.x;
}
activeMaskStack[1] = activeMaskStack[1] == false;
activeMaskStackC[2] = activeMaskStack[1] == true && activeMaskStackC[1] == true;
if( activeMaskStackC[2] == true ) {
activeMaskStack[2] = activeMaskStack[1];
activeMaskStackC[3] = activeMaskStackC[2];
// 0
R0i.x = uf_remappedVS[1].x & int(1);
// 1
backupReg0i = R0i.x;
predResult = (0 != backupReg0i);
activeMaskStack[2] = predResult;
activeMaskStackC[3] = predResult == true && activeMaskStackC[2] == true;
}
else {
activeMaskStack[2] = false;
activeMaskStackC[3] = false;
}
if( activeMaskStackC[3] == true ) {
activeMaskStack[3] = activeMaskStack[2];
activeMaskStackC[4] = activeMaskStackC[3];
// 0
R0i.w = uf_remappedVS[1].x & 0x00000002;
// 1
predResult = (0 != R0i.w);
activeMaskStack[3] = predResult;
activeMaskStackC[4] = predResult == true && activeMaskStackC[3] == true;
}
else {
activeMaskStack[3] = false;
activeMaskStackC[4] = false;
}
if( activeMaskStackC[4] == true ) {
// 0
R127i.x = uf_remappedVS[1].x & 0x00010000;
R127i.y = uf_remappedVS[1].x & 0x00000010;
PV0i.y = R127i.y;
R125i.z = uf_remappedVS[1].x & 0x00000004;
R127i.w = uf_remappedVS[1].x & 0x00000020;
// 1
PV1i.x = floatBitsToInt(mul_nonIEEE(intBitsToFloat(uf_remappedVS[0].y), intBitsToFloat(uf_remappedVS[14].y)));
PV1i.y = floatBitsToInt(mul_nonIEEE(intBitsToFloat(uf_remappedVS[0].x), intBitsToFloat(uf_remappedVS[14].x)));
PV1i.z = floatBitsToInt(mul_nonIEEE(intBitsToFloat(uf_remappedVS[0].y), intBitsToFloat(uf_remappedVS[14].x)));
PV1i.w = floatBitsToInt(mul_nonIEEE(intBitsToFloat(uf_remappedVS[0].x), intBitsToFloat(uf_remappedVS[14].y)));
R127i.z = ((PV0i.y == 0)?(R1i.y):(R1i.y));
PS1i = R127i.z;
// 2
R123i.x = ((R127i.y == 0)?(R1i.w):(R1i.w));
PV0i.x = R123i.x;
R123i.y = ((R127i.y == 0)?(R1i.z):(R1i.z));
PV0i.y = R123i.y;
R126i.z = ((R127i.x == 0)?(PV1i.x):(PV1i.z));
PV0i.z = R126i.z;
R123i.w = ((R127i.x == 0)?(PV1i.y):(PV1i.w));
PV0i.w = R123i.w;
R124i.y = uf_remappedVS[1].x & 0x00000008;
PS0i = R124i.y;
// 3
R0i.x = floatBitsToInt(mul_nonIEEE(intBitsToFloat(R1i.x), intBitsToFloat(PV0i.w)));
PV1i.x = R0i.x;
R126i.y = floatBitsToInt(-(intBitsToFloat(PV0i.w)) + 1.0);
PV1i.z = floatBitsToInt(mul_nonIEEE(intBitsToFloat(R127i.z), intBitsToFloat(PV0i.z)));
R123i.w = ((R127i.w == 0)?(PV0i.y):(PV0i.y));
PV1i.w = R123i.w;
R122i.x = ((R127i.w == 0)?(PV0i.x):(PV0i.x));
PS1i = R122i.x;
// 4
backupReg0i = R127i.y;
R123i.x = ((R127i.w == 0)?(R127i.z):(PV1i.z));
PV0i.x = R123i.x;
R127i.y = floatBitsToInt(-(intBitsToFloat(R126i.z)) + 1.0);
R127i.z = ((R125i.z == 0)?(PV1i.w):(PV1i.w));
R123i.w = ((backupReg0i == 0)?(R1i.x):(PV1i.x));
PV0i.w = R123i.w;
R126i.z = ((R125i.z == 0)?(PS1i):(PS1i));
PS0i = R126i.z;
// 5
R127i.x = ((R125i.z == 0)?(PV0i.x):(PV0i.x));
PV1i.x = R127i.x;
R125i.y = ((R127i.w == 0)?(PV0i.w):(PV0i.w));
PV1i.y = R125i.y;
// 6
R126i.x = floatBitsToInt(intBitsToFloat(PV1i.x) + intBitsToFloat(R127i.y));
PV0i.z = floatBitsToInt(intBitsToFloat(PV1i.y) + intBitsToFloat(R126i.y));
// 7
R123i.y = ((R125i.z == 0)?(R125i.y):(PV0i.z));
PV1i.y = R123i.y;
// 8
R1i.x = ((R124i.y == 0)?(PV1i.y):(PV1i.y));
R1i.y = ((R124i.y == 0)?(R127i.x):(R126i.x));
R1i.z = ((R124i.y == 0)?(R127i.z):(R127i.z));
R1i.w = ((R124i.y == 0)?(R126i.z):(R126i.z));
PS0i = R1i.w;
}
activeMaskStackC[3] = activeMaskStack[2] == true && activeMaskStackC[2] == true;
if( activeMaskStackC[3] == true ) {
// 0
R0i.x = floatBitsToInt(-(intBitsToFloat(R1i.x)) + 1.0);
PV0i.x = R0i.x;
R126i.y = uf_remappedVS[1].x & 0x00010000;
R127i.z = uf_remappedVS[1].x & 0x00040000;
PV0i.w = uf_remappedVS[1].x & 0x00020000;
// 1
R123i.x = ((PV0i.w == 0)?(R1i.z):(R1i.z));
PV1i.x = R123i.x;
R127i.y = ((PV0i.w == 0)?(R1i.y):(R1i.y));
PV1i.y = R127i.y;
R123i.z = ((PV0i.w == 0)?(R1i.x):(PV0i.x));
PV1i.z = R123i.z;
R123i.w = ((PV0i.w == 0)?(R1i.w):(R1i.w));
PV1i.w = R123i.w;
// 2
R127i.x = ((R127i.z == 0)?(PV1i.z):(PV1i.z));
PV0i.y = floatBitsToInt(-(intBitsToFloat(PV1i.y)) + 1.0);
R126i.z = ((R127i.z == 0)?(PV1i.x):(PV1i.x));
R127i.w = ((R127i.z == 0)?(PV1i.w):(PV1i.w));
// 3
R123i.w = ((R127i.z == 0)?(R127i.y):(PV0i.y));
PV1i.w = R123i.w;
// 4
R1i.x = ((R126i.y == 0)?(R127i.x):(PV1i.w));
R1i.y = ((R126i.y == 0)?(PV1i.w):(R127i.x));
R1i.z = ((R126i.y == 0)?(R126i.z):(R126i.z));
R1i.w = ((R126i.y == 0)?(R127i.w):(R127i.w));
}
activeMaskStackC[1] = activeMaskStack[0] == true && activeMaskStackC[0] == true;
if( activeMaskStackC[1] == true ) {
// 0
tempi.x = floatBitsToInt(dot(vec4(intBitsToFloat(R1i.x),intBitsToFloat(R1i.y),intBitsToFloat(R1i.z),intBitsToFloat(R1i.w)),vec4(intBitsToFloat(uf_remappedVS[15].x),intBitsToFloat(uf_remappedVS[15].y),intBitsToFloat(uf_remappedVS[15].z),intBitsToFloat(uf_remappedVS[15].w))));
PV0i.x = tempi.x;
PV0i.y = tempi.x;
PV0i.z = tempi.x;
PV0i.w = tempi.x;
// 1
PV1i.x = PV0i.x;
R0i.w = PV0i.x;
PS1i = floatBitsToInt(mul_nonIEEE(intBitsToFloat(R1i.w), intBitsToFloat(uf_remappedVS[16].w)));
// 2
R0i.x = floatBitsToInt(dot(vec4(intBitsToFloat(R1i.x),intBitsToFloat(R1i.y),intBitsToFloat(R1i.z),intBitsToFloat(PS1i)),vec4(intBitsToFloat(uf_remappedVS[16].x),intBitsToFloat(uf_remappedVS[16].y),intBitsToFloat(uf_remappedVS[16].z),1.0)));
PV0i.x = R0i.x;
PV0i.y = R0i.x;
PV0i.z = R0i.x;
PV0i.w = R0i.x;
R0i.y = floatBitsToInt(-(intBitsToFloat(PV1i.x)) + 1.0);
PS0i = R0i.y;
}
// export
SET_POSITION(vec4(intBitsToFloat(R2i.x), intBitsToFloat(R2i.y), intBitsToFloat(R2i.z), intBitsToFloat(R2i.w)));
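// HUD removal: any element whose size matches one of the listed dimensions is
// pushed 9000 units off-screen on the y axis so it is clipped and never drawn.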
if (//isCurrentSizeEqualTo(vec2(36, 36)) || //master mode logo
isCurrentSizeEqualTo(vec2(80, 80)) ||
isCurrentSizeEqualTo(vec2(84, 84)) ||
isCurrentSizeEqualTo(vec2(90, 90)) //stamina red with weapon
) {
gl_Position.y -= 9000.0;
}
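// 1133903872 and 1134559232 are the integer bit patterns of 300.0f and 320.0f;
// the extra uf_remappedVS[3].w test narrows the 32x32 match so that the heart
// HUD is hidden while same-sized inventory icons stay visible.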
if (isCurrentSizeEqualTo(vec2(32, 32)) && // hearts but also inventory icons
(uf_remappedVS[3].w == 1133903872 || uf_remappedVS[3].w == 1134559232)) {
gl_Position.y -= 9000.0;
}
// export
passParameterSem0 = vec4(intBitsToFloat(R0i.x), intBitsToFloat(R0i.y), intBitsToFloat(R0i.x), intBitsToFloat(R0i.w));
}