This is done entirely through interpreter fallbacks. It would
probably be possible to implement this using host exception
handlers instead, but I think it would be a lot of complexity
for a rarely used feature, so let's not do it for now.
For performance reasons, there are two settings for this feature:
one which enables just what True Crime: New York City needs, and
one which enables it all. The latter makes almost all float
instructions fall back to the interpreter.
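As a rough sketch of what the two settings amount to (hypothetical
names, not the actual config keys):

    // Hypothetical sketch of the levels described above; the real setting
    // name and storage in Dolphin's config system may differ.
    enum class FloatFallbackLevel
    {
      Disabled,       // default: no extra interpreter fallbacks
      TrueCrimeOnly,  // enables just what True Crime: New York City needs
      Full,           // almost all float instructions use the interpreter
    };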
Now that we have enum helpers for inserting values into packets and have
migrated all other enumerations over, there's no need to keep this alias
around any longer.
SPDX standardizes how source code conveys its copyright and licensing
information. See https://spdx.github.io/spdx-spec/1-rationale/ . SPDX
tags are adopted in many large projects, including things like the Linux
kernel.
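For illustration, an SPDX tag is a single machine-readable comment at
the top of each source file. Since Dolphin is licensed under
GPL-2.0-or-later, a tagged C++ file would begin like this:

    // SPDX-License-Identifier: GPL-2.0-or-later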
When I added the software FMA path in 2c38d64 and made us use
it when determinism is enabled, I was assuming that either the
performance impact of software FMA wouldn't be too large or CPUs
that were too old to have FMA instructions were too slow to run
Dolphin well anyway. This was wrong. To give an example, the
netplay performance went from 60 FPS to 30 FPS in one case.
This change makes netplay clients negotiate whether FMA should
be used. If every client is either on AArch64 or on an x64 CPU
that supports FMA, FMA is enabled; otherwise it is disabled.
In other words, we sacrifice accuracy if needed to avoid massive
slowdown, but not otherwise. When not using netplay, whether to
enable FMA is simply based on whether the host CPU supports it.
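A minimal sketch of the negotiation rule, assuming hypothetical types
(the real logic lives in the netplay code):

    #include <vector>

    // Hypothetical per-client capability info gathered during netplay setup.
    struct ClientInfo
    {
      bool is_aarch64;   // AArch64 always has hardware FMA
      bool x64_has_fma;  // probed on x64 hosts, e.g. via CPUID
    };

    // FMA is enabled only if every client can use hardware FMA.
    bool ShouldEnableFMA(const std::vector<ClientInfo>& clients)
    {
      for (const ClientInfo& client : clients)
      {
        if (!client.is_aarch64 && !client.x64_has_fma)
          return false;
      }
      return true;
    }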
The only remaining case where the software FMA path gets used
under normal circumstances is when an input recording is created
on a CPU with FMA support and then played back on a CPU without.
This is not an especially common scenario (though it can happen),
and TASers are generally less picky about performance and more
picky about accuracy than other users anyway.
With this change, FMA desyncs are avoided between AArch64 and
modern x64 CPUs (unlike before 2c38d64), but we do get FMA
desyncs between AArch64 and old x64 CPUs (like before 2c38d64).
This desync can be avoided by adding a non-FMA path to JitArm64 as
an option, which I will save for another pull request so that
we can get the performance regression fixed as quickly as possible.
https://bugs.dolphin-emu.org/issues/12542
Specifically, 'Scooby-Doo! Mystery Mayhem', 'Scooby-Doo! Unmasked', 'Ed, Edd n Eddy: The Mis-Edventures', and the Wii version of 'Happy Feet'.
The JIT cache causes problems with emulated icache invalidation in these games, resulting in areas failing to load.
With the SI poll line count fixed, pretty much all games are polling
twice per frame anyways, making this option superfluous. Since it's a
bit of a gross hack and makes DTMs incompatible with console, let's
just bin it.
This new setting is like Override Language on NTSC Games, except
instead of only applying to the GameCube language setting,
it also applies to the Wii language setting.
Fixes https://bugs.dolphin-emu.org/issues/11299
The logic didn't account for the case where a player leaves, so the
host would be left in a dangling state where the UI is disabled but the
game won't start, requiring a full restart of Dolphin to fix.
This is an extension of host input authority that allows switching the
host (who has zero latency) on the fly, at the further expense of
everyone else's latency. This is useful for turn-based games where the
latency of players not on their turn doesn't matter.
To become the so-called golfer, the player simply presses a hotkey.
When the host is the golfer, latency is identical to normal host input
authority.
Small addition of NetPlay code in Core.cpp was needed to set the
extensions at the right time, as init would override them otherwise.
This solution is more elegant than modifying the user's INI files on
game start.
This sends arbitrary packets in chunks to be reassembled at the other
end, allowing large data transfers to be speed-limited and interleaved
with other packets being sent. It also enables tracking the progress of
large data transfers.
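A minimal sketch of the sending side, with hypothetical names (the real
implementation operates on netplay packets and also carries sequence and
progress information):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    using Chunk = std::vector<unsigned char>;

    // Split a large payload into fixed-size chunks. Each chunk is sent as
    // its own packet, so the sender can rate-limit and interleave them with
    // other traffic, and the receiver can report progress before reassembly.
    std::vector<Chunk> SplitIntoChunks(const Chunk& payload, std::size_t chunk_size)
    {
      std::vector<Chunk> chunks;
      for (std::size_t offset = 0; offset < payload.size(); offset += chunk_size)
      {
        const std::size_t length = std::min(chunk_size, payload.size() - offset);
        chunks.emplace_back(payload.begin() + offset,
                            payload.begin() + offset + length);
      }
      return chunks;
    }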
PadMapping's usage was inconsistent, confusing, and buggy, so I opted
to just remove it entirely. It has been replaced with PadIndex for the
appropriate instances (mainly networking), and inappropriate usages
(where it was really just a player ID) have been replaced with the
PlayerId type. The definition of "no mapping" has been changed from -1
to 0 to match the definition of "no player", as -1 (255 unsigned) is
actually a valid player ID.
The bugs never manifested because they only occur with a full lobby of
255 players, at which point the last player's ID collides with the "no
mapping" definition and some undefined behavior occurs. Nevertheless, I
thought it best to fix it anyways as the usage of PadMapping was
confusing.
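To illustrate the collision in simplified form:

    // Player IDs are 8-bit, so the old "no mapping" sentinel of -1 wraps
    // around to 255, which is itself a valid player ID in a full lobby.
    using PlayerId = unsigned char;

    constexpr PlayerId OLD_NO_MAPPING = static_cast<PlayerId>(-1);  // == 255
    constexpr PlayerId NEW_NO_MAPPING = 0;  // matches "no player"
    static_assert(OLD_NO_MAPPING == 255, "sentinel collides with player 255");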
Adds a tickbox to the server's window to synchronize codes. Codes
are temporarily sent to each client and are used for the duration of the
session.
Saves the "sync codes" tickbox as per PR Netplay: Properly save hosting
settings #7483
Currently, each player buffers their own inputs and sends them to the
host. The host then relays those inputs to everyone else. Every player
waits on inputs from all players to be buffered before continuing. What
this means is all clients run in lockstep, and the total latency of
inputs cannot be lower than the sum of the 2 highest client ping times
in the game (in 3+ player sessions with people across the world, the
latency can be very high).
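As a made-up example of that lower bound:

    // If the two highest client pings are 150 ms and 200 ms, lockstep input
    // latency cannot drop below their sum, even before any extra buffering.
    constexpr int kMinInputLatencyMs = 150 + 200;  // 350 ms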
Host input authority mode changes it so players no longer buffer their
own inputs, and only send them to the host. The host stores only the
most recent input received from a player. The host then sends inputs
for all pads at the SI poll interval, similar to the existing code. If
a player sends inputs too slowly, their last received input is simply
sent again. If they send too quickly, inputs are dropped. This means
that the host has full control over what inputs are actually read by
the game, hence the name of the mode. Also, because the rate at which
inputs are received by SI is decoupled from the rate at which players
are sending inputs, clients are no longer dependent on each other. They
only care what the host is doing. This means that they can set their
buffer individually based on their latency to the host, rather than the
highest latency between any 2 players, allowing someone with lower ping
to the host to have less latency than someone else.
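A sketch of the host-side behavior described above, with hypothetical
names:

    #include <array>

    struct PadState { /* buttons, sticks, ... */ };

    // Assumed helper; the network send itself is elided in this sketch.
    void SendPadStateToClients(int pad, const PadState& state) {}

    // The host keeps only the newest input received per pad.
    static std::array<PadState, 4> s_latest_input{};

    // A faster-than-poll-rate sender simply overwrites (drops) older inputs.
    void OnInputReceived(int pad, const PadState& input)
    {
      s_latest_input[pad] = input;
    }

    // At every SI poll, the host sends whatever it has; a slower-than-poll-rate
    // sender's last input is repeated.
    void OnSIPoll()
    {
      for (int pad = 0; pad < 4; ++pad)
        SendPadStateToClients(pad, s_latest_input[pad]);
    }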
There is a catch to this: as a necessity of how the host's input sending
works, the host has 0 latency. There isn't a good way to fix this, as
input delay is now solely dependent on the real latency to the host's
server. Having differing latency between players would be considered
unfair for competitive play, but for casual play we don't really care.
For this reason though, combined with the potential for a few inputs to
be dropped on a bad connection, the old mode will remain and this new
mode is entirely optional.