SPDX standardizes how source code conveys its copyright and licensing
information. See https://spdx.github.io/spdx-spec/1-rationale/ . SPDX
tags have been adopted by many large projects, including the Linux
kernel.
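For reference, a tag is just a standardized comment near the top of
each source file, for example:

    // SPDX-License-Identifier: GPL-2.0-or-later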
When I added the software FMA path in 2c38d64 and made us use
it when determinism is enabled, I was assuming that either the
performance impact of software FMA wouldn't be too large or CPUs
that were too old to have FMA instructions were too slow to run
Dolphin well anyway. This was wrong. For example, netplay
performance dropped from 60 FPS to 30 FPS in one case.
This change makes netplay clients negotiate whether FMA should
be used. If every client is using either an x64 CPU that supports
FMA or an AArch64 CPU, then FMA is enabled; otherwise it is disabled.
In other words, we sacrifice accuracy if needed to avoid massive
slowdown, but not otherwise. When not using netplay, whether to
enable FMA is simply based on whether the host CPU supports it.
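As a rough sketch of the decision being negotiated (the struct and
function names here are illustrative, not the actual NetPlay code):

    #include <algorithm>
    #include <vector>

    // Illustrative sketch only. Each client reports its CPU capabilities;
    // FMA stays enabled only if nobody would be forced onto the slow
    // software FMA fallback.
    struct ClientCpuInfo
    {
      bool is_arm64 = false;         // AArch64 always has FMA
      bool is_x64_with_fma = false;  // x64 with the FMA extension
    };

    static bool ShouldUseFMA(const std::vector<ClientCpuInfo>& clients)
    {
      return std::all_of(clients.begin(), clients.end(), [](const ClientCpuInfo& c) {
        return c.is_arm64 || c.is_x64_with_fma;
      });
    }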
The only remaining case where the software FMA path gets used
under normal circumstances is when an input recording is created
on a CPU with FMA support and then played back on a CPU without.
This is not an especially common scenario (though it can happen),
and TASers are generally less picky about performance and more
picky about accuracy than other users anyway.
With this change, FMA desyncs are avoided between AArch64 and
modern x64 CPUs (unlike before 2c38d64), but we do get FMA
desyncs between AArch64 and old x64 CPUs (like before 2c38d64).
This desync can be avoided by adding a non-FMA path to JitArm64 as
an option, but I will leave that for another pull request so that
we can get the performance regression fixed as quickly as possible.
https://bugs.dolphin-emu.org/issues/12542
Instead of comparing the game ID, revision, disc number and name,
we can compare a hash of important parts of the disc, covering
all of the aforementioned data as well as additional data such as the
FST. The primary reason I'm making this change is to let us
catch more desyncs before they happen, but this should also fix
https://bugs.dolphin-emu.org/issues/12115. As a bonus, the UI can
now distinguish the case where a client doesn't have the game at
all from the case where a client has the wrong version of the game.
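Conceptually, the comparison becomes something like this (the struct
and its fields are illustrative, not the actual NetPlay types):

    #include <array>
    #include <cstdint>

    // Illustrative only: clients exchange a digest computed over the
    // relevant parts of the disc (game ID, revision, disc number, name,
    // FST, ...) and compare that instead of the individual fields.
    struct SyncIdentifier
    {
      std::array<std::uint8_t, 20> sync_hash{};  // e.g. an SHA-1 digest
    };

    inline bool IsSameGame(const SyncIdentifier& a, const SyncIdentifier& b)
    {
      return a.sync_hash == b.sync_hash;
    }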
The logic didn't account for the case where a player leaves, so the
host would be left in a dangling state where the UI is disabled but the
game won't start, requiring a full restart of Dolphin to fix.
This is an extension of host input authority that allows switching the
host (who has zero latency) on the fly, at the further expense of
everyone else's latency. This is useful for turn-based games where the
latency of players not on their turn doesn't matter.
To become the so-called golfer, the player simply presses a hotkey.
When the host is the golfer, latency is identical to normal host input
authority.
The implementation of peer initialization would hang if the initial
packet was never received. This fixes that issue by deferring the
initialization to the packet receive loop.
This sends arbitrary packets in chunks to be reassembled at the other
end, allowing large data transfers to be speed-limited and interleaved
with other packets being sent. It also enables tracking the progress
of a large transfer.
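The idea, roughly (the chunk size and the fields written into each
chunk are made up for illustration; the real message format differs):

    #include <SFML/Network/Packet.hpp>
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Illustrative sketch: split one large packet into fixed-size chunks,
    // tagging each with a transfer ID, offset and total size so the other
    // end can reassemble them and report progress.
    constexpr std::size_t CHUNK_SIZE = 16 * 1024;

    static std::vector<sf::Packet> SplitIntoChunks(const sf::Packet& payload,
                                                   sf::Uint32 transfer_id)
    {
      const auto* data = static_cast<const char*>(payload.getData());
      const std::size_t total = payload.getDataSize();

      std::vector<sf::Packet> chunks;
      for (std::size_t offset = 0; offset < total; offset += CHUNK_SIZE)
      {
        sf::Packet chunk;
        chunk << transfer_id << sf::Uint32(offset) << sf::Uint32(total);
        chunk.append(data + offset, std::min(CHUNK_SIZE, total - offset));
        chunks.push_back(chunk);
      }
      return chunks;
    }

Because each chunk is an ordinary packet, the sender can throttle them
and interleave them with the regular NetPlay traffic.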
Its usage was inconsistent, confusing, and buggy, so I opted to just
remove it entirely. It has been replaced with PadIndex for the
appropriate instances (mainly networking), and inappropriate usages
(where it was really just a player ID) have been replaced with the
PlayerId type. The definition of "no mapping" has been changed from -1
to 0 to match the definition of "no player", as -1 (255 unsigned) is
actually a valid player ID.
The bugs never manifested because they only occur with a full lobby of
255 players, at which point the last player's ID collides with the "no
mapping" definition and some undefined behavior occurs. Nevertheless, I
thought it best to fix this anyway, as the usage of PadMapping was
confusing.
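Simplified, the distinction is roughly this (the actual definitions
live in the NetPlay headers and may differ):

    #include <array>
    #include <cstdint>

    // Illustrative sketch of the two concepts PadMapping used to blur.
    using PlayerId = std::uint8_t;  // 0 means "no player"; 255 is a valid ID
    using PadIndex = std::int8_t;   // which pad a network message refers to

    // A pad-to-player mapping now stores the PlayerId controlling each pad,
    // with 0 meaning "no mapping". The old sentinel, -1, is 255 when stored
    // unsigned, which collides with a real player ID in a full lobby.
    using PadMappingArray = std::array<PlayerId, 4>;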
Adds a tickbox to the server's window to synchronize codes. Codes
are temporarily sent to each client and are used for the duration of the
session.
Saves the "sync codes" tickbox as per PR Netplay: Properly save hosting
settings #7483
Currently, each player buffers their own inputs and sends them to the
host. The host then relays those inputs to everyone else. Every player
waits on inputs from all players to be buffered before continuing. What
this means is all clients run in lockstep, and the total latency of
inputs cannot be lower than the sum of the 2 highest client ping times
in the game (in 3+ player sessions with people across the world, the
latency can be very high).
Host input authority mode changes it so players no longer buffer their
own inputs, and only send them to the host. The host stores only the
most recent input received from a player. The host then sends inputs
for all pads at the SI poll interval, similar to the existing code. If
a player sends inputs too slowly, their last received input is simply
sent again. If they send too quickly, inputs are dropped. This means
that the host has full control over what inputs are actually read by
the game, hence the name of the mode. Also, because the rate at which
inputs are received by SI is decoupled from the rate at which players
are sending inputs, clients are no longer dependent on each other. They
only care what the host is doing. This means that they can set their
buffer individually based on their latency to the host, rather than the
highest latency between any 2 players, allowing someone with lower ping
to the host to have less latency than someone else.
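A stripped-down sketch of the host-side behaviour described above
(types and names are illustrative, not the actual implementation):

    #include <cstdint>
    #include <map>

    // PadState stands in for the real GameCube pad status structure.
    struct PadState
    {
      std::uint16_t buttons = 0;
      std::uint8_t stick_x = 128;
      std::uint8_t stick_y = 128;
    };

    class HostInputAuthority
    {
    public:
      // Called whenever a pad packet arrives from a player. Overwrites the
      // previous state, so inputs that arrive too quickly are dropped.
      void OnPadReceived(int pad, const PadState& state) { m_latest[pad] = state; }

      // Called at the SI poll interval. If a player is sending too slowly,
      // their last received input is simply sent again.
      PadState GetPadForPoll(int pad) const
      {
        const auto it = m_latest.find(pad);
        return it != m_latest.end() ? it->second : PadState{};
      }

    private:
      std::map<int, PadState> m_latest;
    };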
There is a catch to this: as a necessity of how the host's input sending
works, the host has 0 latency. There isn't a good way to fix this, as
input delay is now solely dependent on the real latency to the host's
server. Having differing latency between players would be considered
unfair for competitive play, but for casual play we don't really care.
For this reason though, combined with the potential for a few inputs to
be dropped on a bad connection, the old mode will remain and this new
mode is entirely optional.
Behaviorally, this belongs within the netplay client. The server will
always transmit a known RTC value, so it doesn't even need a global for
this. Since the client receives the packet containing said RTC value, we can
store it as a member variable and provide an accessor for reading that
value.
This removes another global variable within the netplay code.
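In sketch form (member and accessor names are illustrative):

    #include <cstdint>

    class NetPlayClient
    {
    public:
      // Read wherever the emulated RTC needs it, instead of via a global.
      std::uint32_t GetInitialRTCValue() const { return m_initial_rtc; }

    private:
      // Filled in when the packet carrying the server's RTC value arrives.
      std::uint32_t m_initial_rtc = 0;
    };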
Most settings which affect determinism will now be synced on NetPlay.
Additionally, there's a strict sync mode which will sync various
enhancements to prevent desync in games that use EFB reads.
This also adds a check for all players having the IPL.bin file, and
doesn't load it for anyone if someone is missing it. This prevents
desyncs caused by mismatched system fonts.
Additionally, the NetPlay window was getting too wide with checkboxes,
so FlowLayout has been introduced to make the checkboxes take up
multiple rows dynamically. However, there are some minor vertical
centering issues I haven't been able to solve, but it's better than a
ridiculously wide window.
This adds the functionality of sending the host's save data (raw memory
cards, as well as GCI files and Wii saves with a matching GameID) to
all other clients. The data is compressed using LZO1X to greatly reduce
its size while keeping compression/decompression fast. Save
synchronization is enabled by default, and toggleable with a checkbox
in the NetPlay dialog.
On clicking start, if the option is enabled, game boot will be delayed
until all players have received the save data sent by the host. If any
player fails to receive it properly, boot will be cancelled to prevent
desyncs.
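For reference, compressing a buffer with LZO1X looks roughly like this
(error handling omitted; lzo_init() must have been called once
beforehand, and the real code should also ensure the work memory is
suitably aligned):

    #include <lzo/lzo1x.h>
    #include <vector>

    // Rough sketch: compress a save-data buffer with LZO1X before sending.
    static std::vector<unsigned char> CompressSave(unsigned char* data, lzo_uint size)
    {
      static unsigned char wrkmem[LZO1X_1_MEM_COMPRESS];

      // Worst-case expansion documented by LZO for incompressible input.
      std::vector<unsigned char> out(size + size / 16 + 64 + 3);
      lzo_uint out_len = out.size();

      lzo1x_1_compress(data, size, out.data(), &out_len, wrkmem);
      out.resize(out_len);
      return out;
    }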
Previously there was only one function under the NetPlay namespace,
which is kind of silly considering all of these other types and
functions exist outside of it.
This moves the rest of them into the namespace.
This gets some general names, such as Player, out of the global namespace.
Since all queues are FIFO data structures, the name wasn't informative
as to why you'd use it over a normal queue. I originally thought it had
something to do with the hardware graphics FIFO.
This renames it using the common acronym SPSC, which stands for
single-producer single-consumer and is most commonly used when talking
about lock-free data structures; this queue is both.
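For illustration, the core of such a queue in a very stripped-down form
(the actual implementation in Common is more featureful):

    #include <atomic>
    #include <cstddef>

    // Single-producer single-consumer ring buffer: one thread only calls
    // Push, another only calls Pop, and neither blocks the other (no
    // locks, just atomics). Holds at most N - 1 elements.
    template <typename T, std::size_t N>
    class SPSCQueue
    {
    public:
      bool Push(const T& value)
      {
        const std::size_t head = m_head.load(std::memory_order_relaxed);
        const std::size_t next = (head + 1) % N;
        if (next == m_tail.load(std::memory_order_acquire))
          return false;  // full
        m_buffer[head] = value;
        m_head.store(next, std::memory_order_release);
        return true;
      }

      bool Pop(T& value)
      {
        const std::size_t tail = m_tail.load(std::memory_order_relaxed);
        if (tail == m_head.load(std::memory_order_acquire))
          return false;  // empty
        value = m_buffer[tail];
        m_tail.store((tail + 1) % N, std::memory_order_release);
        return true;
      }

    private:
      T m_buffer[N];
      std::atomic<std::size_t> m_head{0};
      std::atomic<std::size_t> m_tail{0};
    };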
UPNP_AddPortMapping needs our IP address; however, enet_address_get_host
will return 0.0.0.0 or a host name in most cases.
This gets our IP address from the socket to the IGD.
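The underlying trick, as a standalone POSIX-sockets illustration (the
real code reads the address from the socket it already has open to the
IGD; this sketch shows the same getsockname idea with a throwaway UDP
socket):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    // Ask the OS which local address it would use to reach the IGD by
    // "connecting" a throwaway UDP socket (no packet is actually sent)
    // and reading the bound address back with getsockname.
    static bool GetLocalAddressTowards(const sockaddr_in& igd_addr, in_addr* out)
    {
      const int sock = socket(AF_INET, SOCK_DGRAM, 0);
      if (sock < 0)
        return false;

      bool ok = connect(sock, reinterpret_cast<const sockaddr*>(&igd_addr),
                        sizeof(igd_addr)) == 0;

      sockaddr_in local{};
      socklen_t len = sizeof(local);
      ok = ok && getsockname(sock, reinterpret_cast<sockaddr*>(&local), &len) == 0;

      if (ok)
        *out = local.sin_addr;

      close(sock);
      return ok;
    }

The resulting address can then be formatted as a dotted-quad string and
passed to UPNP_AddPortMapping.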
Note: I removed a SleepCurrentThread(1) call that the patch added, which
seemed unrelated to the actual job at hand. If there was a real need for
it (which sounds like it would be an enet-related bug; enet_host_service
is supposed to *sleep*), that needs to be dealt with...
Replaces them with forward declarations of used types, or removes them entirely if they aren't used at all. This also replaces certain Common headers with less inclusive ones (in terms of definitions they pull in).
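For example (illustrative names), instead of including a whole header
just to use a pointer or reference to a type:

    // Before: pulls in the full definition even though only a pointer is used.
    //   #include "SomeBigHeader.h"

    // After: a forward declaration is enough for pointers and references;
    // only files that actually use the type's members include the header.
    class SomeBigType;

    void DoSomething(SomeBigType* thing);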
Adding std::unique_ptr<sf::Packet> objects to a queue instead of
functions makes things easier to read and avoids headaches when
checking the lifetime of the objects concerned.
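Simplified, the pattern is now (names illustrative):

    #include <SFML/Network/Packet.hpp>
    #include <memory>
    #include <queue>

    // The async-send queue holds the packets themselves rather than
    // std::function callbacks, making ownership and lifetime obvious.
    class AsyncPacketQueue
    {
    public:
      void Queue(std::unique_ptr<sf::Packet> packet) { m_queue.push(std::move(packet)); }

      std::unique_ptr<sf::Packet> Pop()
      {
        if (m_queue.empty())
          return nullptr;
        auto packet = std::move(m_queue.front());
        m_queue.pop();
        return packet;
      }

    private:
      std::queue<std::unique_ptr<sf::Packet>> m_queue;
    };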