* gdbstub: fix IsMemoryBreak() returning false while connected to client
As a result, the only existing codepath for a memory watchpoint hit to break into GDB (InterpreterMainLoop, GDB_BP_CHECK, ARMul_State::RecordBreak) is finally taken, which exposes incorrect logic in both RecordBreak and ServeBreak:
* a blank BreakpointAddress structure is passed, which sets r15 (PC) to NULL
* gdbstub: DynCom: default-initialize two members/vars used in conditionals
* gdbstub: DynCom: don't record memory watchpoint hits via RecordBreak()
For now, instead check for GDBStub::IsMemoryBreak() in InterpreterMainLoop and ServeBreak.
Fixes PC being set to a stale/unhit breakpoint address (often zero) when a memory watchpoint (rwatch, watch, awatch) is handled in ServeBreak() and generates a GDB trap.
Reasons for removing a call to RecordBreak() for memory watchpoints:
* The `breakpoint_data` we pass is typed Execute or None. It describes the predicted next code breakpoint hit relative to PC;
* GDBStub::IsMemoryBreak() returns true if a recent Read/Write operation hit a watchpoint, but it doesn't report which watchpoint, nor does it record that anywhere. Thus, the only data we could give RecordBreak() is a placeholder BreakpointAddress at offset NULL and type Access. I found that idea silly compared to simply relying on GDBStub::IsMemoryBreak().
There is currently no mechanism in the code that remembers the addresses (and types) of any watchpoints hit by an instruction, in order to send them to GDB as "extended stop information."
I'm considering an implementation for this.
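A minimal sketch of the resulting check, with the dispatch loop simplified and everything except GDBStub::IsMemoryBreak() treated as a placeholder:

```cpp
// Sketch only: the real InterpreterMainLoop goes through GDB_BP_CHECK and the
// DynCom dispatch; this shows just the watchpoint handling described above.
while (running) {
    ExecuteOneInstruction(); // placeholder for the actual dispatch

    // A recent Read/Write hit a watchpoint: stop with PC intact instead of
    // recording a placeholder Execute/None breakpoint via RecordBreak().
    if (GDBStub::IsMemoryBreak()) {
        ServeBreak(); // re-checks IsMemoryBreak() and emits the GDB trap
        break;
    }
}
```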
* gdbstub: Change an ASSERT to DEBUG_ASSERT
I have never seen the (Reg[15] == last_bkpt.address) assert fail in practice, even after several weeks of (locally) developing various branches around GDB, so only leave it enabled in Debug builds.
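For reference, the check now has this shape:

```cpp
// Compiled out of Release builds; only evaluated in Debug builds.
DEBUG_ASSERT(Reg[15] == last_bkpt.address);
```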
udp_server might not be created due to an error (occupied port, etc.), in which case its destructor and the thread-ending call chain will not be executed in Server::Stop. However, the ending packet still needs to be sent whether or not UDP is running, so move it to Server::Stop.
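A rough sketch of the reordering, with member and helper names approximated rather than taken from the actual code:

```cpp
void Server::Stop() {
    // Send the ending packet unconditionally: udp_server may never have been
    // constructed (e.g. the port was occupied), so its destructor / thread
    // teardown cannot be relied on to do this.
    SendEndingPacket(); // hypothetical helper, for illustration only

    if (udp_server) {
        udp_server.reset(); // joins the UDP thread via the destructor chain
    }
}
```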
The necessity of this parameter is dubious at best; in 2019, disabling this probably offers completely negligible savings compared to just leaving it enabled. This removes it and simplifies the overall interface.
The comment already invalidates itself: neither MMIO nor the rasterizer cache belongs to HLE kernel state. This mutex's scope is too large if MMIO or the cache is included, which is prone to deadlock when multiple threads acquire these resources at the same time. If necessary, each MMIO component or the rasterizer should have its own lock.
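As a sketch of that alternative (types and names illustrative, not Citra's actual interfaces), each component would guard only its own state:

```cpp
#include <cstdint>
#include <mutex>

using u32 = std::uint32_t;

// Illustrative only: an MMIO component owning a narrow lock of its own,
// instead of sharing the HLE kernel mutex with unrelated subsystems.
class MMIOComponent {
public:
    u32 Read(u32 addr) {
        std::scoped_lock lock{mutex}; // guards only this component's state
        return DoRead(addr);
    }

private:
    u32 DoRead(u32 addr); // defined elsewhere in this sketch
    std::mutex mutex;
};
```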
To eliminate System::GetInstance usage. Archive types like SelfNCCH and SaveData change the actual reference path for different clients, so the archive backend interface should accept client information from the service interface. Currently we only pass the program ID as the client information.
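A sketch of the resulting interface shape (all names illustrative; only the program ID is modeled, per the above):

```cpp
#include <cstdint>
#include <string>

// Illustrative sketch, not Citra's actual interface.
struct ClientInfo {
    std::uint64_t program_id; // currently the only client information passed
};

class ArchiveBackend {
public:
    virtual ~ArchiveBackend() = default;
    // The service interface supplies the client info, so per-client archives
    // (SelfNCCH, SaveData) can resolve the correct path without a global.
    virtual std::string GetMountPath(const ClientInfo& client) const = 0;
};

class SaveDataArchive : public ArchiveBackend {
public:
    std::string GetMountPath(const ClientInfo& client) const override {
        return "savedata/" + std::to_string(client.program_id);
    }
};
```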
Added a few interfaces for adding/deleting/replacing/saving cheats. The cheats list is guarded by a std::shared_mutex, and would only need an exclusive lock when it's being updated.
I marked the `Execute` function as `const` to avoid accidentally changing the internal state of the cheat on execution, so that execution can be considered a "read" operation which only needs a shared lock.
Whether a cheat is enabled or not is now saved by a special comment line `*citra_enabled`.
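A condensed sketch of the locking scheme (class and member names illustrative):

```cpp
#include <memory>
#include <shared_mutex>
#include <vector>

struct Cheat {
    // const so execution cannot mutate the cheat's internal state, making it
    // safe to treat as a "read" under a shared lock.
    void Execute() const;
};

class CheatEngine {
public:
    void AddCheat(std::shared_ptr<Cheat> cheat) {
        std::unique_lock lock{mutex}; // exclusive: the list is being updated
        cheats.push_back(std::move(cheat));
    }

    void ExecuteAll() const {
        std::shared_lock lock{mutex}; // shared: concurrent execution is fine
        for (const auto& cheat : cheats)
            cheat->Execute();
    }

private:
    mutable std::shared_mutex mutex;
    std::vector<std::shared_ptr<Cheat>> cheats;
};
```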
To break a circular reference formed by process->handle_table->shared_memory->process. Since SharedMemory uses its owner process in its destructor, but no longer keeps that process alive, we need to make sure that the lifetime of the process is longer than that of the shared memory. To partially resolve this, Process now explicitly releases shared memory first in its destructor. This is under the assumption that there is no inter-process reference to shared memory on exit, which will no longer hold once we introduce more multi-process emulation. A TODO is left there for this, as more RE needs to be done on how the 3DS handles this situation.
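The destruction order, sketched with an illustrative member name:

```cpp
// Sketch only: release shared memory before the rest of teardown, so that
// SharedMemory::~SharedMemory() still sees a live owner process.
Process::~Process() {
    // Assumes no inter-process references to this memory remain on exit;
    // TODO: revisit once more multi-process emulation is introduced.
    for (auto& shared_memory : shared_memories) { // illustrative member name
        shared_memory.reset();
    }
    // Remaining teardown (handle table, etc.) runs afterwards.
}
```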
This commit is automatically generated by the following zsh command:
sed -i -- 's/BitField<\(.*\)_le>/BitField<\1>/g' **/*(D.)
BitField is now endianness-aware and defaults to little endian. It expects a value representation type without a storage specification as its template parameter.
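For example, the sed command rewrites declarations like this (field chosen for illustration):

```cpp
// Before: the endianness was encoded in the storage type.
BitField<0, 3, u32_le> format;

// After: a plain value representation type; BitField itself handles the
// little-endian storage.
BitField<0, 3, u32> format;
```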
The V-Sync option is fundamentally broken in Citra, so let's do the same as yuzu and remove it entirely for SDL2 and at least from the frontend for Qt.
(It was also only used by 7.3% of users)
std::make_unique for arrays is equivalent to doing:
std::unique_ptr<T>(new typename std::remove_extent<T>::type[size]())
(note the trailing () after the array size specifier). This means the constructed elements are value-initialized: each one gets whatever its default constructor produces. Since std::uint8_t is a built-in type with no constructor, value-initialization amounts to zero-initialization, so the memory will already be zeroed out on construction. Because of that, there's no need to zero it out again.
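A self-contained example of the guarantee, assuming a buffer of std::uint8_t:

```cpp
#include <cstdint>
#include <memory>

int main() {
    constexpr std::size_t size = 32;
    // Value-initialized: every element is already 0, so no follow-up
    // std::memset/std::fill is required.
    auto buffer = std::make_unique<std::uint8_t[]>(size);
    return buffer[0]; // guaranteed to be 0
}
```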
Based on the `roles` payload in the JWT, rooms will now grant moderator permissions to Citra Community Moderators. To notify the client of its permissions, a new response, IdJoinSuccessAsMod, is added, and there's now a new RoomMember::State called Moderator.
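A rough sketch of the role check (the role string and helper name are hypothetical; the real values come from the web service's JWT):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical role name, for illustration only.
constexpr const char* community_moderator_role = "moderator";

bool HasModPermission(const std::vector<std::string>& roles) {
    return std::find(roles.begin(), roles.end(), community_moderator_role) !=
           roles.end();
}

// On join, a room would then reply with IdJoinSuccessAsMod instead of the
// normal join response, and track the member as RoomMember::State::Moderator.
```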