Commits on Source (97)
-
Jonas Ådahl authored 74b427f5
-
Yuri Chornoivan authored 8bb74f57
-
Enrico Nicoletto authored 01c09da2
-
Trần Ngọc Quân authored 8cf54dfa
-
Daniel Mustieles authored fe325e79
-
Daniel Mustieles authored aeb6617f
-
Yaron Shahrabani authored 8a563eaf
-
Fran Dieguez authored db8c365d
-
Danial Behzadi authored 4486e57c
-
Jordi Mas authored 283557fb
-
Jordi Mas authored 053ae9fb
-
Yuri Chornoivan authored 9c0e883f
-
Luna Jernberg authored 7036203c
-
Hugo Carvalho authored 7b1c3361
-
Boyuan Yang authored bba6aeb2
-
Asier Sarasua Garmendia authored c1991bd3
-
Matej Urbančič authored 6470df28
-
Matej Urbančič authored 3ca4c11b
-
Florentina Mușat authored 94442ea4
-
Rafael authored 9f96f097
-
Alexey Rubtsov authored e4e1a5a2
-
Daniel Mustieles authored 4cbc914c
-
Andika Triwidada authored dac24f1d
-
Andika Triwidada authored 61f89cf2
-
Quentin PAGÈS authored fb77a1fb
-
Kukuh Syafaat authored d25acbf2
-
Fabio Tomat authored c6adbebf
-
Pascal Nowack authored
In order to handle SelectionRead requests asynchronously, request_server_content_for_mime_type() needs to be adjusted so that it no longer returns the data directly. The function now becomes a void function, i.e. no data is returned to the calling function. To return the fetched data to the clipboard implementation, use a newly introduced vfunc. Functionality-wise, nothing changes in this commit: the backends still get their data the same way, except that the calling function is no longer the same, as the part that processes the returned data is now a separate function.
60e1d251 -
Pascal Nowack authored
Reading the mime type content from a fd is a task for the clipboard. Handling the read() operation asynchronously should therefore be done in grd-clipboard, as it will introduce more clipboard-specific code. So, move this part into grd-clipboard as another preparation for async read() operations. Functionality-wise, nothing changes in this commit. grd_session_selection_read() now returns the read fd instead of the read mime type content.
8105215c -
Pascal Nowack authored
Currently, reading the mime type content happens by directly calling read() on a fd. This is problematic, as it only works when the other end actually supplies the mime type content. This is not always the case, and in such cases gnome-remote-desktop freezes with 100% CPU usage in read().

To get rid of this situation, the read() operation must happen asynchronously, and if a certain timeout passes, it needs to be aborted. Do this using a GTask, which runs in another thread and calls g_input_stream_read(), listening on the read fd and on the cancellable fd. This allows g-r-d to stay responsive even when an application does not supply the mime type content. When the GTask is done with the async operation, it creates a GSource and attaches it to the thread that created the GTask.

Since that GSource function can also run when the client is already gone, abort the current read operation with a GCancellable when disposing the clipboard. If a new mime type list is advertised by the server or client, abort the current read operation and flush the current mime type content. This ensures that the order in which all clipboard operations happen is kept, as the RDP clipboard, for example, does not define how pending FormatDataRequests are handled when a new FormatList is advertised.

If the read operation was successful, but the mime type content has not been submitted yet, submit the mime type content. Otherwise, inform the backend about the abortion of the operation. For clipboard implementations that support delayed rendering of clipboard data (RDP), this ensures that the client is notified about the read result. In the case of RDP this means that the FormatDataResponse with the fail flag is sent.

When the read() operation is aborted, wait for the GTask thread function to complete, using a GCond. This ensures that the read result can be retrieved without any race conditions. It also ensures that mutter won't deny any new SelectionRead() requests from gnome-remote-desktop: the fd of the pipe is closed on gnome-remote-desktop's side before the GCond is signalled, so mutter won't deny the request with the "reading in parallel" error. Fixes: https://gitlab.gnome.org/GNOME/gnome-remote-desktop/-/issues/60
703ea94c -
Efstathios Iosifidis authored 5005b0ab
-
Pascal Nowack authored
After a very long time, Fedora finally started shipping FreeRDP >= 2.3.x stable releases, which now allows bumping the version requirement. So, do exactly that and get rid of all the HAVE_FREERDP_2_3 ifdefs.
807cc4ec -
Pascal Nowack authored
The OsMinorType was never set; instead, the OsMajorType was overwritten a second time with an invalid value. Fix this by replacing the second OsMajorType with OsMinorType.
67c229de -
Pascal Nowack authored d3e490ee
-
Pascal Nowack authored
In most cases, clear_instant_droppable_clip_data_entry() clears just one entry. However, the implementation does not stop after finding the first instant droppable clip data entry, so rename the function to clear_instant_droppable_clip_data_entries. Also set the copyright year correctly, as the clipboard data locking implementation was done in 2021.
3ce85f44 -
Pascal Nowack authored
The debug output message is supposed to say "All clipDataIds used." instead of "All clipDataIds still used.", as no clipDataId deletion happened before.
d6b141e6 -
Pascal Nowack authored
CLIPRDR does not perform stream file clipping; it performs streaming operations of file clips. So, fix that output message. Also fix some comments to use the word "notify" instead of "inform", and fix a comment to correctly reflect the documentation ("can now be released" -> "MUST now be released"). Additionally, set the copyright year correctly, as the clipboard data locking implementation was done in 2021.
bf337e4b -
Pascal Nowack authored
When the remote desktop session ends while there is still a list of pending mime type tables for the server, these tables are leaked. While this situation is very unlikely in practice, it is theoretically possible. So, save the FormatListUpdateContext, like the FormatDataRequestContext, on the clipboard_rdp struct to allow freeing the memory when the associated server_format_list_update_id is cleared.
982b4db3 -
Pascal Nowack authored
Commit 982b4db3 fixes a memory leak that can happen if there is still a list of pending mime type tables when the session ends. While that commit ensures that the update context is freed, it does not clear the list of pending mime type tables itself. Do this now in this commit. However, use g_idle_add_full() here instead of g_idle_add(), as that function allows the caller to pass a destroy function, which is called when the GSource is removed. When the source function is called, steal the mime type tables to ensure that the pointer is NULL, avoiding a double free. Fixes 982b4db3
5bb43cc1 -
Philipp Kiemle authored 7b668b59
-
Marek Černocký authored 2402bc23
-
Fran Dieguez authored e2f3d206
-
Zander Brown authored 54097f5e
-
Pascal Nowack authored
Currently, the rdp-fuse-clipboard creates the FUSE session in the main thread and executes the FUSE loop in the FUSE thread. However, when creating the FUSE session, FUSE creates thread-specific data. This thread-specific data lies in the main thread, but MUST lie in the FUSE thread, as it is accessed there, especially when the session ends.

The problem when the FUSE session ends is that if a pending file operation is happening, FUSE cannot forcibly abort it. Instead, it waits until the user in Nautilus has confirmed the "OK" message that tells them the data is not available any more. While this situation could be worse (especially with headless sessions, as gnome-remote-desktop would effectively freeze here), it is an undefined situation, as the thread-specific data is not available in the FUSE thread.

To solve this, create the FUSE session in the FUSE thread. This ensures that the thread-specific data that FUSE creates is created and accessed in the correct thread. Also, since fuse_session_exit() does not directly stop the FUSE session upon being called, but merely sets an exit flag, call the stat command on the FUSE root directory. FUSE always waits for an operation; upon retrieving one, it checks the exit flag, and if it is set, FUSE exits the FUSE loop.

Since the user can unmount any (FUSE) mount at any time, don't directly destroy the FUSE session. Instead, wait for the main thread here. This ensures that no race conditions happen where fuse_session_exit() might be called on a NULL pointer or similar. This waiting operation uses WinPR events and will not consume precious CPU time. Also do the same when starting the FUSE session. This ensures that the FUSE session always exists when the FUSE clipboard has been created.
5e669c29 -
Pascal Nowack authored
Normally, all FormatLists contain each announced format only once. With xfreerdp3, this is not the case any more, since the introduction of server-to-client file transfer via the clipboard for xfreerdp. Since file lists are announced via the name "FileGroupDescriptorW" and their id is dynamically assigned, gnome-remote-desktop replaces the existing mime type tables of their mime type. The problem with duplicated mime types is that gnome-remote-desktop correctly replaces the old mime type tables, if they exist, but still tries to announce each freed mime type table to mutter. GLib detects this situation when creating the string variant for each type and emits a critical warning (instead of crashing). To solve this, use a GHashTable to ensure that mime type tables are only added once. Once all mime type tables are in the list, destroy the temporary hash table. With this handling, gnome-remote-desktop ignores all duplicated entries in a FormatList and only picks their first occurrence.
a9a3e331 -
Jiri Grönroos authored 7d7077af
-
Baurzhan Muftakhidinov authored 39487468
-
Jordi Mas authored 2bd1575c
-
Dušan Kazik authored dae05d4a
-
Seong-ho Cho authored 8bfbeff6
-
Danial Behzadi authored 2177c952
-
Nathan Follens authored ec922224
-
Aurimas Černius authored 66ff47cb
-
Pascal Nowack authored
The Disconnect Provider Ultimatum PDU is supposed to be the last PDU that is sent to the client. This can only be assured if the socket thread ends before sending this PDU. So, move the Close() call below the join call of the socket thread.
adfdb804 -
Pascal Nowack authored
Unsetting the RDP_PEER_ACTIVATED flag ensures that actions depending on this flag won't run any more. Currently, this flag is only unset when the client disconnects, but not when the disconnection is initiated from gnome-shell. So, always unconditionally unset the RDP_PEER_ACTIVATED flag when stopping the RDP session.
450ec753 -
Pascal Nowack authored
This is a preparatory step for the graphics pipeline. While in the legacy path frame updates happen directly in the graphics output buffer that is visible to the user, the graphics pipeline handles frame updates differently: instead of updating one chunk of area in the graphics output buffer, the graphics pipeline only updates an offscreen surface. That surface can be mapped to the user-visible graphics output buffer (to become an onscreen surface) for usage as a monitor or window (RAIL), but is not limited to that. The graphics pipeline also allows using offscreen surfaces to e.g. composite frames with the blitting PDUs or to cache frame content. Each RDP surface can later correspond to a GFX surface, but does not have to, to remain compatible with the legacy path. In the next step, gnome-remote-desktop will be adapted to use RDP surfaces when handling frame updates.
2b3e89fb -
Pascal Nowack authored
With the introduction of the RDP surface in the last commit, adapt the frame handling in session-rdp to it. RDP surfaces can be invalidated when, for example, a recreation is necessary. This will be the case when the surface is resized, as GFX surfaces cannot be resized directly; instead, they need to be recreated. This will later also force the codec to reset when the frame is progressively encoded (e.g. when using H264 or RFX Progressive with TILE_FIRST + TILE_UPGRADE tiles).
2453c6bc -
Pascal Nowack authored
It is now unused and won't be needed any more.
74ad7481 -
Pascal Nowack authored
RFX and RFX Progressive are similar codecs. However, they are not the same, and even when progressive encoding is not used, the encoded frame will not be the same: for the Golomb-Rice coding, RFX uses the RLGR3 mode, while RFX Progressive uses the RLGR1 mode. Since, with the introduction of the graphics pipeline, both are supported, set the RLGR mode before encoding the frame data to ensure that the correct mode is used.
9b0f07cb -
Pascal Nowack authored
Add a GSource to encode the pending frame data. This GSource will later be used to update all RDP surfaces to their latest frame. To be able to do that, also save the pending frame for an RDP surface when the current situation does not allow updating the RDP surface, but a later one will.
87696911 -
Pascal Nowack authored
Use the GSource that was added in the previous commit to encode any pending frame data when the client stops suppressing the output. This will be the case when the user restores the RDP client or switches to the workspace where the restored RDP client window lies. Without this commit, the user would have to trigger a new frame by e.g. moving the mouse to receive the latest frame content.
026ac0af -
Pascal Nowack authored
For different actions, like colour conversion, FreeRDP uses the FreeRDP primitives. When the FreeRDP primitives are used for the first time, FreeRDP runs a small benchmark, which can take up to ~310ms. Running the benchmark earlier (when the server starts) saves that time when the first RDP client connects. So, add a primitives_get() call when the RDP server initializes to ensure that the benchmark won't have to run any more when the user connects.
71b48f43 -
Pascal Nowack authored
The graphics pipeline is a dynamic channel and has its own capability exchange. This means: if the connection is activated, the graphics pipeline will not be ready yet. The RDP_PEER_PENDING_GFX_INIT flag signals that there is a pending capability exchange (graphics pipeline not ready yet), while the RDP_PEER_PENDING_GFX_GRAPHICS_RESET flag signals that gnome-remote-desktop needs to submit the monitor configuration and the size of the graphics output buffer to the client first, in order to be able to submit surface updates.
eed32dc1 -
Pascal Nowack authored
Currently, gnome-remote-desktop encodes all pending frame data upon the end of the SuppressOutput PDU. When using the graphics pipeline, gnome-remote-desktop needs to be able to control the rate of the encodings, i.e. to suspend the encoding and continue it later. This is the case when the client is too slow with the decoding: gnome-remote-desktop then needs to immediately stop the encoding to not flood the client with too much new frame content. When the client has acked enough pending frames, the encoding process continues. To be able to do that, add an API to encode the pending frame of an RDP surface.
a5292693 -
Pascal Nowack authored
This attribute indicates, when set to TRUE, that a new frame for this RDP surface should not be encoded yet. It is independent of the graphics pipeline, meaning session-rdp can later use this attribute without considering whether the graphics pipeline is supported or not. It will, however, later only be used for the graphics pipeline.
799a730d -
Pascal Nowack authored
This class will later represent a GFX surface. Add the class now, so it can be already tracked in the corresponding RDP surface.
66df97a8 -
Pascal Nowack authored
This FrameInfo struct will later be used by the graphics pipeline and the frame log of a GFX surface to rewrite the frame history.
37d9ea27 -
Pascal Nowack authored
The GFX frame log will track pending frame acks of a GFX surface and calculate the frame encoding and frame acking rate. The latter part will be important later when dealing with the network latency, as gnome-remote-desktop cannot fully rely on the measured round trip time for handling the encoding rate, since the measured round trip time can also reflect bottlenecks on the client side. The GFX frame log is therefore also an initial step towards network autodetection.
b895c7e3 -
Pascal Nowack authored 096b482d
-
Pascal Nowack authored
The capability that indicates support for the graphics pipeline on the client or the server is already exchanged upon connection of the RDP client. If the client indicates support for the graphics pipeline but actually does not support it (opening the graphics pipeline fails), the client heavily violates the protocol. In the future, gnome-remote-desktop might also require the graphics pipeline for specific use cases, like headless sessions or RAIL, as some actions, like submitting alpha channel data (usually when using RAIL), submitting the monitor configuration to the client, or handling network latency, can only be (easily) done with the graphics pipeline. Additionally, if the client tries to reset the graphics pipeline with a new capability exchange (CapsAdvertise) but is not allowed to, or submits capability sets that are unsupported by gnome-remote-desktop, then gnome-remote-desktop needs to get rid of the client. So, add an API for this use case. Normally, these severe situations should not happen, but gnome-remote-desktop needs to be able to handle them if they do.
efe8d3aa -
Pascal Nowack authored
The graphics pipeline will use this API, when the capability exchange or protocol reset is done. This will later (re)start the encoding process.
5dbed558 -
Pascal Nowack authored
The graphics pipeline will use this API later to indicate a protocol reset. This will happen, when the RDP client resets the protocol by using the CapsAdvertise PDU.
8344d5f9 -
Pascal Nowack authored
Starting with Windows 8, Microsoft revamped the graphics handling for RDP with the graphics pipeline. The graphics pipeline is a dynamic channel that redefines how frame updates are handled:

1. Frame updates don't have to happen directly on the graphics output buffer any more. Instead, surfaces are used. These surfaces can be user-visible (onscreen) surfaces, or be used in the background for different purposes, like caching, or compositing frame content onto another surface using SurfaceToSurface, SurfaceToCache, and CacheToSurface updates.

2. Every frame update, whether it is a WireToSurface, SurfaceToSurface, etc., MUST be grouped into logical frames. These logical frames are then used for the frame acknowledge, but can also be used for other purposes like preventing tearing. The legacy path also allows the usage of something like frame markers to mark logical frames; in the graphics pipeline, however, logical frames are mandatory, as they are necessary for tracking frames, allowing the server to e.g. slow down the encoding rate when the client is too slow with the decoding process.

3. The graphics pipeline is a dynamic channel, which allows it to also run via UDP in the future. Additionally, gnome-remote-desktop won't have to care about things like the MultifragMaxRequestSize any more when pushing updates, as the dynamic channel handles splitting up the packets itself. Pushing n tiles now and the other m tiles in another PDU won't have to happen any more, like in the legacy path. This is especially useful when handling codecs like H264, where gnome-remote-desktop won't know how the encoded data is structured.

4. Progressive rendering: the RemoteFX Calista Progressive codec and H264 can be used for encoding content. Both codecs support progressive rendering. The server then tracks the client state of the codec context and can push progressive updates, which depend on the previously sent data, allowing the server to reduce the bandwidth usage.

5. For gnome-remote-desktop specifically, this also means that the encoding thread won't have to block any more when pushing a frame update. Instead, like in the case of the cliprdr channel, the update is pushed to a queue, which is then flushed in the socket thread, allowing the encoding thread to save time.

For the graphics pipeline, gnome-remote-desktop needs to implement the following PDUs:

CapsAdvertise: The RDP client sends this PDU once the graphics pipeline has been opened. It contains the capability sets that the client supports. Each capability set may contain flags, like whether H264 is supported by the client or not. This PDU can also be sent during the connection to reset the graphics pipeline; this is only possible when the first accepted capability set was at least version RDPGFX_CAPVERSION_103.

CacheImportOffer: This PDU is usually sent by the client once the capabilities have been exchanged. The client tells the server what is currently in its offline surface cache. Not every client, however, makes use of this PDU. xfreerdp, for example, never sends this PDU to the server, which means that xfreerdp has no frame cache that is persistent between connections.

FrameAcknowledge: This PDU is important. The client sends it after a logical frame has been decoded and displayed. It contains the frame id, the amount of frames that have been decoded by the client since the last protocol reset, and the queue depth. The queue depth can either provide no information, provide the amount of data that is unprocessed by the client (in bytes), or indicate that the client suspends the frame acknowledgement. In the latter case, the server just assumes that the client is fast enough to handle the frame updates. The client can, however, still opt back into frame acknowledgement by sending this PDU again with the queue depth not set to the magic value that indicates the frame acknowledgement suspension.

QoeFrameAcknowledge: This is an optional PDU, usually sent after the FrameAcknowledge PDU, containing information like the time the client needed to decode and display the frame. This PDU is normally not sent; its usage is usually for debugging purposes only.

This commit implements two classes: the graphics pipeline and the GFX surface. The GFX surface corresponds to an RDP surface. It is also autonomous in being able to decide whether frame updates for a surface should be suspended or not. The usage here is to control the encoding rate, which usually happens if the client is too slow with the decoding and displaying process to keep up with the server. This is done with the help of the GFX frame log. By default, the GFX surface always allows one in-flight frame before it throttles the encoding rate. The encoding rate is controlled by suspending the encoding; it is resumed with an internal GSource, which runs in the same thread as the main encoding GSource.

While this commit does not consider the round trip time yet, the GFX surface already has the handling for round trip times. The mechanism is similar to a Schmitt trigger: by default, the GFX surface enters the throttling mode when two or more frame acks are missing. When this limit is reached, the encoding rate is determined by the ack rate of the client, meaning the encoding rate won't surpass the ack rate. This obviously only works if the server is constantly encoding, which is e.g. the case when watching a video. This throttling mechanism has a specific advantage: it is independent from the round trip time. Using the round trip time in the throttling mode can have the opposite behaviour of throttling by allowing more and more frames, since the round trip time would always increase as the client gets flooded with too many frames, with the effect that the client might not be able to handle the RTTResponse directly. To leave the throttling mode, the amount of pending frame acks must fall below two again. This means: whether the GFX surface throttles the encoding rate depends on the amount of pending frame acks, and in the throttling mode the encoding rate is determined by the encoding and ack rate. In later commits, this will also consider the round trip time by increasing the activate threshold. This will then allow gnome-remote-desktop to handle fast clients with low latency, slow clients with low latency, fast clients with high latency, and slow clients with high latency. By letting the GFX surface handle the throttling, instead of the graphics pipeline, the client can optimize its frame handling by e.g. letting a specific GPU handle a specific surface, in case hardware acceleration is being used.

For the frame acknowledge, the graphics pipeline needs to be able to track the frame ids too. The RDP client is allowed to suspend or resume the frame acknowledge. mstsc, for example, makes use of that: it usually opts out of frame acknowledgement after the first frame was received. Only if mstsc realizes that it cannot keep up with the server will it opt back into frame acknowledgement. In that case, the graphics pipeline needs to restore the state of the client. To do that, track the frame ids of the encoded frames, regardless of whether frame acknowledgement is suspended or not. When the frame acknowledgement is suspended, push the frame info about the currently encoded frame to a queue. This queue has a limit of 1000 tracked frames. When the client opts back into frame acknowledgement, use the total frames decoded value to determine how far the client lags behind. The amount of pending frame acks calculated that way is then used to unack the last n frames, where n corresponds to the amount of pending frame acks. The GFX surfaces will then reevaluate their throttling state.

Another situation that needs to be handled is when frame acks are received out of order:

1. frame_ack_n+1, which suspends the frame acknowledgement
2. frame_ack_n+0, which continues the frame acknowledgement

In this situation, the implementation of the graphics pipeline takes a look at the pending frame acks using the total frames decoded value, which is included in the FrameAcknowledge PDU. If the value is <= 1000, then the graphics pipeline discards the PDU, since the frame has already been auto-acked with frame_ack_n+1 due to the SUSPEND_FRAME_ACKNOWLEDGEMENT indication. If the value is > 1000, which is highly unlikely, then the PDU won't be discarded; if the client really lagged that many frames behind, then something is certainly wrong with the client.

Currently, only RFX Progressive with TILE_SIMPLE tiles (non-progressive encoding) is supported by the implementation of the graphics pipeline. This codec MUST be supported by the client when using the graphics pipeline. In the future, the RFX Progressive codec will be used as a fallback when other, more efficient codecs are not available (like H264).
7a3031ba -
Pascal Nowack authored
Use the previously implemented graphics pipeline class to add support for the graphics pipeline. The graphics pipeline is a dynamic channel and is initialized differently from the CLIPRDR channel, which is a static channel. For this, the graphics pipeline first needs to wait until the DRDYNVC channel (which is a special static channel) is initialized, as that channel tunnels all dynamic channels.
b524234a -
Pascal Nowack authored
The refresh rate will later be used to determine the activate threshold for the throttling mechanism in the GFX surface, when also considering the round trip time.
c6cd7569 -
Pascal Nowack authored
Currently, this is hardcoded to 30 FPS. Also use the value for the maximum framerate in the PipeWire class.
aac4dcf5 -
Pascal Nowack authored
Extend the throttling mechanism by also considering the round trip time: the higher the round trip time, the higher the activate threshold for the throttling mechanism. This ensures that even at high latencies (everything works fine up to 500ms), gnome-remote-desktop is still able to provide a smooth experience. Frames will still arrive delayed, which obviously cannot be changed, but the experience stays smooth regarding any frame updates with both slow and fast clients.
cb7804cd -
Pascal Nowack authored
Since the throttling mechanism lies in the GFX surface, just pass the round trip time to the GFX surfaces.
d9fe3425 -
Pascal Nowack authored
In order to be able to implement separate classes that handle Fast- and Slowpath PDUs, the RdpPeerContext struct needs to be moved out of the GrdSessionRdp class. Before doing that, move the elements of the RdpPeerContext struct that are private to the GrdSessionRdp class into the GrdSessionRdp struct. This is a preparatory step for the implementation of a class that detects the network characteristics of the RDP session, as this class will use and hook up to Fast- and Slowpath PDUs.
c15d62db -
Pascal Nowack authored
This allows the creation of separate classes for Fast- and Slowpath PDUs.
afe20213 -
Pascal Nowack authored
Starting with Windows 8, Microsoft added a few PDUs that allow the server to detect network characteristics, such as the round trip time (RTT) or the available bandwidth. These characteristics are measured with the RTT Measure-Request/-Response and Bandwidth Measure-Start/-Stop PDUs. In order to make use of these PDUs, add a new class that hooks up to them. Currently, only RTT detection is implemented. RTT detection works by putting a sequence number into a hash table, saving the time for the sequence, sending an RTTRequest with the sequence number to the client, and, when the client responds with the RTTResponse, calculating the time difference between the response and the request. This RTT value is then forwarded to a consumer, like the graphics pipeline. The graphics pipeline forwards the RTT value to the GFX surfaces, which then use that value to calculate the activate threshold for the throttling mechanism. To smooth out spikes in the RTT value, ignore out-of-order RTTResponses and calculate the average RTT value from the RTTs of the last 500ms. Also limit the maximum RTT value to 1000ms, as RTTs above that value are extremely hard to handle, if they can be handled at all. The ping interval will for now always be 70ms, and RTTRequests are only sent when there is an RTT consumer.
4c646d45 -
Pascal Nowack authored
Add support for autodetecting network characteristics using the previously implemented class. When the graphics pipeline is being used, add the graphics pipeline as RTT consumer to ensure a smooth experience even on high latencies. When the SuppressOutput PDU is received, there won't be any reason to emit RTTRequests, as no frame updates will happen. In that case, remove the graphics pipeline as RTT consumer, until the SuppressOutput PDU allows gnome-remote-desktop to send frame updates again.
1224260d -
Pascal Nowack authored
When no frame updates happen for some time, but the client window is not minimized, gnome-remote-desktop currently still happily emits a lot of RTTRequests, producing unnecessary network activity. This usually results in a bandwidth usage of ~2.5KiB/s. When the graphics pipeline detects that the global encoding rate has stalled at zero, it should be able to notify the network autodetection class about it, so that the network autodetection class can lower the ping frequency. Lowering the ping frequency, instead of stopping the detection mechanism completely, still allows gnome-remote-desktop to track the current round trip time, ensuring that the throttling mechanism has an up-to-date value of the round trip time when gnome-remote-desktop continues to encode frame content. When any encoding activity happens again, the graphics pipeline notifies the network autodetection class again to increase the ping frequency for a more accurate round trip time.
45a47f2c -
Pascal Nowack authored
With the API to lower or increase the ping interval in place, add a mechanism that uses it to change the ping interval depending on the encoding rate (and therefore the need for fresh round trip times). For this, add a new GSource, which checks the amount of surface updates every second. If the amount reaches zero, destroy the GSource and notify the network autodetection class to lower the ping interval. If the encoding starts again, attach the (recreated) GSource again and notify the network autodetection class to increase the ping interval.
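The per-second check can be sketched as a small state machine in plain C. The real implementation uses a GLib GSource; here the source attach/destroy cycle is modeled by a flag, and all names and the two-state enum are assumptions made for illustration.

```c
#include <assert.h>

typedef enum
{
  PING_INTERVAL_HIGH, /* frequent pings while frames are being encoded */
  PING_INTERVAL_LOW,  /* sparse pings while the encoding rate is zero */
} PingInterval;

static PingInterval ping_interval = PING_INTERVAL_HIGH;
static unsigned int surface_updates = 0;
static int source_attached = 1;

/* Invoked once per second, like the GSource described above */
static void
on_update_check_tick (void)
{
  if (surface_updates == 0)
    {
      /* Encoding stalled: destroy the source, lower the ping interval */
      source_attached = 0;
      ping_interval = PING_INTERVAL_LOW;
    }
  surface_updates = 0;
}

/* Invoked whenever a surface update is encoded */
static void
on_surface_update (void)
{
  ++surface_updates;
  if (!source_attached)
    {
      /* Encoding resumed: reattach the source, raise the ping interval */
      source_attached = 1;
      ping_interval = PING_INTERVAL_HIGH;
    }
}
```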
d0c33933 -
Pascal Nowack authored
While the throttling mechanism can currently take care of higher and increasing RTT values, it cannot yet take care of lowering the latency. For example: if an RTT value of 300ms is detected and the RDP client is slow with the decoding and displaying process, then gnome-remote-desktop adapts to the situation. However, if the RTT value suddenly drops to, for example, 150ms, the frame content might still be delayed by 300ms, since the client might be too slow with the frame updates.

To solve this, recalculate the activate threshold for the throttling mechanism whenever a new frame is encoded or a frame is acked. When the new activate threshold is lower than the current one, suspend the encoding until the client has acked enough logical frames, then reevaluate the throttling situation. This ensures that gnome-remote-desktop uses the lowest possible activate threshold for the throttling mechanism, providing the lowest possible latency while still being able to handle high latency connections.
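The recalculation step can be sketched as follows. Note that the threshold formula here is purely a placeholder assumption (one in-flight frame per frame interval of RTT); gnome-remote-desktop's actual calculation is not shown in this commit message, and all names are hypothetical.

```c
#include <assert.h>
#include <stdint.h>

/* Derive an activate threshold from the current RTT.
 * The formula is an assumption for illustration only. */
static unsigned int
get_activate_threshold (int64_t rtt_ms,
                        int64_t frame_interval_ms)
{
  unsigned int threshold = (unsigned int) (rtt_ms / frame_interval_ms);

  return threshold > 0 ? threshold : 1;
}

/* When the threshold drops, encoding stays suspended until the client
 * has acked enough logical frames to get below the new threshold. */
static int
must_suspend_encoding (unsigned int n_unacked_frames,
                       unsigned int activate_threshold)
{
  return n_unacked_frames >= activate_threshold;
}
```

With an RTT drop from 300ms to 150ms, the threshold shrinks, so encoding is suspended until the backlog of unacked frames falls below the new, lower threshold.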
7d510f13 -
Pascal Nowack authored
The names desktop_width and desktop_height don't represent the situation (any more), since the damage detection mechanism now handles surfaces (monitors, windows, parts of monitors or windows, etc.), which won't necessarily represent the whole graphics output buffer.
9654a734 -
Pascal Nowack authored
This is the first step for hardware acceleration support in gnome-remote-desktop. The implementation of hardware acceleration using NVENC and CUDA is way easier than the implementation of hardware acceleration using VAAPI.

To be able to use NVENC and CUDA, use the ffnvcodec headers. These headers provide functions to easily load NVENC and CUDA using dlopen(). NVENC itself will be used to encode AVC420 content, which will then be pushed via the graphics pipeline. CUDA will be used to perform the BGRX to YUV420 colour conversion, as that massively improves the performance compared to colour conversion computed on the CPU.

To be able to encode AVC420 frames, the frame size needs to be aligned to a multiple of 16 (both width and height), and the colour format needs to be converted to YUV420. In addition, NVENC seems to require the frame height to be aligned to a multiple of 64, as otherwise the resulting frame on the client may contain a black strip in the middle of the frame.

Since the FreeRDP primitives are way too slow (17-24ms for a FullHD frame) to take care of the colour conversion, use a CUDA kernel to perform this operation. This reduces the conversion time to under 400µs (313µs according to the NVIDIA Visual Profiler) on a GeForce GTX 660. The resulting image (YUV420 in the NV12 format) is then passed to NVENC, which encodes the frame.

When using NVENC with MBAFF, NVENC requires the image to already be interlaced. If it is not interlaced, even lines end up in the resulting image at the position y / 2 instead of y, while odd lines end up at the position y / 2 + aligned_height / 2 instead of y. To handle this, calculate the interlaced position directly in the CUDA kernel function. The resulting image will then be correct on the client side.

NVENC support was introduced with the Kepler generation. Since the CUDA toolkit removed Kepler support with version 11, and most distributions don't ship a CUDA package, ship the generated PTX code with gnome-remote-desktop. The PTX code is generated with CUDA toolkit version 10 and will work for all Kepler and later GPUs, since PTX code is forward compatible. PTX code is not a binary; it is a human readable text file.

CUDA code is generated in two steps: First, the PTX code: PTX code is generated for a specific compute capability, but can also be processed by GPUs that support a newer compute capability. However, it cannot be processed by GPUs with an older (lower) compute capability. Second, the CUDA binary: the binary that will end up on the GPU eventually. Technically, gnome-remote-desktop could ship that binary, but this is not suitable: First, as a binary, it cannot be easily verified. Second, the binary is GPU specific, meaning gnome-remote-desktop would have to ship a fat binary to cover all GPUs.

The NVIDIA driver ships a JIT compiler, which can load PTX code and generate the CUDA binary at runtime. When gnome-remote-desktop starts, the NVIDIA driver automatically uses the JIT compiler to produce the device specific CUDA binary. This is a fast process and it also loads the module. gnome-remote-desktop then uses the module to perform the colour conversion when encoding an AVC420 frame.

In the future, the NVENC and CUDA implementation will be extended to also produce AVC444 frames. The NVENC capable GPU doesn't need to support AVC444 frames for this, since RDP uses a special way to create AVC444 frames (composed of two AVC420 frames: one main view, one auxiliary view).
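The alignment and MBAFF interlacing rules described above are simple index arithmetic, shown here in plain C (the helper names are hypothetical; the real calculation happens inside the CUDA kernel):

```c
#include <assert.h>
#include <stdint.h>

/* Round a dimension up to a power-of-two alignment:
 * width to a multiple of 16, height to a multiple of 64 for NVENC */
static uint32_t
align_up (uint32_t value, uint32_t alignment)
{
  return (value + alignment - 1) & ~(alignment - 1);
}

/* Map a source row to its interlaced target row for MBAFF:
 * even lines form the top field, odd lines the bottom field */
static uint32_t
get_interlaced_row (uint32_t y, uint32_t aligned_height)
{
  if (y % 2 == 0)
    return y / 2;
  return y / 2 + aligned_height / 2;
}
```

For a 1920x1080 frame, the height is padded to 1088; row 0 stays at 0, row 1 moves to 544, row 2 to 1, and so on, which is exactly the layout NVENC expects when fed pre-interlaced NV12 data.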
ab2231c1 -
Pascal Nowack authored
With the NVENC and CUDA class implemented in the last commit, now add the handling to use it to produce AVC420 frames. H264 content can only be submitted via the graphics pipeline and will only be produced if the client supports H264. Since NVENC does not support damage rects (only emphasis regions), always encode the frame progressively. The first frame will obviously be an IDR frame, while the subsequent frames will be progressive frames.

Using H264 drastically reduces the bandwidth usage. Instead of a constant QP value, gnome-remote-desktop uses a constant quality value (of 22) to encode the frames, since a constant QP can waste bandwidth, while the constant quality value yields the same quality but may let the encoder use fewer bytes for the bitstream.
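In the public NVENC API (the headers shipped with ffnvcodec), constant quality maps roughly to the following rate-control fragment. This is a non-compilable sketch under stated assumptions: the surrounding encoder setup is omitted, and the exact configuration gnome-remote-desktop uses may differ.

```c
#include <ffnvcodec/nvEncodeAPI.h>

NV_ENC_CONFIG enc_config = { 0 };

/* Constant quality: VBR rate control with a target quality value,
 * instead of NV_ENC_PARAMS_RC_CONSTQP with fixed QP values */
enc_config.rcParams.rateControlMode = NV_ENC_PARAMS_RC_VBR;
enc_config.rcParams.targetQuality = 22;
```

With a fixed QP, the encoder spends a similar number of bits regardless of content; with a target quality, it is free to emit fewer bits for frames that are easy to encode.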
eacedf33 -
Jonas Ådahl authored 4b87908e -
gogo authored 3894a033 -
Balázs Meskó authored 5aeaf913 -
Piotr Drąg authored 13f93335 -
Claude Paroz authored 4fb5fce2 -
Emin Tufan Çetin authored 7369671b -
Emin Tufan Çetin authored bd47fc51 -
Jonas Ådahl authored 240489f3 -
Jeremy Bicha authored 0943e5dd
data/README 0 → 100644
data/grd-cuda-avc-utils_30.ptx 0 → 100644
data/meson.build 0 → 100644
po/LINGUAS 0 → 100644
po/POTFILES.in 0 → 100644
po/ca.po 0 → 100644
po/cs.po 0 → 100644
po/de.po 0 → 100644
po/el.po 0 → 100644
po/en_GB.po 0 → 100644
po/es.po 0 → 100644
po/eu.po 0 → 100644
po/fa.po 0 → 100644
po/fi.po 0 → 100644
po/fr.po 0 → 100644
po/fur.po 0 → 100644