P2.3 Quality of Service (QoS): Prioritizing Time-Sensitive Traffic in PCIe

In our continued exploration of the PCIe Transaction Layer, we must address how the system handles competing types of data. Unlike legacy shared buses, where data is generally handled on a first-come, first-served basis, PCIe was designed from its inception to support time-sensitive transactions. For applications like streaming audio or video, data delivery must be timely to be useful.

Here is a look at how PCIe utilizes Traffic Classes and Virtual Channels to guarantee bandwidth for critical data.

The Problem of Time-Sensitive Data

To illustrate why prioritization is necessary, imagine a system where a video camera and a legacy SCSI storage device both need to send data to system RAM at the exact same time.

  • The Camera (Time-Critical): The video camera data is highly time-sensitive. If the PCIe transmission path cannot keep up with the required bandwidth, frames will get dropped and the captured video will appear choppy. The system must guarantee a minimum bandwidth for this stream.
  • The SCSI Device (Non-Time-Critical): Conversely, the SCSI storage data must be delivered flawlessly without errors, but the exact amount of time it takes to arrive is not critically important.

Clearly, when both a video data packet and a SCSI storage packet are waiting to be sent, the system must recognize the difference and give the video traffic a higher priority. The ability of the PCIe architecture to assign these different priorities and route packets through the topology with deterministic latencies and guaranteed bandwidth is referred to as Quality of Service (QoS).

How QoS Works: TCs and VCs

PCIe implements QoS through a partnership between system software and dedicated hardware buffers. It relies on four core elements:

1. Traffic Classes (TC) Before a packet is ever transmitted, software assigns it a specific priority level by setting a 3-bit field within the packet’s header called the Traffic Class (TC). As a general rule, assigning a higher-numbered TC to a packet gives it a higher priority within the PCIe fabric.
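To make the TC label concrete: in the TLP header, the 3-bit TC field occupies bits [6:4] of byte 1. The helper functions below are our own illustrative names, but the bit positions follow the header layout:

```python
# Minimal sketch: reading and writing the 3-bit Traffic Class field,
# which sits in bits [6:4] of byte 1 of a PCIe TLP header.

TC_SHIFT = 4
TC_MASK = 0b111 << TC_SHIFT  # 0x70

def set_tc(header_byte1: int, tc: int) -> int:
    """Return byte 1 of the TLP header with the TC field set to `tc`."""
    if not 0 <= tc <= 7:
        raise ValueError("TC is a 3-bit field: 0-7")
    return (header_byte1 & ~TC_MASK & 0xFF) | (tc << TC_SHIFT)

def get_tc(header_byte1: int) -> int:
    """Extract the Traffic Class from byte 1 of a TLP header."""
    return (header_byte1 & TC_MASK) >> TC_SHIFT

byte1 = set_tc(0x00, 5)            # label a packet TC5
print(hex(byte1), get_tc(byte1))   # 0x50 5
```

TC0 is the default class; software only assigns higher TC values when it wants a packet treated preferentially.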

2. Virtual Channels (VC) While the TC is just a software label, the Virtual Channel (VC) is a physical reality. PCIe ports can implement multiple independent sets of transmit and receive buffers called Virtual Channels (VC0 is mandatory; additional VCs are optional). When a packet enters the Transaction Layer, the hardware looks at its TC label and places the packet into the appropriate VC buffer.
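The TC-to-VC routing can be pictured with a toy model. Real hardware exposes this mapping through the TC/VC Map in the VC Capability structure; the `Port` class here is purely illustrative:

```python
# Illustrative model of TC-to-VC mapping: software programs a map from the
# eight Traffic Classes onto the VC buffers a port actually implements.

from collections import deque

class Port:
    def __init__(self, tc_vc_map):
        # tc_vc_map[tc] -> VC index; one FIFO buffer per implemented VC
        self.tc_vc_map = tc_vc_map
        self.vc_buffers = {vc: deque() for vc in set(tc_vc_map.values())}

    def enqueue(self, packet):
        """Place a packet into the VC buffer its TC maps to; return the VC."""
        vc = self.tc_vc_map[packet["tc"]]
        self.vc_buffers[vc].append(packet)
        return vc

# Example: TC0-TC3 share VC0, while TC4-TC7 get a higher-priority VC1.
port = Port({0: 0, 1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1, 7: 1})
print(port.enqueue({"tc": 6, "data": "video frame"}))  # lands in VC1
```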

3. VC and Port Arbitration Because a single port now has multiple VC buffers that might all contain packets waiting to transmit at the same time, the hardware uses arbitration logic to decide which VC goes next. By favoring the VC buffer holding the highest-TC packets, time-sensitive data is pushed to the front of the line. In addition, PCIe Switches must perform “Port Arbitration” to decide between competing input ports that are all trying to route data into the same output port.
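One arbitration scheme the specification defines alongside strict priority is weighted round-robin, where each VC is granted transmit slots in proportion to a programmed weight. The sketch below is a simplified software model of that idea, not a description of actual gate-level behavior:

```python
# Simplified weighted round-robin VC arbitration: the arbiter cycles
# through a phase table in which higher-priority VCs hold more slots,
# guaranteeing them a larger share of the link bandwidth.

from collections import deque
from itertools import cycle

def wrr(vc_buffers, phase_table, max_packets):
    """Drain up to max_packets following the phase table, skipping empty VCs."""
    out = []
    for vc in cycle(phase_table):
        if len(out) == max_packets or not any(vc_buffers.values()):
            break
        if vc_buffers[vc]:
            out.append(vc_buffers[vc].popleft())
    return out

buffers = {0: deque(["s1", "s2", "s3"]), 1: deque(["v1", "v2"])}
# VC1 gets 3 of every 4 slots, so the video stream keeps its bandwidth
# while the storage traffic still makes forward progress (no starvation).
print(wrr(buffers, phase_table=[1, 1, 1, 0], max_packets=5))
# ['v1', 'v2', 's1', 's2', 's3']
```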

4. Relaxed Transaction Ordering This QoS system fundamentally alters how strict transaction ordering works. To prevent deadlocks, packets inside a single VC buffer generally must flow through in the order in which they arrived. However, because packets with different TCs can be mapped into completely separate VCs, the ordering rules impose no relationship between packets carrying different TCs. This deliberate separation is what physically allows a high-priority video packet to bypass a massive queue of lower-priority packets within the switch fabric.
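A toy trace makes the ordering rule visible: packets keep their arrival order within a VC, while a packet in a separate, higher-priority VC overtakes them entirely. This model assumes strict-priority arbitration purely for illustration:

```python
# Toy illustration of the ordering rule: "A", "B", "C" arrive on VC0 and
# must stay in order relative to each other, but "V" (a different TC,
# mapped to VC1) has no ordering relationship to them and overtakes.

from collections import deque

arrivals = [("A", 0), ("B", 0), ("V", 1), ("C", 0)]  # (name, vc) in arrival order
vcs = {0: deque(), 1: deque()}
for name, vc in arrivals:
    vcs[vc].append(name)

transmit = []
while any(vcs.values()):
    vc = 1 if vcs[1] else 0          # strict priority: drain VC1 first
    transmit.append(vcs[vc].popleft())

print(transmit)  # ['V', 'A', 'B', 'C'] - V overtakes; A, B, C keep FIFO order
```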
