Plus, what's in store for the future
PCIe 5.0 motherboards are only now starting to ship to customers, but that’s not slowing down the development of this crucial peripheral connection standard. PCIe 6.0 is already on the table, with concrete improvements over the current cutting-edge standard.
Since PCIe is becoming fundamental in computers of all shapes and sizes, it’s worth talking about what PCIe is, what it’s used for, and what the new PCIe 6.0 will offer in the future.
The Basics of PCIe
PCIe is short for Peripheral Component Interconnect Express. Some of our readers who’ve been around computers for a while might remember the old PCI standard, but PCIe is to the original PCI standard as a fighter jet is to a paper airplane.
PCIe is both a protocol and a physical hardware connection standard. The most common PCIe hardware connection standard is the motherboard expansion slot. You connect expansion cards to these slots, and communication happens over the connecting pins. However, it’s possible to send PCIe protocol signals over other types of connections.
NVMe SSDs using the M.2 connector can use PCIe, and to the computer they look no different from an SSD connected through a standard PCIe slot. The Thunderbolt 3 and 4 standards also support sending PCIe signals over a cable. This is how eGPUs (external graphics cards) are possible.
PCIe devices send data in a serial fashion but across multiple parallel lanes. An x16 PCIe slot on a computer’s motherboard can accommodate sixteen data lanes at once. PCIe also offers x8, x4, and x1 slots. In general, graphics cards use the x16 slot because they need as much bandwidth as possible. While slower slots are usually physically shorter, it’s common for x16-length slots other than the primary one to be wired for only eight lanes.
PCIe cards offer backward compatibility and cross-compatibility, so you can stick an x4 card in any PCIe slot that will physically accommodate it. It’s just that you’ll waste any PCIe lanes the x4 card doesn’t use. The same goes for using a PCIe 5.0 card in, for example, a 4.0 slot. It will work but be limited to the lowest common denominator.
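If you’re curious how that negotiation shakes out, here’s a minimal Python sketch of the “lowest common denominator” rule. The function and its names are our own illustration; only the per-lane transfer rates are the published figures for each generation.

```python
# Illustrative sketch: a PCIe link runs at the highest generation and lane
# count that BOTH the card and the slot support. The per-lane rates below
# are the nominal GT/s figures for each PCIe generation.

PER_LANE_GT_S = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0, 6: 64.0}

def negotiated_link(card_gen, card_lanes, slot_gen, slot_lanes):
    gen = min(card_gen, slot_gen)       # lowest common generation
    lanes = min(card_lanes, slot_lanes) # lowest common lane count
    return gen, lanes, PER_LANE_GT_S[gen] * lanes

# A PCIe 5.0 x4 SSD in a PCIe 4.0 x16 slot links at 4.0 x4:
print(negotiated_link(card_gen=5, card_lanes=4, slot_gen=4, slot_lanes=16))
# -> (4, 4, 64.0)  i.e. 64 GT/s of raw signaling across the whole link
```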
Who Decides on the PCIe Standard?
The PCI Express standard is designed and approved by the PCI Special Interest Group (PCI-SIG), a consortium with members from the electronics and computer industry with a vested interest in the technology.
PCI-SIG was founded in 1992 as a group tasked with helping computer manufacturers correctly implement the Intel PCI standard. Today it’s a nonprofit organization with over 800 members.
The PCI-SIG board includes AMD, ARM, Dell, IBM, Intel, Nvidia, Qualcomm, and others. You might recognize these names as major computing device manufacturers, and having a shared standard makes their work much easier, not to mention the lives of their customers!
What Is PCIe Used For?
We’ve already mentioned expansion cards and SSDs above, so you’ve probably got a general idea of PCIe’s uses.
The PCIe standard connects just about any peripheral device you can imagine. It offers much higher bandwidth than USB, especially when multiple lanes come into play. PCIe also provides a direct path to the CPU, making it perfect for high-speed, low-latency applications.
Modern GPUs use sixteen lanes of PCIe bandwidth to maximize their performance, but not every peripheral needs that much bandwidth. The latest PCIe 4.0 SSDs use “only” four lanes, but that’s enough to blow the SATA standard clear out of the water. While SATA tops out at 600 MB/s, high-end PCIe 4.0 drives can move more than 7000 MB/s.
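If you want to see where those numbers come from, here’s a quick back-of-the-envelope calculation in Python. It only models the theoretical link ceilings using the published line rates and encoding overheads, so treat it as a rough illustration rather than a benchmark.

```python
# Back-of-the-envelope comparison of SATA III vs a PCIe 4.0 x4 link.
# These are theoretical ceilings, not real-world drive speeds.

# SATA III: 6 Gb/s line rate with 8b/10b encoding (80% efficiency)
sata_mb_s = 6e9 * (8 / 10) / 8 / 1e6                # ~600 MB/s

# PCIe 4.0: 16 GT/s per lane with 128b/130b encoding, four lanes
pcie4_x4_mb_s = 16e9 * (128 / 130) / 8 * 4 / 1e6    # ~7,877 MB/s

print(f"SATA III ceiling:    {sata_mb_s:,.0f} MB/s")
print(f"PCIe 4.0 x4 ceiling: {pcie4_x4_mb_s:,.0f} MB/s")
```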
PCIe expansion cards also accommodate sound cards, video capture cards, 10Gb Ethernet adapters, WiFi 6 cards, Thunderbolt or USB controllers, and more. Peripherals that are integrated into your computer’s motherboard also use PCI Express. It’s just that the wiring is permanent and not in the form of a slot.
How Does PCIe 6.0 Improve on PCIe 5.0?
The headline improvement is usually a big leap in the data rate with every PCIe revision. That’s the amount of information that can be moved across the bus each second.
In that department, PCIe 6.0 does not disappoint. It fully doubles the already tremendous data transfer rate of PCIe 5.0, from 32 Gigatransfers per second (GT/s) to 64 GT/s per lane. Whereas PCIe 5.0 could shift 63 Gigabytes per second (GB/s), 6.0 can move up to 128 GB/s. That’s over an x16 connection, with narrower connections scaling down proportionally. It means an x8 PCIe 6.0 slot now offers as much bandwidth as an x16 5.0 slot.
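Here’s a rough Python sketch of that scaling, ignoring encoding and protocol overhead, just to show how the per-generation doubling and the lane count interact.

```python
# Approximate throughput by PCIe generation and lane count, ignoring
# encoding/FEC overhead. GT/s values are the published per-lane rates.

GT_PER_LANE = {3: 8, 4: 16, 5: 32, 6: 64}

def approx_gb_s(gen, lanes):
    # Treat each transfer as carrying roughly one bit per lane;
    # real links lose a little to overhead.
    return GT_PER_LANE[gen] * lanes / 8

print(approx_gb_s(5, 16))  # ~64 GB/s for PCIe 5.0 x16
print(approx_gb_s(6, 16))  # ~128 GB/s for PCIe 6.0 x16
print(approx_gb_s(6, 8))   # ~64 GB/s: an x8 6.0 link matches x16 5.0
```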
This creates plenty of headroom for future GPUs and ultra-fast storage solutions. Not to mention incredible scope for external devices connected via PCIe or expansion cards that offer Thunderbolt and USB 4.
New Features in PCI Express 6.0
Making such a monumental performance leap in a single generation wasn’t easy. To achieve these numbers, the PCI-SIG engineers had to develop a few innovative new ways to move electrons around.
PAM4 Signaling
Quite possibly, the most significant change with PCIe 6.0 compared to previous generations of the interface is how data is encoded.
PCI Express 6.0 uses PAM4, which is short for Pulse Amplitude Modulation with four levels. If you know anything about electrical waveforms, you’ll know that the “amplitude” of the wave is how far the wave’s crest is from the baseline.
Older PCIe generations used NRZ (non-return-to-zero) signaling, which has only two amplitude levels and so encodes one bit per pulse during a clock cycle. PCIe 6.0 doubles that to four levels, encoding two bits per pulse and doubling the amount of data moved with each cycle.
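To make that concrete, here’s a toy Python sketch of the difference. The voltage levels are arbitrary placeholders and the Gray-coded bit mapping is just one common PAM4 convention, but it shows why four levels carry twice the bits per symbol.

```python
# Toy illustration of NRZ vs PAM4 symbol mapping. NRZ has two levels and
# carries 1 bit per symbol; PAM4 has four levels and carries 2 bits per
# symbol. Level values here are placeholders, not real PCIe voltages.

NRZ_LEVELS = {"0": 0.0, "1": 1.0}

# PAM4 commonly uses Gray coding so adjacent levels differ by one bit
PAM4_LEVELS = {"00": 0.0, "01": 1/3, "11": 2/3, "10": 1.0}

def pam4_encode(bits):
    # Consume the bit string two bits at a time
    return [PAM4_LEVELS[bits[i:i + 2]] for i in range(0, len(bits), 2)]

print(pam4_encode("00011110"))  # 8 bits -> only 4 symbols on the wire
```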
Forward Error Correction (FEC)
While the PAM4 encoding method provides a significant boost to speeds, it also provides a big boost to bit errors. In other words, a one arrives at its destination instead of a zero, and vice versa.
To combat this, PCIe 6.0 has a new Forward Error Correction feature, which checks to make sure data is getting where it should go without getting corrupted, with the help of a robust CRC (Cyclic Redundancy Check) implementation.
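As a rough illustration of the error-detection half of that scheme, here’s a short Python sketch using a standard CRC. It only detects corruption; the real forward error correction in PCIe 6.0 can also repair small errors on the fly, which isn’t modeled here.

```python
import zlib

# Simplified sketch: the sender attaches a CRC to each block, and the
# receiver recomputes it to detect corruption in transit.

def send(payload: bytes) -> bytes:
    crc = zlib.crc32(payload).to_bytes(4, "big")
    return payload + crc

def receive(frame: bytes) -> bytes:
    payload, crc = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != crc:
        raise ValueError("CRC mismatch: block corrupted in transit")
    return payload

frame = send(b"pci express payload")
print(receive(frame))              # passes the check

corrupted = bytearray(frame)
corrupted[0] ^= 0x01               # flip a single bit in transit
try:
    receive(bytes(corrupted))
except ValueError as err:
    print(err)                     # CRC mismatch detected
```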
One danger of adding more error correction steps into the pipeline is that you’ll add more latency. Additional latency has been a growing concern with various high-speed computer components. Although they can shift more and more data, they take longer to react to a request for data, which can cause issues of its own.
FEC is designed to add no more than two nanoseconds of latency compared to previous versions of PCIe, an amount of extra delay no human could ever notice.
FLIT Mode
FLIT mode was another measure introduced to improve error correction in PCIe 6.0. It organizes data into fixed-size units called flits (flow control units). Uniform sizes make error checking practical: you can apply the same algorithm to each data packet and confirm it still gives the same result when it reaches the other end of the pipeline.
The thing is, it turns out that FLIT mode also brings significant efficiency gains in other places. It helps lower latency, makes bandwidth usage more efficient, and lets PCIe 6.0 do away with much of the encoding overhead from previous versions. So although FEC adds up to 2ns of latency, FLIT mode saves on latency in other areas.
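Here’s a small Python sketch of the basic idea behind fixed-size flits. The 256-byte size matches the figure commonly cited for PCIe 6.0 flits, but the code is purely illustrative.

```python
# Minimal sketch of FLIT mode's core idea: data moves in fixed-size units,
# so the error-check logic always operates on blocks of a known size.

FLIT_SIZE = 256  # bytes; the size commonly cited for PCIe 6.0 flits

def into_flits(data: bytes, flit_size: int = FLIT_SIZE):
    flits = []
    for offset in range(0, len(data), flit_size):
        chunk = data[offset:offset + flit_size]
        # Pad the final chunk so every flit has a uniform size
        flits.append(chunk.ljust(flit_size, b"\x00"))
    return flits

payload = bytes(1000)              # 1,000 bytes of example data
flits = into_flits(payload)
print(len(flits), len(flits[0]))   # 4 flits, each exactly 256 bytes
```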
L0p Mode
One interesting feature in PCIe 6.0 is L0p mode. This mode reduces the number of lanes a peripheral uses to send and receive data. So if your laptop is running on battery power and the GPU doesn’t need 16 lanes to do its current job, it will drop down to using only the number of lanes it needs, improving power efficiency.
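Conceptually, L0p works something like the Python sketch below: the link keeps only as many lanes active as the current workload needs. The thresholds and function here are invented purely for illustration.

```python
# Conceptual sketch of what L0p enables: scale the number of active lanes
# to the bandwidth actually required instead of keeping all 16 powered up.

PCIE6_GB_S_PER_LANE = 64 / 8    # ~8 GB/s per lane, ignoring overhead

def lanes_needed(required_gb_s: float, max_lanes: int = 16) -> int:
    lanes = 1
    while lanes < max_lanes and lanes * PCIE6_GB_S_PER_LANE < required_gb_s:
        lanes *= 2                # PCIe link widths come in powers of two
    return lanes

print(lanes_needed(5))    # light desktop work -> 1 lane active
print(lanes_needed(30))   # moderate load      -> 4 lanes active
print(lanes_needed(100))  # full-tilt gaming   -> 16 lanes active
```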
Should You Wait for PCIe 6.0?
If you’re thinking about buying or building a new computer soon, should you wait for PCIe 6.0 motherboards to come out first? It’s always tempting to try and build a futureproof computer. What if a new GPU or SSD comes out that needs PCIe 6.0 to reach its full potential?
The short answer to this question is that you don’t have to worry about waiting for PCIe 6.0. At the time of writing, PCIe 5.0 motherboards have only started rolling out to consumers, and even the most high-end current GPUs are nowhere near needing PCIe 5.0.
In benchmarks comparing flagship cards like the RTX 3080 or RTX 3090 running on PCIe 3.0 and 4.0, the difference in performance was somewhere between nothing and 3%. Yes, that’s right. We are only now reaching the limits of PCIe 3.0, and that’s only with the most expensive GPUs on the planet. Don’t sweat it—at least not for a few years.
Remember that PCI-SIG has only just published the final PCIe 6.0 specification on paper. While the final specification won’t change, it will be some time before we see much hardware that supports it, at least in the consumer space.
PCIe 6.0 Benefits Data Centers Today
That’s not to say PCIe 6.0 isn’t beneficial to anyone already. In the giant data centers we all rely on for cloud-based services, every extra bit of bandwidth is precious. Inside those racks of computers, you’ll find systems with dozens or hundreds of CPU cores and arrays of high-speed SSD storage. The improvements in PCIe bandwidth will immediately help take the pressure off those straining data pipes.
Having so much more bandwidth means that AI and machine learning applications can analyze more data in less time. It also means that HPC (High-Performance Computing) applications doing complex work in science, engineering, and physics can broaden their horizons.
Even IoT (Internet of Things) systems that send a flood of data to data centers to process in real-time will benefit massively from the additional bandwidth.
What Comes After PCI Express 6.0?
PCIe technology will be around for a long time unless someone invents a peripheral interconnect technology that’s radically better. Companies like Intel, AMD, and Apple are doing exciting things with related interconnect technologies between the chips inside their processor packages. With CPUs like AMD’s Ryzen and Intel’s Alder Lake stuffed to the gills with CPU cores, they need to move a tremendous amount of data. We’re sure the PCI-SIG can learn a few things from what’s happening inside these processors.