
What is CXL Anyway?

·1567 words·8 mins
Author
Rick Vasquez
CRO
What is CXL - This article is part of a series.
Part 1: This Article

As exciting as starting a new venture is for me personally, the people around me who know me or the other founders are quick to ask: so what is it that your startup is doing? That question poses an interesting challenge, because the answer varies substantially depending on who is asking.

Think of this blog as our answer for the non-technical audience: people who know the tech industry in broad terms and focus on the business aspects of technology, but don't quite know the intricate details of specs, devices, implementations, and so on.

The TL;DR Nutshell

At the highest of levels, Compute Express Link, or CXL, is a new way for computers and their discrete components to communicate. I do mean communicate in the broadest sense imaginable, whether that is inside a single computer, between two computers, or across a fabric composed of many computers, accelerators, memory, and storage devices.

To continue down the path of CXL, I think it's important to understand how we got here in the first place, and why CXL is even necessary as the next paradigm of compute. So we will begin where all enterprise computing stories do: at the mainframe.

The Mainframe Age 1950-1970s

A mainframe was (and still is) a really big computer, oftentimes taking up whole rooms or floors of a building. A quick way to distinguish mainframes using today's data sizing and capacities would be to say a mainframe works at the exabyte-to-zettabyte level while a normal computer works at the terabyte-to-petabyte level. The entire machine had access to all the resources within it, like one very big system, but unlike how we think of a personal computer or a server, this big computer could run multiple virtual computers inside of it as if they were distinct systems, even though they shared the same underlying hardware.

Now some readers may think, aha! I've heard of VMs! Mainframes distinguish themselves from other forms of virtualization by how tightly coupled the hardware is, even though in theory there are "distinct" systems running "distinct" workloads within the mainframe. Components pass data back and forth over purer electrical signaling rather than protocol-aware logical sharing, skipping much of the virtualization involved in connecting separate machines together. Another key differentiator is that mainframes are designed to be mission critical, meaning you can add and remove components, capacity, and capability without having to turn the machine off. Because of this, they can far exceed the capabilities of any single discrete computer, both in compute availability and in input/output and storage capability.

Mainframes sound awesome, so why did we need to get away from them? They are incredibly expensive, complex, and specialized. Over time, even the most critical workloads didn't need the scale of a mainframe and could reasonably run on a discrete system called a server.

The Server Age 1980-1990s

Servers are just enterprise-grade, large-scale versions of consumer computers. A great example in today's landscape: the absolute best enthusiast-class computer has around 16 cores of compute, while servers today ship with upwards of 256 cores or even more per server. Servers can still talk to other servers and have expansion capabilities built in, but not to the same degree as mainframes, and not until very recently were they on par with the level of configuration that was available from mainframes in the 1950s.

Servers talk to each other through something called a network. Consumer computers have networks too: connecting over Wi-Fi or plugging an Ethernet cable into your home router puts computers on a network. Comparing consumer and enterprise networks is much like comparing a neighborhood road to a freeway. The speed limits are much higher, the freeway can fit more cars, and slightly different laws apply, but you still have to drive on the road in some type of vehicle.

Servers were great. They offered a smaller chunk of a powerful computer, but were still large enough to handle big, important workloads when networked together. As time went on, servers started to have the same problem as mainframes: they were incredibly expensive, complex, and specialized.

The Workstation/PC Age 1995-2000s

Enter the workstation, or in the consumer space, the personal computer. This was a computer that could still be networked with other computers, and even communicate with big servers to crunch the most important data at the right scale! These much smaller computers could be customized for much smaller use cases and were thought of as far more general purpose, with the specialization coming from the peripherals you attached to them. This is where we start to introduce pluggable hardware such as USB, PCI, PCI Express, and so on. These pluggable devices could let you turn a general-purpose computer into a rendering machine, a gaming machine, a storage machine, and more. This was great: it allowed for optionality at a small scale, while maintaining a connection to other larger computers or servers through the network.

Ubiquitous Network AKA THE INTERNET

At this point I think you can see a pattern starting to emerge. Computers needed to be more and more customizable, and the more flexibility you introduced at a smaller scale, the more reliant you became on the network to have all of these various computers talk to each other. The network became ubiquitous, and the internet was born. Computers continued to evolve, getting faster according to Moore's law and gaining storage and memory density along the way. Then virtualization was introduced at the server, workstation, and eventually consumer PC level. This meant we could now run little mini computers, just like a mainframe does, on our much smaller computers, and those could all talk to one another over this incredible network called the internet.

The Early Cloud 2000s - 2015

This is where the cloud was born. Why buy computers when you can rent a small computer from someone else, one that can talk to any other computer in the world? This was incredible, because up to this point the end user and the owner of the computer had been the same company or person. The cloud introduced something entirely different: what if I could run my workloads on computers someone else owned, for a fraction of the cost, and only for as long as I needed them? Once the ownership and deployment boundaries started to diverge, significant differences emerged between what the hardware could be optimized for and what the end user was actually using the discrete units of compute for.

Cloud Native 2016 - Now

We have now entered the modern era, where applications don't live on mainframes but on a series of very small virtualized computers that are networked together. Each of these computers has different resource demands and required capabilities, and the physical hardware is disaggregated from the logical compute. An app, database, or program could need a specific number of cores and a specific amount of memory, storage, and network capability. We have come full circle back to the mainframe, but instead of one gigantic system taking up an entire room we have datacenters and availability zones made up of servers all networked together. This enables workloads to live in a "serverless" state, even though the underlying hardware still relies on servers and on the idea of distributed computing leveraging sophisticated network topologies.

The CXL Era - Future to 2030

Enter CXL. What if I told you that instead of speaking to each other through a sophisticated network topology as we know and understand it today, computers could leverage raw electrical signaling outside of an individual machine to talk to peripherals, memory, and storage, much like a mainframe does, while maintaining the same advantages of unit scaling that servers and virtualization gave us? That is the paradigm we are entering in the CXL world.

In a world of disaggregated resources that can be provisioned, deprovisioned, scaled up or down, or upgraded on the fly, CXL will be the enabling technology. Built on top of PCI Express signaling and following a specification born out of decades of trial and error by some of the largest and leading hardware vendors, CXL aims to enable workloads that previously could not exist due to the limitations of both the physical and logical partitions that existed outside of the mainframe. Back to our freeway analogy: CXL effectively enables shared transportation down the same highway, as opposed to everyone having to have their own vehicle. No matter how small or efficient each vehicle is, it's almost always more effective to share a single vehicle with multiple seats traveling in the same direction.
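
For readers who do want a peek under the hood, one way this already surfaces today is that Linux typically exposes a CXL memory expander as a CPU-less NUMA node, which ordinary software can target with standard NUMA APIs. The sketch below is a minimal illustration using libnuma; the node number is an assumption for this example, and a real deployment would discover it from the system topology.

```c
/* Minimal sketch: allocating from a (hypothetically) CXL-backed NUMA node.
 * Assumes the expander appears as NUMA node 1; build with: gcc demo.c -lnuma */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    int cxl_node = 1;               /* assumed node number for the CXL expander */
    size_t size = 64UL * 1024 * 1024; /* 64 MiB */

    /* Ask the kernel to back this allocation with memory on the chosen node. */
    void *buf = numa_alloc_onnode(size, cxl_node);
    if (buf == NULL) {
        fprintf(stderr, "allocation on node %d failed\n", cxl_node);
        return 1;
    }

    /* The application just sees ordinary memory. */
    memset(buf, 0, size);
    printf("placed %zu bytes on NUMA node %d\n", size, cxl_node);

    numa_free(buf, size);
    return 0;
}
```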

So What Does All This Mean?

One of the predominant use cases of CXL will be enabling physically discrete computers to securely share the memory of a logical application. This enables a true shift away from single points of failure, or from relying on raw replication of data across failure domains, and it allows each component in a computing workload to be scaled independently of the others while maintaining the ability to share and pass data at speeds rivaling internal components and peripherals connected to the PCI Express bus.
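
As a hedged illustration of the single-host building block, Linux can also expose CXL-attached memory as a DAX character device that a process maps directly into its own address space. The device path /dev/dax0.0 and the mapping size below are assumptions for this sketch, and the cross-host sharing described above additionally relies on CXL fabric and coherence features that a simple mmap does not capture.

```c
/* Minimal sketch: mapping CXL-attached memory exposed as a DAX device.
 * The path /dev/dax0.0 and the 2 MiB size are assumptions; real systems vary. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const char *path = "/dev/dax0.0";   /* assumed device node */
    size_t size = 2UL * 1024 * 1024;    /* 2 MiB, a typical DAX alignment */

    int fd = open(path, O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Map the device directly into this process's address space. */
    void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* From here the application uses it like any other memory. */
    strcpy((char *)mem, "hello from CXL-attached memory");
    printf("wrote: %s\n", (char *)mem);

    munmap(mem, size);
    close(fd);
    return 0;
}
```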

Stay tuned for Part II, where we will discuss some basic examples of how this changes what is possible in today's enterprise and datacenter landscape.
