CXL 2.0 FMAPI Demo

Grant Mackey

JRL CXL fabric manager and orchestration tool, the video!

In this video, our CEO Barrett Edwards gives an overview of Jack and CSE in action. This video demonstrates how Jack and CSE can be used to dynamically allocate endpoint devices to a host using a CXL switch. For those that enjoy a good read, the transcript is below the video, enjoy!

Video Transcript

This video is a demonstration of Jack, a CLI tool for CXL fabric management and orchestration, and how it can be used to dynamically allocate endpoint devices to a host using a CXL switch.

To get started today, we will first take a look at a CXL switch diagram shown here on the left.

In the diagram we have multiple hosts on the top connected to the switch, and then multiple endpoint devices connected to the switch on the bottom.

These endpoint devices are DRAM drives, but they could be other forms of accelerators.

The important point to note here is that there are multiple hosts or CPUs that connect to this CXL switch.

So none of the hosts owns the shared hardware connected to the switch, which means there needs to be an external entity that owns and manages the switch and its devices; in CXL this is called a fabric manager.

Jack is a CLI tool that implements the CXL Fabric Management specification.

It issues management commands over MCTP that can be transported to the switch using multiple transport mechanisms such as TCP or VDM.
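As a rough illustration of the transport idea, a fabric-management command is just an opaque payload that can be carried to the switch over a stream transport such as TCP. The sketch below is entirely hypothetical: the 4-byte length framing and the function name are placeholders for illustration, not Jack's actual wire format or the real MCTP binding.

```python
import socket
import struct

def send_fm_command(sock: socket.socket, payload: bytes) -> None:
    """Frame an opaque fabric-management payload with a length prefix
    and send it over a stream socket.

    NOTE: the "!I" length prefix is a placeholder framing for this
    sketch; the real MCTP-over-TCP binding defines its own format.
    """
    sock.sendall(struct.pack("!I", len(payload)) + payload)

# Demonstrate the framing locally with a connected socket pair.
a, b = socket.socketpair()
send_fm_command(a, b"\x01\x02\x03")
received = b.recv(7)
print(received.hex())  # length prefix (00000003) followed by the payload
```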

Jack can run on one of the hosts in this diagram or on a side band management processor such as a BMC that is connected to the switch.

For this demonstration, we will consider a switch with 10 physical ports.

Physical ports zero and nine are upstream ports connected to host CPUs that have a x16 CXL connection.

Physical ports one through eight are configured as downstream ports with eight lanes of CXL that are connected to type 3 MLD memory devices.

So all in total there are 96 lanes, 32 upstream and 64 downstream lanes.
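The lane count above is easy to verify from the demo configuration (two x16 upstream ports and eight x8 downstream ports):

```python
# Demo switch configuration described in the transcript.
upstream_ports = 2      # physical ports 0 and 9, x16 each
downstream_ports = 8    # physical ports 1-8, x8 each

upstream_lanes = upstream_ports * 16
downstream_lanes = downstream_ports * 8
total_lanes = upstream_lanes + downstream_lanes

print(upstream_lanes, downstream_lanes, total_lanes)  # 32 64 96
```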

Since there are two upstream ports, there need to be two virtual CXL switches, or VCSs.

Each VCS is conceptually a list of virtual ports that can be dynamically connected to the physical ports on the CXL switch.

Each VCS here is shown to have eight virtual ports.

One virtual port on each VCS is configured as the upstream port and is bound to the physical port that connects to an upstream device such as a CPU.

The other seven virtual ports on the VCS can then be bound to downstream devices.

So shown here in the diagram we have the CPU attached to physical port zero bound to virtual port zero of the VCS, and then memory devices one through four bound to virtual ports one through four on the same VCS.
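Conceptually, a VCS is just a mapping from virtual ports to physical ports (and, for MLD devices, a logical device within them). A minimal Python sketch of the binding described above follows; the class and field names are illustrative, not Jack's internal data structures.

```python
# Toy model of a virtual CXL switch (VCS):
# each virtual port maps to (physical port, logical device) or is unbound.
class VCS:
    def __init__(self, num_vports: int = 8):
        # None means the virtual port is unbound.
        self.bindings = {v: None for v in range(num_vports)}

    def bind(self, vport: int, phys_port: int, ldid: int = 0) -> None:
        self.bindings[vport] = (phys_port, ldid)

vcs0 = VCS()
vcs0.bind(0, 0)           # virtual port 0 <- physical port 0 (upstream CPU)
for v in range(1, 5):     # virtual ports 1-4 <- memory devices on ports 1-4, LD 0
    vcs0.bind(v, v, ldid=0)

print(vcs0.bindings)
```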

Switching over to the terminal, we can first use Jack to show the status of all the ports connected to the switch with the show ports command.

The output shows that there are 10 physical ports and all of those physical ports are connected to an endpoint device.

Port zero and port nine are configured as upstream ports and are connected to type 1 devices which are the CPUs and have a x16 link.

Ports one through eight are configured as downstream ports and are connected to type 3 MLD memory drives which each have a x8 link.

It is important to note that this is a physical representation of the switch, meaning what devices are plugged into each slot or what ports have a device present.

Nothing in this output shows what devices are actually configured to communicate with each other.

To know what devices can communicate with each other, we first use the show VCS command.

This command shows what physical port and what logical device or partition is bound to a virtual port on a virtual CXL switch or VCS.

In this output we can see that virtual port zero is bound to physical port zero and that port zero is the upstream port connected to the CPU.

Virtual ports one through four are connected to logical device zero on physical ports one through four.

So this text-based representation effectively shows the connectivity displayed in the diagram on the left.

So let’s take a look at VCS #1 on the right side of the diagram.

When we take a look at the output, we can see that none of the virtual ports have been bound to any of the physical ports of the switch, so it is essentially unconfigured.

To configure VCS one on the right, we use the bind command to associate a virtual port on the VCS with a physical port on the switch.

We can then look at the change with the show VCS command.

Again, in this bind command, the unbound CPU attached to port nine was bound to virtual port zero of VCS #1 on the right as the upstream port.

We can also unbind a device from a virtual port on a VCS and re-bind it to the virtual port of another VCS, and after running those commands we can show the state of both virtual switches with the show VCS command.

In these commands the memory device attached to physical port four was unbound from VCS 0 and then bound to virtual port one of VCS 1 on the right.
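The unbind-then-rebind sequence can be expressed with the same kind of toy model used for the binding table (again, the names are illustrative and not Jack's actual internals):

```python
# Toy model of moving a memory device between two virtual CXL switches.
class VCS:
    def __init__(self, num_vports: int = 8):
        self.bindings = {v: None for v in range(num_vports)}

    def bind(self, vport: int, phys_port: int, ldid: int = 0) -> None:
        self.bindings[vport] = (phys_port, ldid)

    def unbind(self, vport: int) -> None:
        self.bindings[vport] = None

vcs0, vcs1 = VCS(), VCS()
vcs0.bind(0, 0)           # upstream CPU on physical port 0
vcs1.bind(0, 9)           # upstream CPU on physical port 9
for v in range(1, 5):
    vcs0.bind(v, v)       # memory devices 1-4 start out in VCS 0

# Move the device on physical port 4: unbind from VCS 0, rebind to VCS 1.
vcs0.unbind(4)
vcs1.bind(1, 4)

print(vcs0.bindings[4], vcs1.bindings[1])  # None (4, 0)
```

The state change mirrors the demo: after the move, the device is invisible to the host behind VCS 0 and accessible only through VCS 1.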

As we can see, device 4 is no longer bound in VCS 0 and its memory is therefore not accessible by the CPU in host #1.

Device 4 now shows up in VCS number one on the right and is accessible by the CPU in host #2.

So bindings are not permanent, they can be changed dynamically.

Now that we have moved device 4 over, we can connect the remainder of the memory devices to VCS number one on the right.

And now with all of those bound, we can show the output with a show VCS command.

After binding all those devices, we can see in the show VCS output that all of the devices are now bound to the VCS and are accessible by the host.

This was a demonstration of how Jack, the CLI tool for CXL fabric management, can be used to dynamically allocate CXL memory devices.