The 1-minute video appearing below shows two 56Gbps, PAM-4 demos from the recent OFC 2017 conference. The first demo shows a CEI-56G-MR (medium-reach, 50cm, chip-to-chip and low-loss backplane) connection between a Xilinx 56Gbps PAM-4 test chip communicating through a QSFP module over a cable to a Credo device. A second PAM-4 demo using CEI-56G-LR (long-reach, 100cm, backplane-style) interconnect shows a Xilinx 56Gbps PAM-4 test chip communicating over a Molex backplane to a Credo device, which is then communicating with a GlobalFoundries device over an FCI backplane, which is then communicating over a TE backplane back to the Xilinx device. This second demo illustrates the growing, multi-company ecosystem supporting PAM-4.
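The appeal of PAM-4 is that it packs two bits into each of four amplitude levels, so a 56Gbps link needs only the 28Gbaud symbol rate of a 28Gbps NRZ link. Here is a minimal sketch of that arithmetic; the Gray-coded level map is illustrative, not the actual mapping used in the Xilinx test chip:

```python
# PAM-4: 2 bits/symbol across 4 amplitude levels.
# Gray coding (adjacent levels differ by one bit) is typical but the exact
# mapping here is an assumption for illustration.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_symbols(bits):
    """Map a flat bit sequence to PAM-4 amplitude levels, 2 bits per symbol."""
    assert len(bits) % 2 == 0
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

bit_rate = 56e9               # 56Gbps line rate
baud_rate = bit_rate / 2      # 2 bits/symbol -> 28Gbaud, same as 28Gbps NRZ
print(pam4_symbols([0, 0, 1, 1, 1, 0]))   # [-3, 1, 3]
print(f"{baud_rate / 1e9:.0f} Gbaud")     # 28 Gbaud
```

That halved symbol rate is why PAM-4 can reuse channels originally built for 28Gbps NRZ signaling.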
For more information about the Xilinx PAM-4 test chip, see “3 Eyes are Better than One for 56Gbps PAM4 Communications: Xilinx silicon goes 56Gbps for future Ethernet,” and “Got 90 seconds to see a 56Gbps demo with an instant 2x upgrade from 28G to 56G backplane? Good!”
Looking for a relatively painless overview of the current state of the art for high-speed Ethernet used in data centers and for telecom? You should take a look at this just-posted, 30-minute video of a panel discussion at OFC 2017 titled “400GE from Hype to Reality.” The panel members included:
Gustlin starts by discussing the history of 400GbE’s development, starting with a study group organized in 2013. Today, the 400GbE spec is at draft 3.1 and the plan is to produce a final standard by December 2017.
Booth answers a very simple question in his talk: “Yes, we will” use 400GbE in the data center. He then proceeds to give a fairly detailed description of the data centers and networking used to create Microsoft’s Azure cloud-computing platform.
Ofelt describes the genesis of the 400GbE standard. Prior to 400G, says Ofelt, system vendors worked with end users (primarily telecom companies) to develop faster Ethernet standards. Once a standard appeared, there would be a deployment ramp. Although 400GbE development started that way, the people building hyperscale data centers sort of took over, and they want to deploy 400GbE at scale, ASAP.
Don’t be fooled by the title of this panel. There’s plenty of discussion about 25GbE through 100GbE and 200GbE as well, so if you need a quick update on high-speed Ethernet’s status, this 30-minute video is for you.
Samtec recorded a demo of its FireFly FQSFP twinax cable assembly carrying four 28Gbps lanes from a Xilinx Virtex UltraScale+ VU9P FPGA on a VCU118 eval board to a QSFP optical cage at the recent OFC 2017 conference in Los Angeles. (The Virtex UltraScale+ VU9P FPGA has 120 GTY transceivers capable of 32.75Gbps operation and the VCU118 eval kit includes the Samtec FireFly daughtercard with cable assembly.) Samtec’s FQSFP assembly plugs mid-board into a FireFly connector on the VCU118 board. The 28Gbps signals then “fly over” the board through to the QSFP cage and loop back over the same path, where they are received back into the FPGA. The demonstration shows 28Gbps performance on all four links with zero bit errors.
As explained in the video, the advantage to using the Samtec FireFly flyover system is that it takes the high-speed 28Gbps signals out of the pcb-design equation, making the pcb easier to design and less expensive to manufacture. Significant savings in pcb manufacturing cost can result for large board designs, which no longer need to deal with signal-integrity issues and controlled-impedance traces for such high-speed routes.
Samtec has now posted the 2-minute video from OFC 2017 on YouTube and here it is:
Note: Martin Rowe recently published a related technical article about the Samtec FireFly system titled "High-speed signals jump over PCB traces" on the EDN.com Web site.
Here’s a 90-second video showing a 56Gbps Xilinx test chip with a 56Gbps PAM4 SerDes transceiver operating with plenty of SI margin and a bit-error rate better than 10⁻¹² over a backplane originally designed for 28Gbps operation.
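To put that error rate in perspective, a quick back-of-envelope calculation shows how rarely an error occurs at full line rate:

```python
# At 56Gbps, a 1e-12 bit-error rate means roughly one bit error every
# 18 seconds of continuous full-rate traffic.
line_rate = 56e9                  # bits per second
ber = 1e-12                       # demonstrated bit-error rate ceiling
errors_per_sec = line_rate * ber  # 0.056 errors per second
seconds_per_error = 1 / errors_per_sec
print(f"~{seconds_per_error:.1f} s between errors")   # ~17.9 s
```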
Note: This working demo employs a Xilinx test chip. The 56Gbps PAM4 SerDes is not yet incorporated into a product. Not yet.
For more information about this test chip, see “3 Eyes are Better than One for 56Gbps PAM4 Communications: Xilinx silicon goes 56Gbps for future Ethernet.”
Today, Xilinx posted information about the new $2995 Kintex UltraScale+ KCU116 Eval Kit on Xilinx.com. If you’re looking to get into the UltraScale+ FPGAs’ GTY transceiver races—which run at speeds to 32.75Gbps—this is a great kit to start with. The kit includes:
Here’s a nice shot of the KCU116 board from the kit’s quickstart guide:
Kintex UltraScale+ KCU116 Eval Board
One of the key features of this board is the set of four SFP+ optical cages there on the left. Those cages handle 25Gbps optical modules, driven of course by four of the KU5P FPGA’s GTY transceivers.
Take a look.
InnoRoute has just started shipping its TrustNode extensible, ultra-low-latency (2.5μsec) IPv6 OpenFlow SDN router as a pcb-level product. The design combines a 1.9GHz, quad-core Intel Atom processor running Linux with a Xilinx FPGA to implement the actual ultra-low-latency router hardware. (You’re not implementing that as a Linux app running on an Atom processor!) The TrustNode Router reference design features twelve GbE ports. Here’s a photo of the TrustNode SDN Router board:
InnoRoute TrustNode SDN Router Board with 12 GbE ports
Based on the pcb layout in the photo, it appears to me that the Xilinx FPGA implementing the 12-port SDN router is under the little black heatsink in the center of the board, nearest to all of the Ethernet ports. The quad-core processor running Linux must be sitting there in the back under the great big silver heatsink with an auxiliary cooling fan, near the processor-associated USB ports and SD card carrier.
InnoRoute’s TrustNode Web page is slightly oblique as to which Xilinx FPGA is used in this design but the description sort of winnows the field. First, the description says that you can customize InnoRoute’s TrustNode router design using the Xilinx Vivado HL Design Suite WebPACK Edition—which you can download at no cost—so we know that the FPGA must be a 28nm series 7 device or newer. Next, the description says that the design uses 134.6k LUTs, 269.2k flip-flops, and 12.8Mbits of BRAM. Finally, we see that the FPGA must be able to handle twelve Gigabit Ethernet ports.
The Xilinx FPGA that best fits this description is an Artix-7 A200.
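The deduction is easy to sanity-check: the resource figures InnoRoute quotes line up with the published totals for the Artix-7 A200. The device totals below are from memory of the Xilinx product tables, so verify them against the official data sheet:

```python
# Resource usage quoted on InnoRoute's TrustNode page vs. Artix-7 A200
# (XC7A200T) totals. The device figures are recalled, not authoritative.
design = {"luts": 134_600, "ffs": 269_200, "bram_kb": 12_800}
artix7_a200 = {"luts": 134_600, "ffs": 269_200, "bram_kb": 13_140}

for resource, needed in design.items():
    available = artix7_a200[resource]
    print(f"{resource}: {needed}/{available} = {needed / available:.0%}")
```

The quoted LUT and flip-flop counts match the A200’s capacity exactly, which is what points so strongly at that device.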
You can use this TrustNode board to jump into the white-box SDN router business immediately, or at least as fast as you can mill and drill an enclosure and screen your name on the front. In fact, InnoRoute has kindly created a nice-looking rendering of a suggested enclosure design for you:
InnoRoute TrustNode SDN Router (rendering)
The router’s implementation as IP in an FPGA along with the InnoRoute documentation and the Vivado tools mean that you can enhance the router’s designs and add your special sauce to break out of the white box. (White Box Plus? White Box Premium? White Box Platinum? Hey, I’m from marketing and I’m here to help.)
This design enhancement and differentiation are what Xilinx All Programmable devices are especially good at delivering. You are not stuck with some ASSP designer’s concept of what your customers need. You can decide. You can differentiate. And you will find that many customers are willing to pay for that differentiation.
Note: Please contact InnoRoute directly for more information on the TrustNode SDN Router.
Next week at OFC 2017 in Los Angeles, Acacia Communications, Optelian, Precise-ITC, Spirent, and Xilinx will present the industry’s first interoperability demo supporting 200/400GbE connectivity over standardized OTN and DWDM. Putting that succinctly, the demo is all about packing more bits/λ, so that you can continue to use existing fiber instead of laying more.
Callite-C4 400GE/OTN Transponder IP from Precise-ITC instantiated in a Xilinx Virtex UltraScale+ VU9P FPGA will map native 200/400GbE traffic—generated by test equipment from Spirent—into 2x100 and 4x100 OTU4-encapsulated signals. The 200GbE and 400GbE standards are still in flux, so instantiating the Precise-ITC transponder IP in an FPGA allows the design to quickly evolve with the standards with no BOM or board changes. Concise translation: faster time to market with much less risk.
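The rate bookkeeping behind that mapping is straightforward. Using the nominal OTU4 line rate from ITU-T G.709 (about 111.81Gbps, carrying one 100GbE client each), a sketch of the math:

```python
# A 400GbE client splits across four OTU4 signals (and 200GbE across two).
# OTU4 nominal line rate is ~111.81Gbps per ITU-T G.709; the extra rate
# over 100Gbps is OTN framing and FEC overhead.
otu4_line_rate = 111.81e9
clients_400g = 400e9 / 100e9          # number of OTU4s needed for 400GbE
total_line_rate = clients_400g * otu4_line_rate
print(clients_400g)                                        # 4.0
print(f"total OTN line rate: {total_line_rate / 1e9:.2f} Gbps")
```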
Callite-C4 400GE/OTN Transponder IP Block Diagram
Optelian’s TMX-2200 200G muxponder, scheduled for release later this year, will muxpond the OTU4 signals into 1x200Gbps or 2x200Gbps DP-16QAM using Acacia Communications’ CFP2-DCO coherent pluggable transceiver.
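DP-16QAM carries eight bits per symbol (four bits per 16QAM symbol, doubled by dual polarization), which is what makes a 200Gbps optical carrier practical at a modest symbol rate. A quick sketch of that arithmetic, ignoring FEC and framing overhead:

```python
# DP-16QAM: 16QAM encodes 4 bits/symbol; dual polarization doubles that.
bits_per_symbol = 4 * 2          # = 8 bits per symbol
net_rate = 200e9                 # 200Gbps payload
baud = net_rate / bits_per_symbol
print(f"{baud / 1e9:.0f} Gbaud")  # 25 Gbaud, before FEC/framing overhead
```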
The Optelian and Precise-ITC exhibit booths at OFC 2017 are 4139 and 4141 respectively.
Next week at the OFC Optical Networking and Communication Conference & Exhibition in Los Angeles, Xilinx will be in the Ethernet Alliance booth demonstrating the industry’s first standards-based, multi-vendor 400GE network. A 400GE MAC and PCS instantiated in a Xilinx Virtex UltraScale+ VU9P FPGA will be driving a Finisar 400GE CFP8 optical module, which in turn will communicate with a Spirent 400G test module over a fiber connection.
In addition, Xilinx will be demonstrating:
If you’re visiting OFC, be sure to stop by the Xilinx booth (#1809).
Berten DSP’s GigaX API for the Xilinx Zynq SoC creates a high-speed, 200Mbps full-duplex communications channel between a GbE port and the Zynq SoC’s PL (programmable logic) through an attached SDRAM buffer and an AXI DMA controller IP block. Here’s a diagram to clear up what’s happening:
The software API implements IP filtering and manages TCP/UDP headers, which help you implement a variety of hardware-accelerated Ethernet systems including Ethernet bridges, programmable network nodes, and network offload appliances. Here’s a performance curve illustrating the kind of throughput you can expect:
Please contact Berten DSP directly for more information about the GigaX API.
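As an aside on what “manages TCP/UDP headers” entails: the fixed 8-byte UDP header layout from RFC 768 is the kind of bookkeeping the API handles so your application code doesn’t have to. This stdlib sketch illustrates that layout only; it is not Berten DSP’s actual API:

```python
import struct

def parse_udp_header(datagram: bytes):
    """Parse the fixed 8-byte UDP header (RFC 768) from a raw datagram."""
    src, dst, length, checksum = struct.unpack("!HHHH", datagram[:8])
    return {"src_port": src, "dst_port": dst,
            "length": length, "checksum": checksum,
            "payload": datagram[8:]}

# Build a datagram the same way a filter would need to read it back.
pkt = struct.pack("!HHHH", 5000, 6000, 8 + 5, 0) + b"hello"
hdr = parse_udp_header(pkt)
print(hdr["src_port"], hdr["dst_port"], hdr["payload"])  # 5000 6000 b'hello'
```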
As the BittWare video below explains, CPUs are simply not able to process 100GbE packet traffic without hardware acceleration. BittWare’s new StreamSleuth, to be formally unveiled at next week’s RSA Conference in San Francisco (Booth S312), adroitly handles blazingly fast packet streams thanks to a hardware assist from an FPGA. And as the subhead in the title slide of the video presentation says, StreamSleuth lets you program its FPGA-based packet-processing engine “without the hassle of FPGA programming.”
(Translation: you don’t need Verilog or VHDL proficiency to get this box working for you. You get all of the FPGA’s high-performance goodness without the bother.)
That said, as BittWare’s Network Products VP & GM Craig Lund explains, this is not an appliance that comes out of the box ready to roll. You need (and want) to customize it. You might want to add packet filters, for example. You might want to actively monitor the traffic. And you definitely want the StreamSleuth to do everything at wire-line speeds, which it can. “But one thing you do not have to do,” says Lund, “is learn how to program an FPGA.” You still get the performance benefits of FPGA technology—without the hassle. That means that a much wider group of network and data-center engineers can take advantage of BittWare’s StreamSleuth.
As Lund explains, “100GbE is a different creature” than prior, slower versions of Ethernet. Servers cannot directly deal with 100GbE traffic and “that’s not going to change any time soon.” The “network pipes” are now getting bigger than the server’s internal “I/O pipes.” This much traffic entering a server this fast clogs the pipes and also causes “cache thrash” in the CPU’s L3 cache.
Sounds bad, doesn’t it?
What you want is to reduce the network traffic of interest down to something a server can look at. To do that, you need filtering. Lots of filtering. Lots of sophisticated filtering. More sophisticated filtering than what’s available in today’s commodity switches and firewall appliances. Ideally, you want a complete implementation of the standard BPF/pcap filter language running at line rate on something really fast, like a packet engine implemented in a highly parallel FPGA.
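For readers who haven’t used it, the BPF/pcap filter language expresses per-packet predicates as strings like "tcp dst port 443" or "udp and net 10.0.0.0/8" (syntax per the pcap-filter man page). The toy matcher below evaluates one such predicate over pre-parsed 5-tuples, purely to illustrate the kind of per-packet decision StreamSleuth’s FPGA engine makes at line rate; it is not BittWare code:

```python
# Equivalent of the pcap filter "tcp dst port 443", evaluated in software
# over pre-parsed (proto, src_ip, src_port, dst_ip, dst_port) tuples.
def match_tcp_dst_port(flow, port):
    proto, src_ip, src_port, dst_ip, dst_port = flow
    return proto == "tcp" and dst_port == port

flows = [("tcp", "10.0.0.1", 51000, "192.0.2.7", 443),
         ("udp", "10.0.0.2", 53,    "192.0.2.7", 53)]
matched = [f for f in flows if match_tcp_dst_port(f, 443)]
print(matched)   # only the first flow passes the filter
```

The point of the FPGA implementation is that this same decision runs on every packet of a 100GbE stream with no CPU involvement.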
The same thing holds true for attack mitigation at 100GbE line rates. Commodity switching hardware isn’t going to do this for you at 100GbE (10GbE yes but 100GbE, “no way”) and you can’t do it in software at these line rates. “The solution is FPGAs” says Lund, and BittWare’s StreamSleuth with FPGA-based packet processing gets you there now.
Software-based defenses cannot withstand Denial of Service (DoS) attacks at 100GbE line rates. FPGA-accelerated packet processing can.
So what’s that FPGA inside of the BittWare StreamSleuth doing? It comes preconfigured for packet filtering, load balancing, and routing. (“That’s a Terabit router in there.”) To go beyond these capabilities, you use the BPF/pcap language to program your requirements into the StreamSleuth’s 100GbE packet processor using a GUI or APIs. That packet processor is implemented with a Xilinx Virtex UltraScale+ VU9P FPGA.
Here’s what the guts of the BittWare StreamSleuth look like:
And here’s a block diagram of the StreamSleuth’s packet processor:
The Virtex UltraScale+ FPGA resides on a BittWare XUPP3R PCIe board. If that rings a bell, perhaps you read about that board here in Xcell Daily last November. (See “BittWare’s UltraScale+ XUPP3R board and Atomic Rules IP run Intel’s DPDK over PCIe Gen3 x16 @ 150Gbps.”)
Finally, here’s the just-released BittWare StreamSleuth video with detailed use models and explanations:
For more information about the StreamSleuth, contact BittWare directly or go see the company’s StreamSleuth demo at next week’s RSA conference. For more information about the packet-processing capabilities of Xilinx All Programmable devices, click here. And for information about the new Xilinx Reconfigurable Acceleration Stack, click here.
Accolade’s newly announced ATLAS-1000 Fully Integrated 1U OEM Application Acceleration Platform pairs a Xilinx Kintex UltraScale KU060 FPGA on its motherboard with an Intel x86 processor on a COM Express module to create a network-security application accelerator. The ATLAS-1000 platform integrates Accolade’s APP (Advanced Packet Processor), instantiated in the Kintex UltraScale FPGA, which delivers acceleration features for line-rate packet processing including lossless packet capture, nanosecond-precision packet timestamping, packet merging, packet filtering, flow classification, and packet steering. The platform accepts four 10G SFP+ or two 40G QSFP pluggable optical modules. Although the ATLAS-1000 is designed as a flow-through security platform, especially for bump-in-the-wire applications, there’s also 1Tbyte worth of on-board local SSD storage.
Accolade Technology's ATLAS-1000 Fully Integrated 1U OEM Application Acceleration Platform
Here’s a block diagram of the ATLAS-1000 platform:
All network traffic enters the FPGA-based APP for packet processing. Packet data is then selectively forwarded to the x86 CPU COM Express module depending on the defined application policy.
Please contact Accolade Technology directly for more information about the ATLAS-1000.
Aquantia has packed its Ethernet PHY—capable of operating at 10Gbps over 100m of Cat 6a cable (or 5Gbps down to 100Mbps over 100m of Cat 5e cable)—with a Xilinx Kintex-7 FPGA, creating a universal Gigabit Ethernet component with extremely broad capabilities. Here’s a block diagram of the new AQLX107 device:
This Aquantia device gives you a space-saving, one-socket solution for a variety of Ethernet designs including controllers, protocol converters, and anything-to-Ethernet bridges.
Please contact Aquantia for more information about this unique Ethernet chip.
Do you have a big job to do? How about a terabit router bristling with optical interconnect? Maybe you need a DSP monster for phased-array radar or sonar. Beamforming for advanced 5G applications using MIMO antennas? Some other high-performance application with mind-blowing processing and I/O requirements?
You need to look at Xilinx Virtex UltraScale+ FPGAs with their massive data-flow and routing capabilities, massive memory bandwidth, and massive I/O bandwidth. These attributes sweep away design challenges caused by performance limits of lesser devices.
Now you can quickly get your hands on a Virtex UltraScale+ Eval Kit so you can immediately start that challenging design work. The new eval kit is the Xilinx VCU118 with an on-board Virtex UltraScale+ VU9P FPGA. Here’s a photo of the board included with the kit:
Xilinx VCU118 Eval Board with Virtex UltraScale+ VU9P FPGA
The VCU118 eval kit’s capabilities spring from the cornucopia of on-chip resources provided by the Virtex UltraScale+ VU9P FPGA including:
If you can’t build what you need with the VCU118’s on-board Virtex UltraScale+ VU9P FPGA—and it’s sort of hard to believe that’s even possible—just remember, there are even larger parts in the Virtex UltraScale+ FPGA family.
Think you can design the lowest-latency network switch on the planet? That’s the challenge of the NetFPGA 2017 design contest. You have until April 13, 2017 to develop a working network switch using the NetFPGA SUME dev board, which is based on a Xilinx Virtex-7 690T FPGA. Contest details are here. (The contest started on November 16.)
Competing designs will be evaluated using OSNT, an Open Source Network Tester, and testbenches will be available online for users to experiment and independently evaluate their design. The competition is open to students of all levels (undergraduate and postgraduate) as well as to non-students. Winners will be announced at the NetFPGA Developers Summit, to be held on Thursday, April 20 through Friday, April 21, 2017 in Cambridge, UK.
Note: There is no need to own a NetFPGA SUME platform to take part in the competition because the competition offers online access to one. However, you may want one for debugging purposes because there’s no online debug access to the online NetFPGA SUME platform. (NetFPGA SUME dev boards are available from Digilent. Click here.)
NetFPGA SUME Board (available from Digilent)
Intel’s DPDK (Data Plane Development Kit) is a set of software libraries that improves packet processing performance on x86 CPU hosts by as much as 10x. According to Intel, its DPDK plays a critical role in SDN and NFV applications. Last week at SC16 in Salt Lake City, BittWare demonstrated Intel’s DPDK running on a Xeon CPU and streaming packets over a PCIe Gen3 x16 interface at an aggregate rate of 150Gbps (transmit + receive) to and from BittWare’s new XUPP3R PCIe board using Atomic Rules’ Arkville DPDK-aware data mover IP instantiated in the 16nm Xilinx Virtex UltraScale+ VU9P FPGA on BittWare’s board. The Arkville DPDK-aware data mover marshals packets between the IP block implemented in the FPGA’s programmable logic and the CPU host's memory using the Intel DPDK API/ABI. Atomic Rules’ Arkville IP plus a high-speed MAC looks like a line-rate-agnostic, bare-bones L2 NIC.
BittWare’s XUPP3R PCIe board with an on-board Xilinx Virtex UltraScale+ VU9P FPGA
Here’s a very short video of BittWare’s VP of Systems & Solutions Ron Huizen explaining his company’s SC16 demo:
Here’s an equally short video made by Atomic Rules with a bit more info:
If this all looks vaguely familiar, perhaps you’re remembering an Xcell Daily post that appeared just last May where BittWare demonstrated an Atomic Rules UDP Offload Engine running on its XUSP3S PCIe board, which is based on a Xilinx Virtex UltraScale VU095 FPGA. (See “BittWare and Atomic Rules demo UDP Offload Engine @ 25 GbE rates; BittWare intros PCIe Networking card for 4x 100 GbE.”) For the new XUPP3R PCIe board, BittWare has now jumped from the 20nm Virtex UltraScale FPGAs to the latest 16nm Virtex UltraScale+ FPGAs.
Today, Xilinx announced four members of a new Virtex UltraScale+ HBM device family that combines high-performance 16nm Virtex UltraScale+ FPGAs with 32 or 64Gbits of HBM (high-bandwidth memory) DRAM in one device. The resulting devices deliver a 20x improvement in memory bandwidth relative to DDR SDRAM—more than enough to keep pace with the needs of 400G Ethernet, multiple 8K digital-video channels, or high-performance hardware acceleration for cloud servers.
These new Virtex UltraScale+ HBM devices are part of the 3rd generation of Xilinx 3D FPGAs, which started with the Virtex-7 2000T that Xilinx started shipping way, way back in 2011. (See “Generation-jumping 2.5D Xilinx Virtex-7 2000T FPGA delivers 1,954,560 logic cells using 6.8 BILLION transistors (PREVIEW!)”) Xilinx co-developed this 3D IC technology with TSMC and the Virtex UltraScale+ HBM devices represent the current, production-proven state of the art.
Here’s a table listing salient features of these four new Virtex UltraScale+ HBM devices:
Each of these devices incorporates 32 or 64Gbits of HBM DRAM with more than 1000 I/O lines connecting each HBM stack through the silicon interposer to the logic device, which contains a hardened HBM memory controller that manages one or two HBM devices. This memory controller has 32 high-performance AXI channels, allowing high-bandwidth interconnect to the Virtex UltraScale+ devices’ programmable logic and access to many routing channels in the FPGA fabric. Any AXI port can access any physical memory location in the HBM devices.
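A back-of-envelope comparison shows where the “20x” bandwidth claim comes from. The figures below are generic HBM and DDR4 numbers, not official Xilinx specifications:

```python
# Generic HBM figures: ~1024 data pins per stack at ~1.8Gbps/pin,
# two stacks per device; compare against one 64-bit DDR4-2400 DIMM.
hbm_pins_per_stack = 1024
hbm_gbps_per_pin = 1.8
hbm_stacks = 2
hbm_bw = hbm_stacks * hbm_pins_per_stack * hbm_gbps_per_pin / 8   # GB/s
ddr4_bw = 2400e6 * 8 / 1e9                                        # GB/s
print(f"HBM: {hbm_bw:.0f} GB/s, DDR4 DIMM: {ddr4_bw:.1f} GB/s, "
      f"ratio ~{hbm_bw / ddr4_bw:.0f}x")    # ~461 vs 19.2 -> ~24x
```

With assumptions in that range, the ratio lands in the low twenties, consistent with the 20x figure quoted above.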
In addition, these Virtex UltraScale+ HBM FPGAs are the first Xilinx devices to offer the new, high-performance CCIX cache-coherent interface announced just last month. (See “CCIX Consortium develops Release1 of its fully cache-coherent interconnect specification, grows to 22 members.”) CCIX simplifies the design of offload accelerators for hyperscale data centers by providing low-latency, high-bandwidth, fully coherent access to server memory. The specification employs a subset of full coherency protocols and is ISA-agnostic, meaning that the specification’s protocols are independent of the attached processors’ architecture and instruction sets. CCIX pairs well with HBM and the new Xilinx UltraScale+ HBM FPGAs provide both in one package.
Here’s an 8-minute video with additional information about the new Virtex UltraScale+ HBM devices:
Yesterday, Aquantia announced its QuantumStream technology, which drives 100Gbps data over direct-attached copper cable through an SFP connector. Aquantia notes that this is not a 4x25Gbps or 2x50Gbps connection; it’s a true, 1-lane, 100Gbps data stream. The technology is based on a 56Gbps SerDes IP core from GLOBALFOUNDRIES implemented in 14nm FinFET technology. Aquantia has added its own magic in the form of its patented Mixed-Mode Signal Processing (MMSP) and Multi-Core Signal Processing (MCSP) architectural innovations.
The company expects this technology will significantly change interconnectivity within data centers for both inter- and intra-rack connectivity with connections up to a few meters in length. Looks pretty cool for top-of-rack switches.
Full disclosure: You’ll find that Xilinx is listed as a strategic investor on Aquantia’s home page.
Accolade’s 3rd-gen, dual-port, 100G ANIC-200Ku PCIe Lossless Packet Capture Adapter can classify packets in 32 million flows simultaneously—enabled by Xilinx UltraScale FPGAs—while dissipating a mere 50W. The board features two CFP4 optical adapter cages and can time-stamp packets with 4nsec precision. You can directly link two ANIC-200Ku Packet Capture Adapters with a direct-attach cable to handle lossless, aggregated traffic flows at 200Gbps.
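Flow classification means assigning every packet to its flow by keying on the 5-tuple (protocol, source/destination IP, source/destination port) and keeping per-flow state. The ANIC-200Ku does this in FPGA hardware for up to 32 million concurrent flows; this software sketch shows only the bookkeeping concept:

```python
# Toy 5-tuple flow table: the FPGA maintains the same kind of keyed
# per-flow state, but in hardware at line rate.
from collections import defaultdict

def flow_key(proto, src_ip, src_port, dst_ip, dst_port):
    return (proto, src_ip, src_port, dst_ip, dst_port)

flows = defaultdict(lambda: {"packets": 0, "bytes": 0})

def classify(pkt):
    stats = flows[flow_key(*pkt["tuple"])]
    stats["packets"] += 1
    stats["bytes"] += pkt["len"]

classify({"tuple": ("tcp", "10.0.0.1", 40000, "192.0.2.1", 80), "len": 1500})
classify({"tuple": ("tcp", "10.0.0.1", 40000, "192.0.2.1", 80), "len": 64})
key = ("tcp", "10.0.0.1", 40000, "192.0.2.1", 80)
print(len(flows), flows[key])   # 1 {'packets': 2, 'bytes': 1564}
```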
Applications for the adapter include:
Accolade’s 3rd-gen, dual-port, 100G ANIC-200Ku PCIe Lossless Packet Capture Adapter
Mellanox has packed IPsec security protocols into its 40GbE ConnectX-4 Lx EN Network Controller by implementing them in the controller’s on-board Xilinx Kintex UltraScale FPGA, thus creating the new Innova IPsec 4 Lx EN Ethernet Adapter Card. The adapter supports multiple encryption and security protocols (AES-GCM and AES-CBC using 128/256-bit keys; SHA-1, SHA-2 with HMAC authentication) and performs the encryption/decryption operations independently from the server’s CPU, thus increasing both performance and security. The on-board Kintex UltraScale FPGA serves as a “bump-on-the-wire” programmable processing offload engine. According to Mellanox, “This approach results in lower latency and additional savings of CPU resources compared to other IPsec protocol implementations, whether through software or alternative accelerators.” Mellanox supports its Innova IPsec adapter with an OFED driver suite as well as an Open Source IPsec stack.
Mellanox Innova IPsec 4 Lx EN 40GbE Adapter Card based on Kintex UltraScale FPGA
This new Mellanox product is an excellent example that illustrates the kind of server CPU offloading made possible when you introduce high-performance programmable hardware into the networking mix in SDN, NFV, and cloud applications.
For more information about the Mellanox Innova IPsec 4 Lx EN Adapter Card, see this Mellanox press release. For more information on the previously announced Mellanox ConnectX-4 Lx EN Network Controller, see “Mellanox ConnectX-4 Lx EN adds local application programming to 10/40GbE NIC by pairing Ethernet ASIC and FPGA.”
Network World reports that the IEEE has ratified IEEE 802.3bz, the standard that defines 2.5GBASE-T and 5GBASE-T Ethernet. That’s a very big deal because these new standards can increase existing 1000BASE-T network line speeds by 5x without the need to upgrade in-place cabling. The new standard’s development is being supported through the collaborative efforts of the Ethernet Alliance and NBASE-T Alliance. (See “Ethernet and NBASE-T Alliances to Host Joint 2.5/5Gbps Plugfest in October.”) Xilinx is a founding member of the NBASE-T Alliance.
For more information about NBASE-T and the NBASE-T alliance, see “NBASE-T aims to boost data center bandwidth and throughput by 5x with existing Cat 5e/6 cable infrastructure” and “12 more companies join NBASE-T alliance for 2.5 and 5Gbps Ethernet standards.”
For additional information about the PHY technology behind NBASE-T, see “Boost data center bandwidth by 5x over Cat 5e and 6 cabling. Ask your doctor if Aquantia’s AQrate is right for you” and “Teeny, tiny, 2nd-generation 1- and 4-port PHYs do 5 and 2.5GBASE-T Ethernet over 100m (and why that’s important).”
The video in this post on the Lightreading.com Web site shows Napatech’s Dan Joe Barry discussing the acceleration provided by his company’s NFV NIC. Briefly, the Napatech NFV NIC reduces CPU loading for NFV applications by a factor of more than 7x relative to conventional Ethernet NICs. That allows one CPU to do the work of nearly eight CPUs, resulting in far lower power consumption for the NFV functions. In addition, the Napatech NFV NIC bumps NIC throughput in NFV applications from the 8Mpackets/sec attainable with conventional Ethernet NICs to the full theoretical throughput of 60Mpackets/sec. The dual-port NFV NIC is designed to support multiple data rates including 8x1Gbps, 4x10Gbps, 8x25Gbps, 2x40Gbps, 2x50Gbps, and 2x100Gbps. All that’s required to upgrade the data rate is downloading a new FPGA image with the correct data rate to the NFV NIC. This allows the same NIC to be used in multiple locations in the network, reducing the variety of products and easing maintenance and operations.
These are substantial benefits in an application where performance/watt is really critical. Further, the Napatech NFV NIC can “extend the lifetime of the NFV NIC and server hardware by allowing capacity, features and capabilities to be extended in line with data growth and new industry solution standards and demands.” The NFV functions implemented by the Napatech NFV NIC can be altered on the fly. Bottom line: the Napatech NFV NIC improves data-center performance and can actually help data-center operators postpone forklift upgrades, which saves even more money and reduces TCO (total cost of ownership).
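The 60Mpackets/sec figure matches the theoretical packet-rate ceiling of a 40GbE port carrying minimum-size frames, which is a useful sanity check:

```python
# Worst-case Ethernet packet rate: 64-byte frames plus 20 bytes of
# per-frame overhead (8B preamble/SFD + 12B inter-frame gap).
line_rate = 40e9                 # one 40GbE port
frame = 64                       # minimum frame size, bytes
overhead = 8 + 12                # preamble/SFD + inter-frame gap, bytes
pps = line_rate / ((frame + overhead) * 8)
print(f"{pps / 1e6:.1f} Mpps")   # ~59.5 Mpps -- the "60Mpackets/sec" figure
```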
Napatech NFV NIC
A quick look at the data sheet for the Napatech NFV NIC on the Napatech Web site confirmed my suspicions about where a lot of this goodness comes from: the card is based on a Xilinx UltraScale FPGA and “can be programmed and re-configured on-the-fly to support specific acceleration functionality. Specific acceleration solutions are delivered as FPGA images that can be downloaded to the NFV NIC to support the given application.”
Oh, by the way, the Napatech Web site says that 40x performance improvements are possible.
Xilinx has joined the non-profit Open Networking Lab (ON.Lab) as a collaborating member of the CORD Project—Central Office Re-architected as a Datacenter, “which combines NFV, SDN, and the elasticity of commodity clouds to bring datacenter economics and cloud agility to the Telco Central Office”—along with a rather long list of major telecom CORD partners including:
CORD aims to produce reference implementations for the industry built using commodity servers, white-box switches, disaggregated access technologies (including vOLT, vBBU, vDOCSIS), and open-source software (including OpenStack, ONOS, XOS) for the residential, enterprise, and mobile markets (R-CORD, E-CORD, and M-CORD).
Xilinx joined with the intent of becoming actively engaged in the CORD Project and has contributed a proposal for FPGA-based Acceleration-as-a-Service for cloud servers and virtualized RAN servers in the M-CORD activity focused on the mobile market. The CORD Technical Steering Team has already reviewed and approved this proposal.
The Xilinx proposal for FPGA-based Acceleration-as-a-Service is based on the company’s UltraScale and UltraScale+ All Programmable devices used, for example, to implement flexible SmartNICs (network interface cards) and employs the partial-reconfiguration capabilities of these devices to allow SDN and NFV operating systems to discover and dynamically allocate FPGA resources to accelerate various functions and services on demand. This proposal will allow SDN and NFV equipment to exploit the superior performance/watt capabilities of hardware-programmable devices in myriad application-processing scenarios.
The VITA49 Radio Transport standard defines digitized data formats and metadata formats to create an interoperability framework for SDRs (software-defined radios) from different manufacturers. Epiq Solutions’ 4-channel Quadratiq RF receiver supports a unidirectional VITA49 UDP data stream with its four receiver paths and dual 10GbE interface ports.
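The interoperability comes from VITA 49’s fixed packet header, which every compliant SDR can decode regardless of vendor. The sketch below decodes the first 32-bit header word; the field layout follows my reading of VITA 49.0, so verify bit positions against the standard itself before relying on them:

```python
def parse_vrt_header(word0: int):
    """Decode the first 32-bit word of a VITA 49 (VRT) packet.
    Field positions per VITA 49.0 -- check against the standard."""
    return {
        "packet_type":  (word0 >> 28) & 0xF,
        "class_id":     bool((word0 >> 27) & 1),
        "trailer":      bool((word0 >> 26) & 1),
        "tsi":          (word0 >> 22) & 0x3,  # integer-seconds timestamp type
        "tsf":          (word0 >> 20) & 0x3,  # fractional timestamp type
        "packet_count": (word0 >> 16) & 0xF,  # modulo-16 sequence number
        "packet_size":  word0 & 0xFFFF,       # total size in 32-bit words
    }

# An IF-data packet (type 1) with TSI=UTC, TSF=sample count, 128 words long
word0 = (1 << 28) | (1 << 22) | (1 << 20) | (5 << 16) | 128
hdr = parse_vrt_header(word0)
print(hdr["packet_type"], hdr["packet_count"], hdr["packet_size"])  # 1 5 128
```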
Epiq Solutions 4-channel Quadratiq VITA49 RF receiver
The Quadratiq receiver is based on a Xilinx Zynq Z-7030 SoC (or an optional Zynq Z-7045 SoC). Here’s a block diagram:
As you can see from the block diagram, the digital part of the Quadratiq’s design fits entirely into the Zynq SoC, with companion RAM and an SD card to store processor code and FPGA configuration. The Zynq SoC provides the processors, implements the proprietary digital IP, and implements the system’s digital I/O. This sort of system design is increasingly common when using the Zynq SoC in embedded applications like the Quadratiq RF receiver. Add an RF card, a precise clock, and a power supply and you’re good to go. The entire system consumes a mere 18W.
There are all sorts of really remote applications needing direct satellite communications including maritime comms, SCADA systems, UAVs, M2M, and IoT. The AHA Products Group in Moscow, Idaho previewed its tiny CM1 compact SatCom modem yesterday at the GNU Radio Conference in Boulder, Colorado. How tiny? It measures 55x100mm and here’s a photo of the board with a US 25-cent piece for size comparison:
AHA’s CM1 DVB-S2X SatCom Modem based on a Xilinx Zynq SoC
In case you’ve not heard of them (I hadn’t), AHA Products Group develops IP cores, boards, and ASICs specifically for communications-systems applications. AHA’s specialties are FEC (forward error correction) and lossless data compression. The company had developed this DVB-S2X modem IP for a specific customer and hosted its IP on an Ettus Research USRP X310 SDR (software-defined radio), which is based on a Xilinx Kintex-7 410T FPGA. The next obvious step was to reduce the cost of the modem and its size, weight, and power consumption for volume-production applications by designing a purpose-built board. AHA was able to take the developed IP and drop it into an appropriate Xilinx Zynq-7000 SoC, which soaked up the IP and provides the Gigabit Ethernet and USB ports as well. The unmarked device in the middle of the board in the above photo is a Zynq SoC.
The AHA CM1 board clearly illustrates how well the Zynq SoC family suits high-performance embedded-processing applications. Add some DRAM and EPROM and you’ve got a compact embedded system with high-performance ARM Cortex-A9 MPCore processors and programmable logic that delivers processing speeds not attainable with software-driven processors alone. In this case, AHA needs that programmable logic to implement the 200Mbits/sec modem IP.
The CM1 SatCom modem board is in development and AHA plans to introduce it early in 2017.
Not to be outdone by DARPA (see “DARPA wants you to win $2M in its Grand Spectrum Collaboration Challenge. Yes, lots of FPGAs are involved”), Matt Ettus—founder of Ettus Research and now a Distinguished Engineer at National Instruments, which purchased Ettus Research and now runs it as a separate division—announced a contest of his own at yesterday’s GNU Radio Conference in Boulder, CO. Ettus Research has developed a product called RFNoC (RF Network on Chip), which “is designed to allow you to efficiently harness the full power of the latest generations of FPGAs [for software-defined radio (SDR) applications] without being an expert firmware developer.” Already popular in the SDR community, the GUI-based RFNoC design tool allows you to “create FPGA applications as easily as you can create GNU Radio flowgraphs.” This includes the ability to seamlessly transfer data between your host PC and an FPGA. It dramatically eases the task of FPGA off-loading in SDR applications.
Here is an example of an RFNoC flowgraph built using the GNU Radio Companion. With four blocks, data is being generated on the host, off-loaded to the FPGA for filtering, and then brought back to the host for plotting:
Ettus’ challenge is called the “RFNoC & Vivado HLS Challenge.” How did Xilinx Vivado HLS get into the contest title? You can use Vivado HLS to develop function blocks for Ettus’ RFNoC in C, C++, or SystemC because Ettus Research bases its USRP SDR products on Xilinx All Programmable devices. At Tuesday’s morning session of the GNU Radio conference, Matt Ettus said that he’s using Vivado HLS himself and considers it a very powerful tool for developing SDR function blocks. He sees this new competition as a great way to rapidly add functional blocks to the RFNoC function library. I’d say he’s right, on both counts.
Here’s how the RFNoC & Vivado HLS Challenge works:
“The competition will take place during the proceedings of the 2017 Virginia Tech Symposium. On the day of the competition, accepted teams will give a presentation and show a demo to a panel of judges made up of representatives from Ettus Research and Xilinx. All teams will be required to send at least one representative to the competition for the presentation. The winners will be announced during the symposium on the conclusion of judging.”
Here are the prizes:
Although the prizes for this competition are somewhat more modest than the $2M first prize in the DARPA competition, the bar’s a whole lot lower.
Contest proposals are due December 31, 2016. More details here.
On the first technical day of the GNU Radio Conference in the Glenn Miller Ballroom on the CU Boulder Campus, DARPA Program Manager Paul Tilghman laid out the latest of the DARPA Grand Challenges: the Spectrum Collaboration Challenge (SC2). DARPA’s SC2 is “an open competition to develop radio networks that can thrive in the existing spectrum without allocations and learn how to adapt across multiple degrees of freedom, collaboratively optimizing the total spectrum capacity moment-to-moment.” DARPA is dangling a $2M top prize ($1M for the runner-up, $750K for third place) before the team that does the best job of meeting this challenge over the next three years. You have about six weeks to sign up for Phase 1 of this DARPA competition.
DARPA created SC2 to dig us (that’s the collective, worldwide “us”) out of the deep, deep, deep radio spectrum hole we’ve been digging for more than 100 years. For the past century, the demand for radio spectrum has grown monotonically and in the past few years, it’s grown at 50% per year.
When Marconi invented the spark-gap transmitter in 1899, one transmitter consumed the entire radio spectrum. That proved to be a problem as soon as there was more than one radio transmitter in the world, so we started using frequency selection to share spectrum the following year. Today, said Tilghman, we’re in the “era of isolation.” We’ve allocated frequencies by use and geography, whether or not a given frequency is actually in use at the moment in a given location, and we currently rely on simple rules or blind sharing to share some of the available spectrum. Here’s what the allocated RF spectrum map looks like today:
This century-old solution to RF spectrum management is no longer adequate. By the year 2030, said Tilghman, the radio spectrum will need to carry a zettabyte of data every month. Things cannot continue as they have. We cannot manufacture more spectrum; that’s an inherent property of the space/time fabric. So we must get smarter at using the spectrum we have.
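To put that projection in perspective, a quick bit of arithmetic (assuming a decimal zettabyte and a 30-day month, both my assumptions) converts “a zettabyte per month” into sustained aggregate throughput:

```python
# What "a zettabyte of data every month" implies as sustained traffic.
zettabyte = 10**21              # bytes (decimal definition, assumed)
seconds_per_month = 30 * 24 * 3600

bytes_per_sec = zettabyte / seconds_per_month
petabits_per_sec = bytes_per_sec * 8 / 10**15
print(round(petabits_per_sec, 1))  # prints 3.1
```

Roughly 3 Pbits/sec of worldwide aggregate radio traffic, around the clock.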
That’s the Grand Challenge.
DARPA seeks to open a new era of RF collaboration through SC2, which seeks to develop autonomous, intelligent, collaborative radio networks built on the following five elements:
Only the first element is a hardware component.
The RF networking design that best carries data and shares the spectrum in this 3-year challenge will win a $2M prize from DARPA. Second place wins $1M and third place wins $750,000.
The rules of the SC2 competition are quite interesting. DARPA wants to further the development of autonomous, collaborative RF networking systems and has standardized the hardware for this challenge using existing SDR (software-defined radio) designs and equipment. DARPA is building a physical “arena” for the competition using FPGA-based hardware from National Instruments (NI) and Ettus Research (an NI subsidiary). The arena resides in a virtual “colosseum” [sic], the world’s largest RF networking testbed. Here’s a diagram:
Sixteen interconnected NI ATCA-3671 FPGA modules create the core colosseum network. Each ATCA-3671 card incorporates four Xilinx Virtex-7 690T FPGAs in a cross-linked configuration resulting in an aggregate data bandwidth of 160Gbytes/sec in and out of each card. These cards create an FPGA-based mesh that permits efficient data movement and aggregation.
Attached to each ATCA-3671 card are eight Ettus Research USRP X310 high-performance SDR modules, which are based on Xilinx Kintex-7 410T FPGAs. The aggregate is 128 2x2 MIMO radio nodes attached to the colosseum network. That’s a 256x256 MIMO network if you’re playing at home.
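The arithmetic behind those node counts is easy to check at home; this sketch uses only the figures quoted in this post:

```python
# Back-of-the-envelope check of the colosseum figures quoted above.
cards = 16                  # NI ATCA-3671 FPGA modules in the core network
fpgas_per_card = 4          # Xilinx Virtex-7 690T FPGAs per card
sdrs_per_card = 8           # Ettus USRP X310 modules attached to each card
antennas_per_sdr = 2        # each X310 is a 2x2 MIMO radio node

radio_nodes = cards * sdrs_per_card              # SDR nodes on the network
mimo_channels = radio_nodes * antennas_per_sdr   # total antennas
total_fpgas = cards * fpgas_per_card             # Virtex-7 FPGAs in the mesh

print(radio_nodes, mimo_channels, total_fpgas)   # prints 128 256 64
```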
The DARPA colosseum provides access control, a scheduling infrastructure, a scoring infrastructure, and automated match initialization. This network can be subdivided, and during competition five teams will compete simultaneously on the shared system. In addition, DARPA has populated the colosseum with incumbent RF networks, which must be protected from any additional interference, and with jammer nodes that intentionally disrupt the spectrum. The winning team will be the one that creates a network that best carries traffic while cooperating with the other competing networks, protects incumbent systems, and overcomes the jamming.
The deadline for initial registration in Phase 1 (the first year) of the 3-year SC2 Grand Challenge is November 1, 2016. That gives you a matter of weeks to assemble your team. Competition and registration details are here.
On Wednesday, September 21 at ECOC 2016 in Düsseldorf, Germany—ECOC (European Conference and Exhibition on Optical Communication) is Europe’s largest conference on optical communication—Xilinx Wired Solutions Architect Faisal Dada will give a presentation titled “Dis-integration of the Data Center Interconnect (DCI) box.” In his talk, Faisal will make the counter-intuitive case that keeping DSP and client logic separate within a DCI box actually maximizes a network engineer’s options and flexibility. He cites pluggable (as opposed to integrated) networking optics as an example where industry norms have already selected modularity over increased integration to maximize networking flexibility and to future-proof the equipment.
Here are the specifics for Faisal’s talk at ECOC:
Location: Market Focus Theater in the ECOC Exhibition Hall
Time and Date: 13:15 to 13:45, Wednesday, September 21
“In the future everything will be attacked. How does one protect a device with limited resources against hardware, firmware, configuration and data hacks during the whole life cycle?”
PFP’s cybersecurity technology takes fine-grained measurements of a processor’s power consumption and detects anomalies using base references from trusted software sources, machine learning, and data analytics. Because it only monitors power consumption, it’s impossible for intruders to detect its presence.
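PFP hasn’t published its algorithms, but the core idea — compare live power measurements against a baseline captured while running trusted software, and flag statistical anomalies — can be sketched in a few lines. Everything here (the per-sample z-score test, the threshold, the toy traces) is an illustrative assumption, not PFP’s actual method:

```python
import statistics

def build_baseline(trusted_traces):
    """Per-sample mean and standard deviation from power traces captured
    while running known-good ("trusted") software."""
    means = [statistics.mean(samples) for samples in zip(*trusted_traces)]
    stdevs = [statistics.stdev(samples) for samples in zip(*trusted_traces)]
    return means, stdevs

def is_anomalous(trace, baseline, z_threshold=4.0):
    """Flag the trace if any sample strays too far from the baseline."""
    means, stdevs = baseline
    for x, mu, sigma in zip(trace, means, stdevs):
        if abs(x - mu) > z_threshold * max(sigma, 1e-9):
            return True
    return False

# Ten "trusted" captures of a 4-sample power trace, then a live trace with
# an injected spike such as an implant's extra computation might cause.
trusted = [[1.0, 2.0, 1.5, 1.0], [1.1, 2.1, 1.4, 0.9]] * 5
baseline = build_baseline(trusted)
print(is_anomalous([1.0, 2.0, 1.5, 1.0], baseline))  # prints False
print(is_anomalous([1.0, 3.5, 1.5, 1.0], baseline))  # prints True
```

A production system would replace the z-score test with the machine-learning and data-analytics models the announcement mentions, but the detector’s relationship to its trusted baseline is the same.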
The original demo of this technology appeared in the Xilinx booth running on a platform based on a Xilinx Zynq SoC at last year’s ARM TechCon. (See “Zynq-based PFP eMonitor brings power-based security monitoring to embedded systems.”)
Here’s a 1-minute video with a very brief overview of PFP’s technology:
This week’s announcement by PFP and Wistron makes PFP’s cybersecurity technology available to Wistron’s customers. Wistron is an ODM (original design manufacturer); it was originally Acer’s manufacturing arm in Taiwan but has been an independent company since the year 2000 with operations in Asia, Europe, and North America. The company currently develops and manufactures a range of electronic products including notebook PCs, desktop PCs, servers and storage systems, LCD TVs, information appliances, handheld devices, networking and communication products, and IoT devices for a variety of clients. Wistron’s customers can outsource some or all of their product development tasks, and this week’s announcement allows Wistron to incorporate PFP’s cybersecurity technology into new designs.
The Ethernet Alliance and NBASE-T Alliance have announced a collaborative effort to accelerate mainstream deployment of 2.5GBASE-T and 5GBASE-T Ethernet, which leverages the last 13 years of infrastructure construction based on Cat5e and Cat6 cabling (some 70 billion meters of cable!). The two organizations plan to validate multi-vendor interoperability at a plugfest scheduled for the week of October 10, 2016 at the University of New Hampshire InterOperability Laboratory (UNH-IOL) in Durham, NH. For more information on the Ethernet Alliance/NBASE-T Alliance plugfest, please contact firstname.lastname@example.org or email@example.com.
Note: Xilinx is a founding member of the NBASE-T Alliance. (See “NBASE-T aims to boost data center bandwidth and throughput by 5x with existing Cat 5e/6 cable infrastructure” and “12 more companies join NBASE-T alliance for 2.5 and 5Gbps Ethernet standards.”)
For additional information about the PHY technology behind NBASE-T, see “Boost data center bandwidth by 5x over Cat 5e and 6 cabling. Ask your doctor if Aquantia’s AQrate is right for you” and “Teeny, tiny, 2nd-generation 1- and 4-port PHYs do 5 and 2.5GBASE-T Ethernet over 100m (and why that’s important).”
VadaTech just announced three new PCIe carrier cards for FMC. Each of the PCIe boards has a VITA-57 FMC connector, and each also carries a pretty capable Xilinx FPGA. The three new cards are:
Today’s press release says that these cards are “ideal for bringing COTS PCIe systems up to date with the latest FPGAs,” and they are. They’re also good for ASIC prototyping/emulation and for building 100G networking gear; the three PCIe carrier cards give you a family of products to use as a broad foundation for a range of end products.
VadaTech PCI595: PCIe FPGA Carrier for FMC, Virtex UltraScale VU440 FPGA