
Exactly a week ago, Xilinx introduced the Zynq UltraScale+ RFSoC family, a new series of Zynq UltraScale+ MPSoCs with RF ADCs, RF DACs, and SD-FECs added. (See “Zynq UltraScale+ RFSoC: All the processing power of 64- and 32-bit ARM cores, programmable logic plus RF ADCs, DACs.”) This past Friday at the Xilinx Showcase held in Longmont, Colorado, Senior Marketing Engineer Lee Hansen demonstrated a Zynq UltraScale+ ZU28DR RFSoC, with eight 12-bit, 4Gsamples/sec RF ADCs, eight 14-bit, 6.4Gsamples/sec RF DACs, and eight SD-FECs, connected to National Instruments’ LabVIEW Systems Engineering Development Environment.

 

The demo system was generating signals using the RF DACs, receiving the signals using the RF ADCs, and then displaying the resulting signal spectrum using LabVIEW.

 

Here’s a 3-minute video of the demo:

LightReading has just posted a 5-minute video interview with Kirk Saban (Xilinx’s Senior Director for FPGA and SoC Product Management and Marketing) discussing some of the aspects of the newly announced Zynq UltraScale+ RFSoCs with on-chip RF ADCs and DACs. These devices are going to revolutionize the design of all sorts of high-end equipment that must deal with high-speed analog signals in markets as diverse as 5G communications, broadband cable, test and measurement, and aerospace/defense.

 

Here’s the video:

For more information about the new Zynq UltraScale+ RFSoC family, see “Zynq UltraScale+ RFSoC: All the processing power of 64- and 32-bit ARM cores, programmable logic plus RF ADCs, DACs.”

 

 

 

Earlier today, Xilinx formally announced delivery of the first Zynq UltraScale+ RFSoCs. (See “Zynq UltraScale+ RFSoC: All the processing power of 64- and 32-bit ARM cores, programmable logic plus RF ADCs, DACs.”) Now there’s a specification-packed video showing one of these devices in action. It’s only four minutes long, but you’ll probably need to view it at least twice to unpack everything you’ll see in terms of RF processing and ADC/DAC performance. Better strap yourself in for the ride.

 

 

 

 

For more information about the Zynq UltraScale+ RFSoC device family, please contact your friendly neighborhood Xilinx sales representative or Avnet sales representative.

 

 

You say you want a revolution

Well, you know

We all want to change the world

 

--“Revolution” written by John Lennon and Paul McCartney

 

 

Today, Xilinx announced five new Zynq UltraScale+ RFSoC devices with all of the things you expect in a Xilinx Zynq UltraScale+ SoC—a quad-core APU with 64-bit ARM Cortex-A53 processor cores, a dual-core RPU with 32-bit ARM Cortex-R5 processors, and ultra-fast UltraScale+ programmable logic—plus revolutionary new additions: 12-bit, RF-class ADCs; 14-bit, RF-class DACs; and integrated SD-FEC (Soft Decision Forward Error Correction) cores.

 

Just in case you missed the plurals in that last sentence, it’s not one but multiple RF ADCs and DACs.

 

That means you can bring RF analog signals directly into these chips, process those signals using high-speed programmable logic along with thousands of DSP48E2 slices, and then output processed RF analog signals—using the same device to do everything. In addition, if you’re decoding and/or encoding data, two of the announced Zynq UltraScale+ RFSoC family members incorporate SD-FEC IP cores that support LDPC coding/decoding and Turbo decoding for applications including 5G wireless communications, backhaul, DOCSIS, and LTE. If you’re dealing with RF signals and high-speed communications, you know just how revolutionary these parts are.
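If you haven’t worked with direct RF sampling before, here’s a rough sense of what “process those signals” means in practice. The short Python model below sketches one common first step, a digital downconversion (DDC) that mixes the RF ADC’s output to baseband, filters it, and decimates it; on an RFSoC this kind of datapath would live in the programmable logic and DSP48E2 slices. The 4Gsamples/sec rate comes from the announcement; the carrier frequency, filter, and decimation factor are arbitrary assumptions for illustration, and none of this is Xilinx IP.

```python
# Hedged illustration only: a floating-point model of the kind of digital
# downconversion (DDC) an RFSoC's fabric and DSP48E2 slices might implement
# behind an RF ADC. The 4 GSPS rate is from the announcement; the carrier,
# filter, and decimation factor are assumptions made up for this sketch.
import numpy as np

FS = 4.0e9          # RF ADC sample rate (4 Gsamples/sec)
F_CARRIER = 900e6   # assumed RF carrier for this example
DECIMATION = 16     # assumed decimation factor down to 250 MSPS

def ddc(adc_samples: np.ndarray) -> np.ndarray:
    """Mix an RF capture to baseband, low-pass filter it, and decimate."""
    n = np.arange(len(adc_samples))
    nco = np.exp(-2j * np.pi * F_CARRIER / FS * n)    # numerically controlled oscillator
    mixed = adc_samples * nco                         # complex mix to baseband
    taps = np.hamming(63)                             # crude low-pass FIR prototype
    taps /= taps.sum()
    filtered = np.convolve(mixed, taps, mode="same")  # channel filter
    return filtered[::DECIMATION]                     # drop to the lower sample rate

# Quick self-test with a synthetic tone 5 MHz above the assumed carrier
t = np.arange(4096) / FS
tone = np.cos(2 * np.pi * (F_CARRIER + 5e6) * t)
baseband = ddc(tone)
print(baseband.shape, np.abs(baseband).mean())
```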

 

For everyone else not accustomed to handling RF analog signals… well you’ll just have to take my word for it. These devices are revolutionary.

 

Here’s a conceptual block diagram of a Zynq UltraScale+ RFSoC:

 

 

 

Zynq UltraScale+ RFSoC conceptual block diagram

 

 

Here’s a table that shows you some of the resources available on the new Zynq UltraScale+ RFSoC devices:

 

 

Zynq UltraScale+ RFSoC product table

 

 

As you can see from the table, you can get as many as eight 12-bit, 4Gsamples/sec ADCs or sixteen 12-bit, 2Gsamples/sec ADCs on one device. You can also get eight or sixteen 14-bit, 6.4Gsamples/sec DACs on the same device. Two of the Zynq UltraScale+ RFSoCs also incorporate eight SD-FECs. In addition, there are plenty of logic cells, DSP slices, and RAM on these devices to build just about anything you can imagine. (With my instrumentation background, I can imagine new classes of DSOs and VSTs (Vector Signal Transceivers), for example.)

 

You get numerous benefits by basing your design on a Zynq UltraScale+ RFSoC device. The first and most obvious is real estate. Putting the ADCs, DACs, processors, programmable logic, DSPs, memory, and programmable I/O on one device saves a tremendous amount of board space and means you won’t be running high-speed traces across the pcb to hook all these blocks together.

 

Next, you save the complexity of dealing with high-speed converter interfaces like JESD204B/C. The analog converters are already interfaced to the processors and logic inside of the device. Done. Debugged. Finished.

 

You also save the power associated with those high-speed interfaces. That alone can amount to several watts of power savings. These benefits are reviewed in a new White Paper titled “All Programmable RF-Sampling Solutions.”

 

There’s a device family overview here.

 

And just one more thing. Today’s announcement didn’t just introduce the Zynq UltraScale+ RFSoC device family. The announcement headline included one more, very important word: “Delivers.”

 

 

As in shipped.

 

 

Because Xilinx doesn’t just announce parts. We deliver them too.

 

 

For more information about the Zynq UltraScale+ RFSoC device family, please contact your friendly neighborhood Xilinx sales representative or Avnet sales representative.

 

 

 

Need to get an FPGA-based, high-performance network server or appliance designed and fielded quickly? If so, take a serious look at the heavyweight combination of a BittWare XUPP3R PCIe board (based on any one of three Xilinx Virtex UltraScale+ FPGAs: the VU7P, VU9P, or VU11P) and LDA Technologies’ slick 1U e4 FPGA chassis, which is designed to bring all of the BittWare XUPP3R’s QSFP, PCIe, and high-speed serial expansion I/O to the 48-port front panel, like so:

 

 

 


 

 

LDA Technologies’ 1U e4 FPGA chassis

 

 

 

The reason that BittWare’s XUPP3R PCIe card can support this many high-speed GbE ports is that the Virtex UltraScale+ FPGAs have that many bulletproof, high-speed SerDes ports.

 

 

 


 

 

BittWare XUPP3R PCIe board based on Virtex UltraScale+ FPGAs

 

 

 

You can use the combination of the BittWare XUPP3R PCIe card and LDA Technologies’ e4 FPGA chassis to develop a variety of network equipment including:

 

  • SDN switches
  • VPN concentrators
  • Routers
  • Firewalls
  • Intrusion-detection systems

 

 

LDA Technologies’ e4 FPGA chassis is specifically designed to accept a PCIe FPGA card like BittWare’s XUPP3R, and it has several features designed specifically to support such a card, including:

 

  • Dual-redundant power supply with load balancing
  • On-board Ethernet switch permits simple port-to-port switching without FPGA intervention
  • On-board COM Express socket that accepts Type 6 and Type 10 processor cards
  • Locally stored FPGA configuration with 1-second boot time
  • Precision 100MHz OCXO (oven-controlled crystal oscillator) for applications that require precise timing

 

 

The e4 FPGA chassis fits in a 1U rack space and it’s only 12 inches deep, so you can mount two back-to-back in a 1U rack slot with the front I/O ports pointing forward and the back I/O ports pointing out the rear of the rack.

 

For a 10-minute BittWare video that gives you even more information about this card/chassis combo, click here.

This week, EXFO announced and demonstrated its FTBx-88400NGE Power Blazer 400G Ethernet Tester at the ECOC 2017 optical communications conference in Gothenburg, Sweden, using a Xilinx VCU140 FPGA design platform as an interoperability target. The VCU140 development platform is based on a Xilinx Virtex UltraScale+ VU9P FPGA. EXFO’s FTBx-88400NGE Power Blazer offers advanced testing for the full suite of new 400G technologies including support for FlexE (Flex Ethernet), 400G Ethernet, and high-speed transceiver validation. The FlexE function bonds one or more 100GBASE-R PHYs to carry multiple Ethernet MAC clients operating at rates of 10, 40, or n x 25Gbps. FlexE is a key data-center technology that lets data centers deliver links faster than 100GbE ahead of emerging 400G solutions.
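To make the FlexE bonding idea a bit more concrete, here’s a minimal sketch of how MAC clients map onto a bonded group. It follows the OIF FlexE 1.0 scheme of twenty 5Gbps calendar slots per 100G PHY, but the greedy allocator below is purely illustrative and is not an implementation of the standard (or of EXFO’s tester).

```python
# Hedged sketch of FlexE bonding: a FlexE group bonds one or more 100G PHYs, each
# divided (in FlexE 1.0) into twenty 5 Gbps calendar slots, and MAC clients are
# assigned whole slots. The greedy allocation below is only an illustration.
SLOT_GBPS = 5
SLOTS_PER_PHY = 20

def allocate(clients_gbps, num_phys):
    """Assign calendar slots (numbered 0..total-1 across the bonded PHYs) to clients."""
    total_slots = SLOTS_PER_PHY * num_phys
    next_slot, allocation = 0, {}
    for idx, rate in enumerate(clients_gbps):
        needed = rate // SLOT_GBPS
        if next_slot + needed > total_slots:
            raise ValueError("FlexE group has no free calendar slots left")
        allocation[f"client{idx} ({rate}G)"] = list(range(next_slot, next_slot + needed))
        next_slot += needed
    return allocation

# Two bonded 100G PHYs carrying one 150G client and two 25G clients (40 slots total):
for client, slots in allocate([150, 25, 25], num_phys=2).items():
    print(client, "->", f"slots {slots[0]}..{slots[-1]}")
```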

 

Here’s a photo of the ECOC 2017 demo:

 

 

 

EXFO FTBx-88400NGE Power Blazer 400GE interoperability demo at ECOC 2017

 

 

 

This demonstration is yet one more proof point for the 400GbE standard, which will be used in a variety of high-speed communications applications including data-center interconnect, next-generation switch and router line cards, and high-end OTN transponders.

 

 

 

Every device family in the Xilinx UltraScale+ portfolio (Virtex UltraScale+ FPGAs, Kintex UltraScale+ FPGAs, and Zynq UltraScale+ MPSoCs) has members with 28Gbps-capable GTY transceivers. That’s likely to be important to you as the number and variety of small-form-factor 28Gbps interconnects grow. You have many choices in such interconnect these days, including:

 

 

  • QSFP28 Optical
  • QSFP28 Direct-Attach Copper
  • SFP28 Optical
  • SFP28 Direct-Attach Copper
  • Samtec FireFly AOC (Active Optical Cable or Twinax ribbon cable)

 

 

The following 5.5-minute video demonstrates all of these interfaces operating with 25.78Gbps lanes on Xilinx VCU118 and KCU116 Eval Kits, as concisely explained (as usual) by Xilinx’s “Transceiver Marketing Guy” Martin Gilpatric. Martin also discusses some of the design challenges associated with these high-speed interfaces.

 

But first, as a teaser, I could not resist showing you the wide-open IBERT eye on the 25.78Gbps Samtec FireFly AOC:

 

 

 

IBERT eye for the 25.78Gbps Samtec FireFly AOC link

 

 

 

Now that’s a desirable eye.
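One reason eye diagrams and IBERT matter so much at these rates is simply test time: proving a very low bit error ratio requires a surprisingly long error-free run. The arithmetic below is my own back-of-the-envelope sketch (not a figure from the video), using the standard zero-error rule n ≥ -ln(1 - confidence) / BER.

```python
# Hedged back-of-the-envelope sketch: how long a 25.78 Gbps lane must run error-free
# before you can claim a given BER with 95% confidence (zero observed errors).
import math

LINE_RATE = 25.78e9   # bits per second per lane, from the post

def test_time_seconds(target_ber, confidence=0.95):
    bits_needed = -math.log(1.0 - confidence) / target_ber
    return bits_needed / LINE_RATE

for ber in (1e-12, 1e-15):
    print(f"BER {ber:g}: about {test_time_seconds(ber):,.0f} seconds of error-free traffic")
# Roughly 2 minutes for 1e-12, and on the order of 32 hours for 1e-15.
```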

 

Here’s the new video:
 

Netcope Technologies’ NFB-100G2Q NIC broke industry records for 100GbE performance earlier this year by achieving 148.8M packets/sec throughput on DPDK (the Data Plane Development Kit) for 64-byte packets—which is 100GbE’s theoretical maximum. That’s good news if you’re deploying NFV applications. Going faster is the name of the game, after all. That performance—tracking the theoretical maximum as defined by line rate and packet size—continues as the frame size gets larger. You can see that from this performance graph from the Netcope Web site:

 

 

Netcope 100GbE DPDK performance graph (from the Netcope Web site)
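The 148.8M packets/sec ceiling is easy to reproduce: at 100Gbps, every 64-byte frame also pays for an 8-byte preamble/SFD and a 12-byte minimum inter-frame gap. Here’s a quick sanity check (my arithmetic, not Netcope’s):

```python
# Sanity check of the 100GbE theoretical maximum packet rate for 64-byte frames.
LINE_RATE_BPS = 100e9
FRAME_BYTES = 64         # minimum Ethernet frame
OVERHEAD_BYTES = 8 + 12  # preamble + SFD, plus minimum inter-frame gap

pps = LINE_RATE_BPS / ((FRAME_BYTES + OVERHEAD_BYTES) * 8)
print(f"{pps / 1e6:.1f} Mpackets/sec")   # ~148.8 Mpps, matching the figure in the graph
```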

 

 

It’s possible to go even faster; you just need a faster line rate. That’s what the just-announced Netcope NFB-200G2QL 200G Programmable Smart NIC is for: sending packets to your application at 200Gbps over two 100GbE connections. The Netcope NFB-100G2Q NIC is based on a Xilinx Virtex-7 580T FPGA. The NFB-200G2QL Smart NIC (with its NACA/NASA-style air scoop) is based on a Xilinx Virtex UltraScale+ FPGA.

 

 

 


 

The Netcope NFB-200G2QL 200G Programmable Smart NIC is based on a Xilinx Virtex UltraScale+ FPGA

 

 

 

 

 


 

Three NACA/NASA air scoops (photo by Meggar at the English language Wikipedia)

 

 

 

For more information about Netcope’s DPDK performance, see the company’s White Paper titled “Improving DPDK Performance.”

 

 

For more information about the Netcope NFB-100G2 NIC in Xcell Daily, see “Brief demo at SC15 by NetCOPE shows the company’s 100G Ethernet boards in action.”

 

 

Earlier this year, the University of New Hampshire’s InterOperability Laboratory (UNH-IOL) held a 25G and 50G plugfest, and everybody came to the party to test the compatibility of their implementations with each other. The long list of partiers included:

 

 

  • Arista
  • Amphenol
  • Cisco
  • Dell
  • Delta
  • HPE
  • Hitachi
  • Intel
  • Ixia
  • Marvell
  • Mellanox
  • Microsoft
  • Netronome
  • Qlogic
  • Spirent
  • Teledyne-LeCroy
  • Xilinx

 

 

You can find these companies’ names, the equipment they tested, and the speeds they tested on the 25 Gigabit Ethernet Consortium’s Web site’s Integrator’s List. From that site:

 

“The 25 Gigabit Ethernet Consortium is an open organization to all third parties who wish to participate as members to enable the transmission of Ethernet frames at 25 or 50 Gigabit per second (Gbps) and to promote the standardization and improvement of the interfaces for applicable products.”

 

From the Consortium’s press release about the plugfest:

 

“The testing demonstrated a high degree of multi-vendor interoperability and specification conformance.”

 

 

For its part, Xilinx tested its 10/25G High-Speed Ethernet LogiCORE IP and 40/50G High-Speed Ethernet LogiCORE Subsystem IP using the Xilinx VCU108 Eval Kit based on a Virtex UltraScale XCVU095-2FFVA2104E FPGA over copper using different cable lengths. Consortium rules do not permit me to tell you which companies interoperated with each other, but I can say that Xilinx tested against every company on the above list. I’m told that the Xilinx 25G/50G receiver “did well.”

 

 

 


 

 

Xilinx Virtex UltraScale VCU108 Eval Kit

 

 

 

 

 

 

To paraphrase Douglas Adams’ The Hitchhiker’s Guide to the Galaxy: “400GE is fast. Really fast. You just won't believe how vastly, hugely, mind-bogglingly fast it is.”

 

Xilinx, Microsoft/Azure Networking, and Juniper held a 400GE panel at OFC 2017 that explored the realities of the 400GE ecosystem, deployment models, and why the time for 400GE has arrived. The half-hour video below is from OFC 2017. Xilinx’s Mark Gustlin discusses the history of Ethernet from 10Mbps in the 1980s to today’s 400GE, including an explanation of the lower-speed variants and why they exist. He also explains why the 400GE IEEE technical specs look the way they do and what 400GE optical modules will look like as they evolve. Microsoft/Azure Networking’s Brad Booth describes what he expects Azure’s multi-campus, data-center networking architecture to look like in 2019 and how he expects 400GE to fit into that architecture. Finally, Juniper’s David Ofelt discusses how the 400GE development model has flipped: the hyperscale developers and system vendors are now driving the evolution and the carriers are following their lead. He also touches on the technical issues that have held up 400GE development and what happens when we max out on optical module density (we’re almost there).
 

For more information about 400GE, see the earlier Xcell Daily posts on the topic.
 

Xilinx announced the addition of the P4₁₆ network programming language for SDN applications to its SDNet Development Environment for high-speed (1Gbps to 100Gbps) packet processing back in May. (See “The P4 has landed: SDNet 2017.1 gets P4-to-FPGA compilation capability for 100Gbps data-plane packet processing.”) An OFC 2017 panel session in March—presented by Xilinx, Barefoot Networks, Netcope Technologies, and MoSys—discussed the adoption of P4, the emergent high-level language for packet processing, and early implementations of P4 for FPGA and ASIC targets. Here’s a half-hour video of that panel discussion.

 

 

 

 

For more information, you might also want to take a look at the Xilinx P4-SDNet Translator User Guide and the SDNet Packet Processor User Guide, which was recently updated.

 

 

 

Metamako decided that it needed more than one Xilinx UltraScale FPGA to deliver the low latency and high performance it wanted from its newest networking platform. The resulting design is a 1RU or 2RU box that houses one, two, or three Kintex UltraScale or Virtex UltraScale+ FPGAs, connected by “near-zero” latency links. The small armada of FPGAs means that the platform can run multiple networking applications in parallel—very quickly. This new networking platform allows Metamako to expand far beyond its traditional market—financial transaction networking—into other realms such as medical imaging, SDR (software-defined radio), industrial control, and telecom. The FPGAs are certainly capable of implementing tasks in all of these applications with extremely high performance.

 

 


 

Metamako’s Triple-FPGA Networking Platform

 

 

 

The Metamako platform offers an extensive range of standard networking features including data fan-out, scalable broadcast, connection monitoring, patching, tapping, time-stamping, and a deterministic port-to-FPGA latency of just 3nsec. Metamako also provides a developer’s kit with the platform with features that include:

 

 

  • A Simplified Stack - One device houses the FPGA(s), x86 server, and Layer 1 switch, ensuring that all hardware components work in sync.
  • Integration with existing FPGA Tools – Platform-specific adapters for programming the FPGA(s) are embedded in the Metamako device, allowing for quick and easy (remote) access to the device by the FPGA development tools.
  • Layer 1+ Crosspoint Functionality – Includes all core Metamako Layer 1 functions such as market-scalable broadcast, connection monitoring, remote patching, tapping, and timestamping.
  • SFP Agnostic – Metamako’s Layer 1+ switch is SFP agnostic, which saves FPGA developers time and effort in having to interface with lots of different types of SFPs.
  • Feature Rich – Standard enterprise features include access control, syslog, SNMP, packet stats, tcpdump, JSON RPC API, time series data, and streaming telemetry.
  • Easy Application Deployment - Metamako's platform includes a built-in application infrastructure that allows developers to wrap applications into simple packages for deployment.

 

 

This latest networking platform from Metamako demonstrates a key attribute of Xilinx All Programmable technology: the ability to fully differentiate a product by exploiting the any-to-any connectivity and high-speed processing capabilities of Xilinx silicon using Xilinx’s development tools. No other chip technology could provide Metamako with a comparable mix of extreme connectivity, speed, and design flexibility.

 

 

 

 

 

When someone asks where Xilinx All Programmable devices are used, I find it a hard question to answer because there’s such a wide range of applications—as demonstrated by the thousands of Xcell Daily blog posts I’ve written over the past several years.

 

Now, there’s a 5-minute “Powered by Xilinx” video with clips from several companies using Xilinx devices for applications including:

 

  • Machine learning for manufacturing
  • Cloud acceleration
  • Autonomous cars, drones, and robots
  • Real-time 4K, UHD, and 8K video and image processing
  • VR and AR
  • High-speed networking by RF, LED-based free-air optics, and fiber
  • Cybersecurity for IIoT

 

That’s a huge range covered in just five minutes.

 

Here’s the video:
 

Light Reading’s International Group Editor Ray Le Maistre recently interviewed David Levi, CEO of Ethernity Networks, who discusses the company’s FPGA-based All Programmable ACE-NIC, a Network Interface Controller with 40Gbps throughput. The carrier-grade ACE-NIC accelerates vEPC (virtual Evolved Packet Core, a framework for virtualizing the functions required to converge voice and data on 4G LTE networks) and vCPE (virtual Customer Premise Equipment, a way to deliver routing, firewall security and virtual private network connectivity services using software rather than dedicated hardware) applications by 50x, dramatically reducing end-to-end latency associated with NFV platforms. Ethernity’s ACE-NIC is based on a Xilinx Kintex-7 FPGA.

 

“The world is crazy about our solution—it’s amazing,” says Levi in the Light Reading video interview.

 

 


 

Ethernity Networks All Programmable ACE-NIC

 

 

Because Ethernity implements its NIC IP in a Kintex-7 FPGA, it was natural for Le Maistre to ask Levi when his company would migrate to an ASIC. Levi’s answer surprised him:

 

“We offer a game changer... We invested in technology—which is covered by patents—that consumes 80% less logic than competitors. So essentially, a solution that you may want to deliver without our patents will cost five times more on FPGA… With this kind of solution, we succeed over the years in competing with off-the-shelf components… with the all-programmable NIC, operators enjoy the full programmability and flexibility at an affordable price, which is comparable to a rigid, non-programmable ASIC solution.”

 

In other words, Ethernity plans to stay with All Programmable devices for its products. In fact, Ethernity Networks announced last year that it had successfully synthesized its carrier-grade switch/router IP for the Xilinx Zynq UltraScale+ MPSoC and that throughput performance increases to 60Gbps per IP core with the 16nm device—and to 120Gbps with two instances of that core. “We are going to use this solution for novel SDN/NFV market products, including embedded SR-IOV (single-root input/output virtualization), and for high-density port solutions,” said Levi.

 

Towards the end of the video interview, Levi looks even further into the future when he discusses Amazon Web Services’ (AWS’) recent support of FPGA acceleration. (That’s the Amazon EC2 F1 compute instance based on Xilinx Virtex UltraScale+ FPGAs rolled out earlier this year.) Because it’s already based on Xilinx All Programmable devices, Ethernity’s networking IP runs on the Amazon EC2 F1 instance. “It’s an amazing opportunity for the company [Ethernity],” said Levi. (Try doing that in an ASIC.)

 

Here’s the Light Reading video interview:

When discussed in Xcell Daily two years ago, Exablaze’s 48-port ExaLINK Fusion Ultra Low Latency Switch and Application Platform with the company’s FastMUX option was performing fast Ethernet port aggregation on as many as 15 Ethernet ports with blazingly fast 100nsec latency. (See “World’s fastest Layer 2 Ethernet switch achieves 110nsec switching using 20nm Xilinx UltraScale FPGAs.”) With its new FastMUX upgrade, also available free to existing customers with a current support contract as a field-installable firmware upgrade, Exablaze has now cut that number in half, to an industry-leading 49nsec (actually, between 48.79nsec and 58.79nsec). The FastMUX option aggregates 15 server connections into a single upstream port. All 48 ExaLINK Fusion ports including the FastMux ports are cross-point enabled so that they can support layer 1 features such as tapping for logging, patching for failover, and packet counters and signal quality statistics for monitoring.
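To put the 49nsec figure in perspective, here’s a small, hedged calculation of my own (assuming 10GbE ports): the mux latency is less than the time it takes simply to serialize one minimum-size frame onto the wire.

```python
# Hedged perspective on the FastMUX latency figure (my arithmetic, not Exablaze's).
MUX_LATENCY_NS = 48.79                       # best-case figure quoted above
FRAME_BITS = (64 + 8 + 12) * 8               # min frame + preamble/SFD + inter-frame gap
serialization_ns = FRAME_BITS / 10e9 * 1e9   # time to clock one such frame onto a 10GbE link

print(f"mux latency: {MUX_LATENCY_NS} ns")
print(f"one minimum frame on the wire at 10GbE: {serialization_ns:.1f} ns")
# The aggregation step costs less than the frame's own time on the wire.
```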

 

 

 

Exablaze ExaLINK Fusion Ultra Low Latency Switch and Application Platform

 

 

 

The ExaLINK Fusion platform is based on a Xilinx 20nm UltraScale FPGA, which gave Exablaze the ability to initially create the fast switching and aggregation hardware and massive 48-port connectivity, and then to improve the product’s design by taking advantage of the FPGA’s reprogrammability; the improvement requires only a firmware upgrade, which can be performed in the field.

 

 

 

 

 

Perhaps you think DPDK (Data Plane Development Kit) is a high-speed data-movement standard that’s strictly for networking applications. Perhaps you think DPDK is an Intel-specific specification. Perhaps you think DPDK is restricted to the world of host CPUs and ASICs. Perhaps you’ve never heard of DPDK—given its history, that’s certainly possible. If any of those statements is correct, keep reading this post.

 

Originally, DPDK was a set of data-plane libraries and NIC (network interface controller) drivers developed by Intel for fast packet processing on Intel x86 microprocessors. That is the DPDK origin story. Last April, DPDK became a Linux Foundation Project. It lives at DPDK.org and is now processor agnostic.

 

DPDK consists of several main libraries that you can use to:

 

  • Send and receive packets while minimizing the number of CPU cycles needed (usually less than 80)
  • Develop fast packet-capture algorithms
  • Run 3rd-party fast-path stacks

 

So far, DPDK certainly sounds like a networking-specific development kit but, as Atomic Rules’ CTO Shep Siegel says, “If you can make your data-movement problem look like a packet-movement problem,” then DPDK might be a helpful shortcut in your development process.
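For readers who have never looked at DPDK code, the sketch below shows the shape of the thing in Python: a burst-oriented, busy-polling receive loop with no interrupts, which is the pattern DPDK’s poll-mode drivers (for example, rte_eth_rx_burst in the C API) follow. It is only a stand-in to illustrate the pattern, not DPDK itself.

```python
# Minimal Python stand-in for the poll-mode, burst-oriented receive pattern used by
# DPDK PMDs: no interrupts, just busy-poll a queue and handle packets a burst at a
# time to amortize per-packet overhead. Real applications use DPDK's C API.
from collections import deque

BURST_SIZE = 32

def rx_burst(queue, max_pkts):
    """Drain up to max_pkts packets from the queue without blocking."""
    burst = []
    while queue and len(burst) < max_pkts:
        burst.append(queue.popleft())
    return burst

def poll_loop(queue, handle_packet, keep_running=lambda: True):
    while keep_running():                         # a real poll loop spins forever on a core
        for pkt in rx_burst(queue, BURST_SIZE):   # process packets a burst at a time
            handle_packet(pkt)

# Demo: drain a pre-filled queue of fake 64-byte packets, then stop.
q = deque(bytes(64) for _ in range(100))
poll_loop(q, handle_packet=lambda p: None, keep_running=lambda: len(q) > 0)
print("drained 100 packets")
```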

 

Siegel knows more than a bit about DPDK because his company has just released Arkville, a DPDK-aware FPGA/GPP data-mover IP block and DPDK PMD (Poll Mode Driver) that allow Linux DPDK applications to offload server cycles to FPGA gates, in tandem with the Linux Foundation’s 17.05 release of the open-source DPDK libraries. Atomic Rules’ Arkville release is compatible with Xilinx Vivado 2017.1 (the latest version of the Vivado Design Suite), which was released in April. Currently, Atomic Rules provides two sample designs:

 

 

  • Four-Port, Four-Queue 10 GbE example (Arkville + 4×10 GbE MAC)
  • Single-Port, Single-Queue 100 GbE example (Arkville + 1×100 GbE MAC)

 

(Atomic Rules’ example designs for Arkville were compiled with Vivado 2017.1 as well.)

 

 

These examples are data movers; Arkville is a packet conduit. This conduit presents a DPDK interface on the CPU side and AXI interfaces on the FPGA side. There’s a convenient spot in the Arkville conduit where you can add your own hardware for processing those packets. That’s where the CPU offloading magic happens.

 

Atomic Rules’ Arkville IP works well with all Xilinx UltraScale devices but it works especially well with Xilinx UltraScale+ All Programmable devices that provide two integrated PCIe Gen3 x16 controllers. (That includes devices in the Kintex UltraScale+ and Virtex UltraScale+ FPGA families and the Zynq UltraScale+ MPSoC device families.)

 

Why?

 

Because, as BittWare’s VP of Network Products Craig Lund says, “100G Ethernet is hard. It’s not clear that you can use PCIe to get [that bit rate] into a server [using one PCIe Gen3 x16 interface]. From the PCIe specs, it looks like it should be easy, but it isn’t.” If you are handling minimum-size packets, says Lund, there are lots of them—more than 14 million per second. If you’re handling big packets, then you need a lot of bandwidth. Either use case presents a throughput challenge to a single PCIe Root Complex. In practice, you really need two.
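Some hedged arithmetic behind Lund’s point (my illustrative numbers, not BittWare’s): the raw PCIe Gen3 x16 figure looks comfortable, but per-transfer protocol overhead erodes it quickly, and small packets make it worse.

```python
# Hedged arithmetic (illustrative figures, not BittWare's measurements): raw PCIe Gen3
# x16 bandwidth vs. what's left after per-transfer protocol overhead.
LANES = 16
GEN3_GTPS = 8e9            # 8 GT/s per lane
ENCODING = 128 / 130       # 128b/130b line encoding

raw_bps = LANES * GEN3_GTPS * ENCODING
print(f"raw Gen3 x16: {raw_bps / 1e9:.1f} Gbps")      # about 126 Gbps

# Assume ~24 bytes of TLP framing/headers per transfer plus a 16-byte descriptor or
# completion update per packet (round numbers, assumed for illustration only).
TLP_OVERHEAD, DESCRIPTOR = 24, 16
for payload in (256, 64):
    efficiency = payload / (payload + TLP_OVERHEAD + DESCRIPTOR)
    print(f"{payload}-byte transfers: about {raw_bps * efficiency / 1e9:.0f} Gbps of goodput")
# 64-byte packets land well below the 100GbE line rate, which is why a second
# PCIe Gen3 x16 interface (and a second root complex) starts to look necessary.
```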

 

BittWare has implemented products using the Atomic Rules Arkville IP, based on its XUPP3R PCIe card, which incorporates a Xilinx Virtex UltraScale+ VU13P FPGA. One of the many unique features of this BittWare board is that it has two PCIe Gen3 x16 ports: one available on an edge connector and the other available on an optional serial expansion port. This second PCIe Gen3 x16 port can be connected to a second PCIe slot for added bandwidth.

 

However, even that’s not enough, says Lund. You don’t just need two PCIe Gen3 x16 slots; you need two PCIe Gen3 Root Complexes, and that means you need a 2-socket motherboard with two physical CPUs to handle the traffic. Here’s a simplified block diagram that illustrates Lund’s point:

 

 


 

 

BittWare’s XUPP3R PCIe Card has two PCIe Gen3 x16 ports: one on an edge connector and the other on an optional serial expansion port for added bandwidth

 

 

 

BittWare has used its XUPP3R PCIe card and the Arkville IP to develop two additional products:

 

 

 

Note: For more information about Atomic Rules’ IP and BittWare’s XUPP3R PCIe card, see “BittWare’s UltraScale+ XUPP3R board and Atomic Rules IP run Intel’s DPDK over PCIe Gen3 x16 @ 150Gbps.”

 

 

Arkville is a product offered by Atomic Rules. The XUPP3R PCIe card is a product offered by BittWare. Please contact these vendors directly for more information about these products.

 

 

 

 

Xenie programmable 10GBASE-T FPGA modules bridge copper, optical Ethernet, and 12G SDI using Kintex-7 FPGA


 

DFC Design’s Xenie FPGA module product family pairs a Xilinx Kintex-7 FPGA (a 70T or a 160T) with a Marvell Alaska X 88X3310P 10GBASE-T PHY on a small board. The module breaks out six of the Kintex-7 FPGA’s 12.5Gbps GTX transceivers and three full FPGA I/O banks (for a total of 150 single-ended I/O or up to 72 differential pairs) with configurable I/O voltage to two high-speed, high-pin-count, board-to-board connectors. A companion Xenie BB Carrier board accepts the Xenie FPGA board and breaks out the high-speed GTX transceivers into a 10GBASE-T RJ45 connector, an SFP+ optical cage, and four SDI connectors (two inputs and two outputs).

 

Here’s a block diagram and photo of the Xenie FPGA module:

 

 

 

 


 

 

Xenie FPGA module based on a Xilinx Kintex-7 FPGA

 

 

 

And here’s a photo of the Xenie BB Carrier board that accepts the Xenie FPGA module:

 

 

 


 

Xenie BB Carrier board

 

 

These are open-source designs.

 

DFC Design has developed a UDP core for this design, available on OpenCores.org, and has published two design examples: an Ethernet example and a high-speed camera design.

 

Here’s a block diagram of the Ethernet example:

 

 

 

 

DFC Design Ethernet example system block diagram

 

 

Please contact DFC Design directly for more information.

 

 

 

After telegraphing its intent for more than a year, Xilinx has now added the P4₁₆ language to its SDNet Development Environment for high-speed (1Gbps to 100Gbps) packet processing. SDNet release 2017.1 includes a generally accessible, front-end P4-to-SDNet translator. P4₁₆ is the latest version of the P4 language, and the SDNet workflow compiles packet-processing descriptions into data-plane switching algorithms instantiated in high-speed Xilinx FPGAs. Xilinx debuted the new SDNet release at this week’s P4 Developer Day and P4 Workshop held at Stanford University in Palo Alto, CA. (There was a beta version of the translator in the prior SDNet 2016.4 release.)

 

There’s information about the new Xilinx P4-to-SDNet translator in the latest version of the SDNet Packet Processor User Guide (UG1012) and the P4-SDNet Translator User Guide (UG1252). If you’re up on recent developments with the P4₁₆ language, you might want to jump to these user guides directly. Otherwise, you might want to take a look at the Linley Group White Paper titled “Xilinx SDNet: A New Way to Specify Network Hardware,” written by Senior Analyst Loring Wirbel, or watch this short video first:

And if you have a couple of hours to devote to learning a lot more about the P4 language, try this video from the P4 Language Consortium, which includes presentations from Vladimir Gurevich from Barefoot Networks, Ben Pfaff from VMware, Johann Tonsing from Netronome, and Gordon Brebner from Xilinx:

The 1-minute video appearing below shows two 56Gbps, PAM-4 demos from the recent OFC 2017 conference. The first demo shows a CEI-56G-MR (medium-reach, 50cm, chip-to-chip and low-loss backplane) connection between a Xilinx 56Gbps PAM-4 test chip communicating through a QSFP module over a cable to a Credo device. A second PAM-4 demo using CEI-56G-LR (long-reach, 100cm, backplane-style) interconnect shows a Xilinx 56Gbps PAM-4 test chip communicating over a Molex backplane to a Credo device, which is then communicating with a GlobalFoundries device over an FCI backplane, which is then communicating over a TE backplane back to the Xilinx device. This second demo illustrates the growing, multi-company ecosystem supporting PAM-4.

 

 

 

 

For more information about the Xilinx PAM-4 test chip, see “3 Eyes are Better than One for 56Gbps PAM4 Communications: Xilinx silicon goes 56Gbps for future Ethernet,” and “Got 90 seconds to see a 56Gbps demo with an instant 2x upgrade from 28G to 56G backplane? Good!”

 

 

Looking for a relatively painless overview of the current state of the art for high-speed Ethernet used in data centers and for telecom? You should take a look at this just-posted, 30-minute video of a panel discussion at OFC2017 titled “400GE from Hype to Reality.” The panel members included:

 

  • Mark Gustlin, Principal System Architect at Xilinx (the moderator)
  • Brad Booth, Microsoft Azure Networking
  • David Ofelt, Juniper Networks

 

Gustlin starts by discussing the history of 400GbE’s development, starting with a study group organized in 2013. Today, the 400GbE spec is at draft 3.1 and the plan is to produce a final standard by December 2017.

 

Booth answers a very simple question in his talk: “Yes, we will” use 400GbE in the data center. He then proceeds to give a fairly detailed description of the data centers and networking used to create Microsoft’s Azure cloud-computing platform.

 

Ofelt describes the genesis of the 400GbE standard. Prior to 400G, says Ofelt, system vendors worked with end users (primarily telecom companies) to develop faster Ethernet standards. Once a standard appeared, there would be a deployment ramp. Although 400GbE development started that way, the people building hyperscale data centers sort of took over, and they want to deploy 400GbE at scale, ASAP.

 

Don’t be fooled by the title of this panel. There’s plenty of discussion about 25GbE through 100GbE and 200GbE as well, so if you need a quick update on high-speed Ethernet’s status, this 30-minute video is for you.

 

 

 

 

 

 

Samtec recorded a demo of its FireFly FQSFP twinax cable assembly carrying four 28Gbps lanes from a Xilinx Virtex UltraScale+ VU9P FPGA on a VCU118 eval board to a QSFP optical cage at the recent OFC 2017 conference in Los Angeles. (The Virtex UltraScale+ VU9P FPGA has 120 GTY transceivers capable of 32.75Gbps operation and the VCU118 eval kit includes the Samtec FireFly daughtercard with cable assembly.) Samtec’s FQSFP assembly plugs mid-board into a FireFly connector on the VCU118 board. The 28Gbps signals then “fly over” the board through to the QSFP cage and loop back over the same path, where they are received back into the FPGA. The demonstration shows 28Gbps performance on all four links with zero bit errors.

 

As explained in the video, the advantage to using the Samtec FireFly flyover system is that it takes the high-speed 28Gbps signals out of the pcb-design equation, making the pcb easier to design and less expensive to manufacture. Significant savings in pcb manufacturing cost can result for large board designs, which no longer need to deal with signal-integrity issues and controlled-impedance traces for such high-speed routes.

 

Samtec has now posted the 2-minute video from OFC 2017 on YouTube and here it is:
 

Note: Martin Rowe recently published a related technical article about the Samtec FireFly system titled "High-speed signals jump over PCB traces" on the EDN.com Web site.

 

Got 90 seconds to see a 56Gbps demo with an instant 2x upgrade from 28G to 56G backplane? Good!


 

Here’s a 90-second video showing a 56Gbps Xilinx test chip with a 56Gbps PAM4 SerDes transceiver operating with plenty of SI margin and better than a 10⁻¹² bit error rate over a backplane originally designed for 28Gbps operation.

 

Note: This working demo employs a Xilinx test chip. The 56Gbps PAM4 SerDes is not yet incorporated into a product. Not yet.
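The reason PAM4 shows up at 56Gbps is straightforward: it carries two bits per symbol, so a 28Gbaud lane moves 56Gbps. Here’s a hedged sketch of the conventional Gray-coded mapping from bit pairs to four amplitude levels; it illustrates the modulation only and says nothing about how the Xilinx test chip implements it.

```python
# Hedged illustration of PAM4: two bits per symbol at 28 Gbaud gives 56 Gbps. The
# Gray mapping below (adjacent levels differ in one bit) is the conventional choice;
# it is not a description of the Xilinx test chip's implementation.
import numpy as np

GRAY_MAP = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}   # bit pair -> amplitude level
BAUD = 28e9

def pam4_encode(bits):
    pairs = bits.reshape(-1, 2)
    return np.array([GRAY_MAP[(int(a), int(b))] for a, b in pairs], dtype=float)

bits = np.random.randint(0, 2, size=1000)
symbols = pam4_encode(bits)
print(f"{bits.size} bits -> {symbols.size} symbols; "
      f"{BAUD * 2 / 1e9:.0f} Gbps at {BAUD / 1e9:.0f} Gbaud")
```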

 

 

 

 

For more information about this test chip, see “3 Eyes are Better than One for 56Gbps PAM4 Communications: Xilinx silicon goes 56Gbps for future Ethernet.”

 

 

Jump into the Kintex UltraScale+ pool with both feet with the new $2995 KCU116 Eval Kit


 

Today, Xilinx posted information about the new $2995 Kintex UltraScale+ KCU116 Eval Kit on Xilinx.com. If you’re looking to get into the UltraScale+ FPGAs’ GTY transceiver races—to 32.75Gbps—this is a great kit to start with. The kit includes:

 

  • KCU116 board with the Kintex UltraScale+ XCKU5P FPGA
  • 1x 28Gbps SFP28 electrical loopback module
  • Voucher providing access to a full seat of Vivado® Design Suite: Design Edition
    • Node-locked and device-locked to the XCKU5P
  • Accessories including USB cables, power, etc.

 

 

Here’s a nice shot of the KCU116 board from the kit’s quickstart guide:

 

 


 

 

Kintex UltraScale+ KCU116 Eval Board

 

 

 

One of the key features of this board is the set of four SFP+ optical cages there on the left. Those cages handle 25Gbps optical modules, driven of course by four of the KU5P FPGA’s GTY transceivers.

 

Take a look.

 

 

 

InnoRoute has just started shipping its TrustNode extensible, ultra-low-latency (2.5μsec) IPv6 OpenFlow SDN router as a pcb-level product. The design combines a 1.9GHz, quad-core Intel Atom processor running Linux with a Xilinx FPGA to implement the actual ultra-low-latency router hardware. (You’re not implementing that as a Linux app running on an Atom processor!) The TrustNode Router reference design features twelve GbE ports. Here’s a photo of the TrustNode SDN Router board:

 

 


 

InnoRoute TrustNode SDN Router Board with 12 GbE ports

 

 

Based on the pcb layout in the photo, it appears to me that the Xilinx FPGA implementing the 12-port SDN router is under that little black heatsink in the center of the board nearest to all of the Ethernet ports while the quad-core processor running Linux must be sitting there in the back under that great big silver heatsink with an auxiliary cooling fan, near the processor-associated USB ports and SDcard carrier.

 

InnoRoute’s TrustNode Web page is slightly oblique as to which Xilinx FPGA is used in this design but the description sort of winnows the field. First, the description says that you can customize InnoRoute’s TrustNode router design using the Xilinx Vivado HL Design Suite WebPACK Edition—which you can download at no cost—so we know that the FPGA must be a 28nm series 7 device or newer. Next, the description says that the design uses 134.6k LUTs, 269.2k flip-flops, and 12.8Mbits of BRAM. Finally, we see that the FPGA must be able to handle twelve Gigabit Ethernet ports.

 

The Xilinx FPGA that best fits this description is an Artix-7 A200.
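Here’s that deduction spelled out, with the caveat that the Artix-7 A200 totals below are approximate, from-memory figures; treat them as assumptions and check the official data sheet before relying on them.

```python
# Hedged resource comparison behind the "it's probably an Artix-7 A200" guess.
# Device totals are approximate, from-memory figures (assumptions, not a data sheet).
reported_design = {"LUTs": 134_600, "flip-flops": 269_200, "BRAM (Mb)": 12.8}
artix7_a200_approx = {"LUTs": 134_600, "flip-flops": 269_200, "BRAM (Mb)": 13.1}

for resource, used in reported_design.items():
    available = artix7_a200_approx[resource]
    print(f"{resource}: design {used:,} vs. device ~{available:,} ({used / available:.0%})")
# The reported utilization essentially fills an A200, which is also covered by the
# no-cost WebPACK edition of Vivado that the description mentions.
```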

 

You can use this TrustNode board to jump into the white-box SDN router business immediately, or at least as fast as you can mill and drill an enclosure and screen your name on the front. In fact, InnoRoute has kindly created a nice-looking rendering of a suggested enclosure design for you:

 

 


 

 

InnoRoute TrustNode SDN Router (rendering)

 

 

The router’s implementation as IP in an FPGA, along with the InnoRoute documentation and the Vivado tools, means that you can enhance the router’s design and add your special sauce to break out of the white box. (White Box Plus? White Box Premium? White Box Platinum? Hey, I’m from marketing and I’m here to help.)

 

This design enhancement and differentiation are what Xilinx All Programmable devices are especially good at delivering. You are not stuck with some ASSP designer’s concept of what your customers need. You can decide. You can differentiate. And you will find that many customers are willing to pay for that differentiation.

 

 

Note: Please contact InnoRoute directly for more information on the TrustNode SDN Router.

 

Next week at OFC 2017 in Los Angeles, Acacia Communications, Optelian, Precise-ITC, Spirent, and Xilinx will present the industry’s first interoperability demo supporting 200/400GbE connectivity over standardized OTN and DWDM. Putting that succinctly, the demo is all about packing more bits/λ, so that you can continue to use existing fiber instead of laying more.

 

Callite-C4 400GE/OTN Transponder IP from Precise-ITC instantiated in a Xilinx Virtex UltraScale+ VU9P FPGA will map native 200/400GbE traffic—generated by test equipment from Spirent—into 2x100 and 4x100 OTU4-encapsulated signals. The 200GbE and 400GbE standards are still in flux, so instantiating the Precise-ITC transponder IP in an FPGA allows the design to quickly evolve with the standards with no BOM or board changes. Concise translation: faster time to market with much less risk.

 

 


 

Callite-C4 400GE/OTN Transponder IP Block Diagram

 

 

 

Optelian’s TMX-2200 200G muxponder, scheduled for release later this year, will muxpond the OTU4 signals into 1x200Gbps or 2x200Gbps DP-16QAM using Acacia Communications’ CFP2-DCO coherent pluggable transceiver.

 

 

The Optelian and Precise-ITC exhibit booths at OFC 2017 are 4139 and 4141 respectively.

 

 

 

Next week at the OFC Optical Networking and Communication Conference & Exhibition in Los Angeles, Xilinx will be in the Ethernet Alliance booth demonstrating the industry’s first, standard-based, multi-vendor 400GE network. A 400GE MAC and PCS instantiated in a Xilinx Virtex UltraScale+ VU9P FPGA will be driving a Finisar 400GE CFP8 optical module, which in turn will communicate with a Spirent 400G test module over a fiber connection.

 

In addition, Xilinx will be demonstrating:

 

 


 

 

  • The world’s first complete FlexE 1.0 solution showcasing bonding, sub-rating and channelization on UltraScale+ FPGAs.

 

  • LLDP packet snooping on transport line cards to allow SDN controllers to build network topology maps, which aid data-center network automation.

 

  • Optical technology abstraction in DCI transport.

 

If you’re visiting OFC, be sure to stop by the Xilinx booth (#1809).

 

 

 

Berten DSP’s GigaX API for the Xilinx Zynq SoC creates a high-speed, 200Mbps full-duplex communications channel between a GbE port and the Zynq SoC’s PL (programmable logic) through an attached SDRAM buffer and an AXI DMA controller IP block. Here’s a diagram to clear up what’s happening:

 

 

Berten DSP GigaX API for the Zynq SoC: block diagram

 

 

The software API implements IP filtering and manages TCP/UDP headers, which help you implement a variety of hardware-accelerated Ethernet systems including Ethernet bridges, programmable network nodes, and network offload appliances. Here’s a performance curve illustrating the kind of throughput you can expect:

 

 

Berten DSP GigaX API throughput

 

 

Please contact Berten DSP directly for more information about the GigaX API.

 

 

 

 

As the BittWare video below explains, CPUs simply cannot process 100GbE packet traffic without hardware acceleration. BittWare’s new StreamSleuth, to be formally unveiled at next week’s RSA Conference in San Francisco (Booth S312), adroitly handles blazingly fast packet streams thanks to a hardware assist from an FPGA. And as the subhead in the title slide of the video presentation says, StreamSleuth lets you program its FPGA-based packet-processing engine “without the hassle of FPGA programming.”

 

(Translation: you don’t need Verilog or VHDL proficiency to get this box working for you. You get all of the FPGA’s high-performance goodness without the bother.)

 

 

 

BittWare StreamSleuth

 

 

That said, as BittWare’s Network Products VP & GM Craig Lund explains, this is not an appliance that comes out of the box ready to roll. You need (and want) to customize it. You might want to add packet filters, for example. You might want to actively monitor the traffic. And you definitely want the StreamSleuth to do everything at wire-line speeds, which it can. “But one thing you do not have to do,” says Lund, “is learn how to program an FPGA.” You still get the performance benefits of FPGA technology—without the hassle. That means that a much wider group of network and data-center engineers can take advantage of BittWare’s StreamSleuth.

 

As Lund explains, “100GbE is a different creature” than prior, slower versions of Ethernet. Servers cannot directly deal with 100GbE traffic and “that’s not going to change any time soon.” The “network pipes” are now getting bigger than the server’s internal “I/O pipes.” This much traffic entering a server this fast clogs the pipes and also causes “cache thrash” in the CPU’s L3 cache.

 

Sounds bad, doesn’t it?

 

What you want is to reduce the network traffic of interest down to something a server can look at. To do that, you need filtering. Lots of filtering. Lots of sophisticated filtering. More sophisticated filtering than what’s available in today’s commodity switches and firewall appliances. Ideally, you want a complete implementation of the standard BPF/pcap filter language running at line rate on something really fast, like a packet engine implemented in a highly parallel FPGA.
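As a purely software illustration of what such a filter does (StreamSleuth compiles this kind of rule into FPGA logic that runs at line rate), here’s roughly what a pcap-style expression such as "tcp and dst port 443" boils down to. The parsing below is a hypothetical sketch, not BittWare’s implementation and not the BPF compiler.

```python
# Software illustration only: the kind of header match a BPF/pcap filter such as
# "tcp and dst port 443" compiles down to. StreamSleuth does this in FPGA logic at
# 100GbE line rate; this Python version just shows which fields are examined.
import struct

def matches_tcp_dst_port(frame: bytes, port: int) -> bool:
    """Return True if an Ethernet frame carries IPv4/TCP with the given destination port."""
    if len(frame) < 14 + 20 + 4:
        return False
    ethertype = struct.unpack_from("!H", frame, 12)[0]
    if ethertype != 0x0800:                      # IPv4 only in this sketch
        return False
    version_ihl, = struct.unpack_from("!B", frame, 14)
    ihl = (version_ihl & 0x0F) * 4               # IPv4 header length in bytes
    protocol = frame[14 + 9]
    if protocol != 6:                            # 6 = TCP
        return False
    src_port, dst_port = struct.unpack_from("!HH", frame, 14 + ihl)
    return dst_port == port

# Usage (with a captured or hand-built frame held in frame_bytes):
# keep = matches_tcp_dst_port(frame_bytes, 443)
```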

 

The same thing holds true for attack mitigation at 100GbE line rates. Commodity switching hardware isn’t going to do this for you at 100GbE (10GbE yes, but 100GbE, “no way”), and you can’t do it in software at these line rates. “The solution is FPGAs,” says Lund, and BittWare’s StreamSleuth with FPGA-based packet processing gets you there now.

 

Software-based defenses cannot withstand Denial of Service (DoS) attacks at 100GbE line rates. FPGA-accelerated packet processing can.

 

So what’s the FPGA inside of the BittWare StreamSleuth doing? It comes preconfigured for packet filtering, load balancing, and routing. (“That’s a Terabit router in there.”) To go beyond these capabilities, you use the BPF/pcap language to program your requirements into the StreamSleuth’s 100GbE packet processor using a GUI or APIs. That packet processor is implemented with a Xilinx Virtex UltraScale+ VU9P FPGA.

 

Here’s what the guts of the BittWare StreamSleuth look like:

 

 

Inside the BittWare StreamSleuth

 

And here’s a block diagram of the StreamSleuth’s packet processor:

 

 

 

StreamSleuth packet-processor block diagram

 

 

The Virtex UltraScale+ FPGA resides on a BittWare XUPP3R PCIe board. If that rings a bell, perhaps you read about that board here in Xcell Daily last November. (See “BittWare’s UltraScale+ XUPP3R board and Atomic Rules IP run Intel’s DPDK over PCIe Gen3 x16 @ 150Gbps.”)

 

Finally, here’s the just-released BittWare StreamSleuth video with detailed use models and explanations:

For more information about the StreamSleuth, contact BittWare directly or go see the company’s StreamSleuth demo at next week’s RSA conference. For more information about the packet-processing capabilities of Xilinx All Programmable devices, click here. And for information about the new Xilinx Reconfigurable Acceleration Stack, click here.

 

 

 

 

 

 

Accolade’s newly announced ATLAS-1000 Fully Integrated 1U OEM Application Acceleration Platform pairs a Xilinx Kintex UltraScale KU060 FPGA on its motherboard with an Intel x86 processor on a COM Express module to create a network-security application accelerator. The ATLAS-1000 platform integrates Accolade’s APP (Advanced Packet Processor), instantiated in the Kintex UltraScale FPGA, which delivers acceleration features for line-rate packet processing including lossless packet capture, nanosecond-precision packet timestamping, packet merging, packet filtering, flow classification, and packet steering. The platform accepts four 10G SFP+ or two 40G QSFP pluggable optical modules. Although the ATLAS-1000 is designed as a flow-through security platform, especially for bump-in-the-wire applications, there’s also 1Tbyte worth of on-board local SSD storage.

 

 


 

Accolade Technology's ATLAS-1000 Fully Integrated 1U OEM Application Acceleration Platform

 

 

 

Here’s a block diagram of the ATLAS-1000 platform:

 

 

ATLAS-1000 platform block diagram

 

All network traffic enters the FPGA-based APP for packet processing. Packet data is then selectively forwarded to the x86 CPU COM Express module depending on the defined application policy.
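As one hedged example of what “flow classification” and “packet steering” mean in practice (my sketch, not Accolade’s APP algorithm): hash the 5-tuple symmetrically so both directions of a flow land in the same bucket, then use the bucket to choose a destination queue or core.

```python
# Hedged sketch (not Accolade's algorithm) of flow classification for packet steering:
# hash the 5-tuple symmetrically so both directions of a TCP/UDP flow land in the
# same bucket, then use the bucket to pick a destination queue or CPU core.
import hashlib

NUM_QUEUES = 8

def steer(src_ip, dst_ip, src_port, dst_port, proto):
    a = (src_ip, src_port)
    b = (dst_ip, dst_port)
    lo, hi = sorted([a, b])                      # order-independent: A->B and B->A match
    key = f"{proto}|{lo}|{hi}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % NUM_QUEUES

# Both directions of the same flow steer to the same queue:
assert steer("10.0.0.1", "10.0.0.2", 1234, 443, 6) == steer("10.0.0.2", "10.0.0.1", 443, 1234, 6)
print(steer("10.0.0.1", "10.0.0.2", 1234, 443, 6))
```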

 

 

Please contact Accolade Technology directly for more information about the ATLAS-1000.

 

 

 

Aquantia has paired its Ethernet PHY—capable of operating at 10Gbps over 100m of Cat 6a cable (or at 5Gbps down to 100Mbps over 100m of Cat 5e cable)—with a Xilinx Kintex-7 FPGA, creating a universal Gigabit Ethernet component with extremely broad capabilities. Here’s a block diagram of the new AQLX107 device:

 

 

Aquantia AQLX107 block diagram

 

 

This Aquantia device gives you a space-saving, one-socket solution for a variety of Ethernet designs including controllers, protocol converters, and anything-to-Ethernet bridges.

 

Please contact Aquantia for more information about this unique Ethernet chip.

 

 

About the Author
Steve Leibson is the Director of Strategic Marketing and Business Planning at Xilinx. He started as a system design engineer at HP in the early days of desktop computing, then switched to EDA at Cadnetix, and subsequently became a technical editor for EDN Magazine. He's served as Editor in Chief of EDN Magazine, Embedded Developers Journal, and Microprocessor Report. He has extensive experience in computing, microprocessors, microcontrollers, embedded systems design, design IP, EDA, and programmable logic.