
Samtec introduces 140Gbps Optical FMC module based on two 14Gbps FireFly micro-flyover optical modules

by Xilinx Employee ‎07-24-2017 03:58 PM - edited ‎07-24-2017 04:06 PM (323 Views)

 

Perhaps you’ve been intrigued by Samtec’s FireFly optical micro-flyover communications technology—which can carry as many as ten 14Gbps serial data streams over low-cost optical ribbon cable—but you didn’t want to try out the technology by designing the FireFly sites into a board. Well, Samtec has just fixed that problem for you by introducing its VITA 57.1-compliant, 14Gbps FireFly FMC Module and Development Kit. The module delivers 140Gbps of full-duplex bandwidth distributed over two 10-fiber, multi-mode optical ribbon cables connected to two on-board FireFly optical modules, which link an FMC HPC connector to an industry-standard, 24-fiber MTP/MPO optical connector. Snap one into an appropriate Xilinx dev board, for example, and you have an instant 140Gbps, full-duplex optical link.

 

 

Samtec 140Gbps FireFly Optical FMC Module.jpg
 

 

14Gbps FireFly FMC Module with 140Gbps of full-duplex bandwidth

 

 

 

This type of interconnect pairs well, for example, with the 16.3Gbps GTH transceivers found on various Xilinx All Programmable UltraScale devices including Virtex UltraScale, Kintex UltraScale, and Kintex UltraScale+ FPGAs and Zynq UltraScale+ MPSoCs. Dev boards for these devices feature FMC connectors compatible with the 14Gbps FireFly FMC Module.

 

For more information about the FireFly Optical Flyover system, see:

 

 

 

 

 

 

Here’s an inspiring short video from National Instruments (NI) where educators from Georgia Tech, the MIT Media Lab, the University of Manchester, and the University of Waterloo discuss using a variety of NI products to inspire students, pique their curiosity, and foster deeper understanding of many complex engineering concepts while thoroughly disguising all of it as fun. Among the NI products shown in this 2.5-minute video are several products based on Xilinx All Programmable devices including:

 

 

 

 

Here’s the video:

 

 

 

 

 

 

For more information about these Xilinx-based NI products, see:

 

 

 

 

 

 

 

 

 

The latest “Powered by Xilinx” video, published today, provides more detail about the Perrone Robotics MAX development platform for developing all types of autonomous robots—including self-driving cars. MAX is a set of software building blocks for handling many types of sensors and controls needed to develop such robotic platforms.

 

Perrone Robotics has MAX running on the Xilinx Zynq UltraScale+ MPSoC and relies on that heterogeneous All Programmable device to handle the multiple, high-bit-rate data streams from complex sensor arrays that include lidar systems and multiple video cameras.

 

Perrone is also starting to develop with the new Xilinx reVISION stack and plans to both enhance the performance of existing algorithms and develop new ones for its MAX development platform.

 

Here’s the 4-minute video:

 

 

 

By Adam Taylor

 

Connecting the low-cost, Zynq-based Avnet MiniZed dev board to our WiFi network allows us to transfer files between the board and our development environment quickly and easily. I will use WinSCP—a free, open-source SFTP, FTP, WebDAV, and SCP client for Windows—to do this because it provides an easy-to-use, graphical method to upload files.

 

If we have power cycled or reset our MiniZed between enabling the WiFi as in the previous blog and connecting to it using WinSCP, we will need to rerun the WiFi setup script. LED D10 on the MiniZed board will be lit when WiFi is enabled. Once we are connected to the WiFi network, we can use WinSCP to log in remotely. In the example below, the MiniZed had the IP address 192.168.1.159 on my network. The username and password are the same as for logging in over the terminal; both are set to root.

 

 

Image1.jpg 

 

Connecting the MiniZed to the WiFi network

 

 

Once we are connected with WinSCP, we can see the file systems on both our host computer and the MiniZed. We can simply drag and drop files between the two file systems to upload or download files. It can’t get much easier than this until we develop mind-reading capabilities for Zynq-based products. What we need now is a simple program we can use to prove the setup.

 

 

 

Image2.jpg

 

WinSCP connected and able to upload and download files

 

 

To create a simple program, we can use SDK targeting the Zynq SoC’s ARM Cortex-A9 processor. There is also a “hello world” program template that we can use as the basis for our application. Within SDK, create a new project (File->New->Application Project) as shown in the images below; this will create a simple “hello world” application.

 

 

 Image3.jpg

 

 

Image4.jpg 

 

 

Opening the helloworld.c file within the created application allows you to customize the program if you so desire.
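For reference, the generated helloworld.c is only a few lines long. From memory of the SDK template, it looks roughly like this (the exact contents may differ slightly between SDK versions):

#include <stdio.h>
#include "platform.h"
#include "xil_printf.h"

int main()
{
    init_platform();            /* set up caches and the UART */

    print("Hello World\n\r");   /* appears in the terminal when the ELF runs */

    cleanup_platform();
    return 0;
}

Changing the string passed to print() is the easiest customization to prove that you are running your own build.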

 

Once you are happy with your customization, your next step is to build the project, which will result in an ELF file. We can then upload this ELF file to the MiniZed using WinSCP and use the terminal to run our first example. Make sure to set the permissions for read, write, and execute when uploading the file to the MiniZed dev board.

 

Within the terminal window, we can then run the application by executing it using the command:

 

./<project_name>.elf

 

When I executed this command, I received the following in response that proved everything was working as expected:

 

 

Image5.jpg 

 

 

Once we have this simple program running successfully, we can create more complex programs for various applications, including ones that use the MiniZed dev board’s WiFi networking capabilities. To do this we need to use sockets, which we will explore in a future blog.

 

Having gotten the MiniZed board’s WiFi up and running and loaded a simple “hello world” program, we now turn our attention to the board’s Bluetooth wireless capability, which we have not yet enabled. We enable Bluetooth networking in a similar manner to WiFi networking. Navigate to /usr/local/bin/ and run an ls command. In the results, you will see not only the script we used to turn on WiFi (WIFI.sh) but also a script file named BT.sh for turning on Bluetooth. Running this script turns on Bluetooth. You will see blue LED D9 illuminate on the MiniZed board when Bluetooth is enabled, and within the console window you will notice that the Bluetooth feature configures itself and starts scanning. If there is a discoverable Bluetooth device in the area, then you will see it listed. In the example below, you can see my TV.

 

 

Image7.jpg 

 

 

If we have another device that we wish to communicate with, re-running the same script will cause an issue. Instead, we use the command hcitool scan:

 

 

 

Image8.jpg 

 

 

Running this command after making my mobile phone discoverable resulted in my Samsung S6 Edge phone being added to the list of Bluetooth devices.

 

Now we know how to enable both WiFi and Bluetooth on the MiniZed board, how to write our own program, and how to upload it to the MiniZed.

 

 

In future blogs, we will look at how we can transfer data using both the Bluetooth and WiFi in our applications.

 

 

Code is available on Github as always.

 

 

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E Book here
  • First Year Hardback here.

 

 

MicroZed Chronicles hardcopy.jpg 

  

 

  • Second Year E Book here
  • Second Year Hardback here

 

MicroZed Chronicles Second Year.jpg 

 

 

 

 

Xilinx 7 series FPGAs have 50-pin I/O banks with one common supply voltage for all 50 pins. The smaller Spartan-7 FPGAs have 100 I/O pins in two I/O banks, so it might be convenient in some smaller designs (or even some not-so-small designs) to combine the I/O for configuration and DDR memories into one FPGA I/O bank (plus the dedicated configuration bank 0) if possible so that the remaining I/O bank can operate at a different I/O voltage.

 

It turns out, you can do this with some MIG (Memory Interface Generator) magic, a little Vivado tool fiddling, and a simple level translator for the Flash memory’s data lines.

 

Application note XAPP1313 titled “Spartan-7 FPGA Configuration with SPI Flash and Bank 14 at 1.35V” shows you how to do this with a 1.8V Quad SPI Flash memory and 1.35V DDR3L SDRAM. Here’s a simplified diagram of what’s going on:

 

 

XAPP1313 Figure 1.jpg 

 

 

 

The advantage here is that you don’t need to move up to a larger FPGA to get another I/O bank.

 

For step-by-step instructions, see XAPP1313.

 

 

 

 

 

If you’re developing FPGA-based designs using the Spartan-6 family and would like to rehost on Windows 10, keep reading. ISE 14.7 now runs on Windows 10. You’ll need to download ISE 14.7 for Spartan-6 devices on Windows 10 using the instructions in this 3-minute video, which walks you through the process:

 

 

 

 

 

 

 

Green Hills Software has announced that it has been selected by a US supplier of guidance and navigation equipment for commercial and military aircraft to provide its DO-178B Level A-compliant real-time multicore operating system for a next generation of equipment based on the Xilinx Zynq UltraScale+ MPSoC. The Zynq UltraScale+ MPSoC’s four 64-bit ARM Cortex-A53 processor cores will run Green Hills Software's INTEGRITY-178 Time-Variant Unified Multi Processing (tuMP) safety-critical operating system. The Green Hills INTEGRITY-178 tuMP RTOS has been shipping to aerospace and defense customers since 2010. INTEGRITY-178 tuMP supports the ARINC-653 Part 1 Supplement 4 standard (including section 2.2.1 – SMP operation) as well as the Part 2 optional features, including Sampling Port Data Structures, Sampling Port Extensions, Memory Blocks, Multiple Module Schedules, and File System, and offers advanced options such as a DO-178B Level A-compliant network stack.

 

Linux provides a number of mechanisms that allow you to interact with FPGA bitstreams without using complex kernel device drivers. This feature allows you to develop and test your programmable hardware using simple Linux user-space applications. This free training Webinar by Doulos will review your options and examine their pros and cons.

 

Webinar highlights:

 

  • Find out how programmable logic is represented in the device tree
  • Explore the Linux user space mechanisms for FPGA I/O
  • Understand the best use of these methods

 

The concepts will be explored in the context of Xilinx Zynq SoCs and Zynq UltraScale+ MPSoCs.
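As a flavor of what such user-space access can look like, here is a minimal sketch using one common mechanism: mapping a memory-mapped peripheral in the programmable logic through /dev/mem. The base address and register layout are assumptions for illustration only; a real design would take them from the device tree, and the webinar covers more robust options as well.

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define PL_PERIPH_BASE 0x41200000u   /* hypothetical AXI peripheral address */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *regs = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, PL_PERIPH_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    regs[0] = 0xA5;                                   /* write register 0 */
    printf("Register 0 reads back 0x%08x\n", (unsigned)regs[0]);

    munmap((void *)regs, 0x1000);
    close(fd);
    return 0;
}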

 

Doulos’ Senior Member of Technical Staff Simon Goda will present this webinar on August 4 and will moderate live Q&A throughout the broadcast. There are two Webinar broadcasts to accommodate different time zones.

 

Register here.

 

 

 

 

 

Earlier this year, the University of New Hampshire’s InterOperability Laboratory (UNH-IOL) gave a 25G and 50G Plugfest and everybody came to the party to test compatibility of their implementations with each other. The long list of partiers included:

 

 

  • Arista
  • Amphenol
  • Cisco
  • Dell
  • Delta
  • HPE
  • Hitachi
  • Intel
  • Ixia
  • Marvell
  • Mellanox
  • Microsoft
  • Netronome
  • Qlogic
  • Spirent
  • Teledyne-LeCroy
  • Xilinx

 

 

You can find these companies’ names, the equipment they tested, and the speeds they tested on the 25 Gigabit Ethernet Consortium’s Web site’s Integrator’s List. From that site:

 

“The 25 Gigabit Ethernet Consortium is an open organization to all third parties who wish to participate as members to enable the transmission of Ethernet frames at 25 or 50 Gigabit per second (Gbps) and to promote the standardization and improvement of the interfaces for applicable products.”

 

From the Consortium’s press release about the plugfest:

 

“The testing demonstrated a high degree of multi-vendor interoperability and specification conformance.”

 

 

For its part, Xilinx tested its 10/25G High-Speed Ethernet LogiCORE IP and 40/50G High-Speed Ethernet LogiCORE Subsystem IP using the Xilinx VCU108 Eval Kit based on a Virtex UltraScale XCVU095-2FFVA2104E FPGA over copper using different cable lengths. Consortium rules do not permit me to tell you which companies interoperated with each other, but I can say that Xilinx tested against every company on the above list. I’m told that the Xilinx 25G/50G receiver “did well.”

 

 

 

Xilinx VCU108 Eval Kit.jpg 

 

 

Xilinx Virtex UltraScale VCU108 Eval Kit

 

 

 

 

 

 

Last month, I wrote about Perrone Robotics’ autonomous driving platform based on the Zynq UltraScale+ MPSoC. (See “Linc the autonomous Lincoln MKZ running Perrone Robotics' MAX AI takes a drive in Detroit without puny humans’ help” and “Perrone Robotics builds [Self-Driving] Hot Rod Lincoln with its MAX platform, on a Zynq UltraScale+ MPSoC.”) That platform runs on a controller box supplied by iVeia. In the 2-minute video below, iVeia’s CTO Mike Fawcett describes the attributes of the Zynq UltraScale+ MPSoC that make it a superior implementation technology for autonomous driving platforms. The Zynq UltraScale+ MPSoC’s immense, heterogeneous computing power, supplied by six ARM processors plus programmable logic and a few more programmable resources, flexibly delivers the monumental amount of processing required for vehicular sensor fusion and real-time perception processing while consuming far less power and generating far less heat than competing solutions involving CPUs or GPUs.

 

Here’s the video:

 

 

 

 

 

 

If you have read Adam Taylor’s 200+ MicroZed Chronicles here in Xcell Daily, you already know Adam to be an expert in the design of systems based on programmable logic, Zynq SoCs, and Zynq UltraScale+ MPSoCs. But Adam has significant expertise in the development of mission-critical systems based on his aerospace engineering work. He gave a talk about this topic at the recent FPGA Kongress held in Munich and he’s kindly re-recorded his talk, combined with slides in the following 67-minute video.

 

Adam spends the first two-thirds of the video talking about the design of mission-critical systems in general and then spends the rest of the time talking about Xilinx-specific mission-critical design including the design tools and the Xilinx isolation design flow.

 

Here’s the video:

 

 

 

 

 

A 5G NR (New Radio) Progress Report from the NI 5G Innovation Lab

by Xilinx Employee ‎07-17-2017 11:25 AM - edited ‎07-17-2017 11:28 AM (1,719 Views)

 

There’s a lot of 5G research already taking place at National Instruments’ (NI’s) new 5G Innovation Lab located in Austin, Texas (announced in May) and RCR Wireless News’ Martha DeGrasse recently published a report about the lab on the publication’s Web site. In this 5G Innovation Lab, NI’s proprietary T&M equipment and software are being used by carriers, chipmakers, and equipment vendors including AT&T, Verizon, Ericsson, and Intel to develop and test 5G hardware and protocols.

 

One of the research projects DeGrasse describes involves Verizon’s 5GTF—V5GTF, the Verizon 5G Technology Forum—which is developing a 28/39GHz wireless communications platform designed to replace fiber in fixed-wireless applications. There’s a running demo of this technology in the NI 5G Research Lab that uses a 28GHz link to convey a 3Gbps digital stream between a simulated basestation and a simulated fixed-location user device. Here’s a brand new, 2-minute video of a demo:

 

 

 

 

The equipment used in this V5GTF demo includes NI’s mmWave Transceiver System, which includes FPGA processing modules based on Xilinx Virtex-7 and Kintex-7 FPGAs. The FPGA processing modules handle the complex, still-in-development modulation and control protocols being developed for mmWave communications.

 

Adam Taylor’s MicroZed Chronicles, Part 207: Setting up MiniZed WIFI and Bluetooth Connectivity

by Xilinx Employee ‎07-17-2017 10:34 AM - edited ‎07-18-2017 02:54 PM (2,665 Views)

 

By Adam Taylor

 

So far on our journey, every Zynq SoC and Zynq UltraScale+ MPSoC we have looked at has had two or more ARM microprocessor cores. However, I recently received the new Avnet MiniZed dev board based on a Zynq Z-7007S SoC. This board is really exciting for several reasons. It is the first board we’ve looked at that’s based on a single-core Zynq SoC. (It has one ARM Cortex-A9 processor core that runs as fast as 667MHz in the speed grade used on the board.) And like the snickerdoodle, it comes with support for WiFi and Bluetooth. This is a really interesting board and it sells for a mere $89 in the US.

 

Xilinx designed the single-core Zynq for cost-optimized and low-power applications. In fact, we have been using just a single core for most of the Zynq-based applications we have looked at over this series unless we have been running Linux, exploring AMP, or looking at OpenAMP. One processor core is still sufficient for many, many applications.

 

The MiniZed dev board itself comes with 512Mbytes of DDR3L SDRAM, 128Mbits of QSPI flash memory, and 8Gbytes of eMMC flash memory. When it comes to connectivity, in addition to the wireless links, the MiniZed board also provides two PMOD interfaces and an Arduino/ChipKit Shield connector. It also provides an on-board temperature sensor, accelerometer and microphone.

 

Here’s a block diagram of the MiniZed dev board:

 

 

Image1.jpg

 

 

 

Its connectivity, capabilities, and low cost make the MiniZed board ideal for a range of applications, especially those that fall within the Internet of Things and Industrial Internet of Things domains.

 

When we first open the box, the MiniZed board comes preinstalled with a PetaLinux image loaded into the QSPI flash memory. This has a slight limitation: the QSPI flash is not large enough to host a PetaLinux image with both Bluetooth and WiFi stacks. Only the WiFi stack is present in the out-of-the-box condition. If we want to use Bluetooth—and we do—we need to connect over WiFi and upload a new boot loader so that we can load a full-featured PetaLinux image from the eMMC flash. The first challenge, of course, is to connect over WiFi. We will look at that in the rest of this blog.

 

The first step is to download the demo application files from the MiniZed Website. This provides us with the following files which we need to use in the demo:

 

  • A .bin boot loader used to load the boot image from eMMC flash
  • A .ub PetaLinux image with the Bluetooth stack
  • A .conf configuration file (wpa_supplicant.conf) where we can define the WiFi SSID and key

 

To correctly set up the MiniZed for our future adventures, we will also need a USB memory stick. On our host PC, we need to open the file wpa_supplicant.conf using a program like Notepad++. We then add our network’s SSID and PSK so that the MiniZed can connect to our network. Once this is done, we save the file to the root of the USB memory stick.

 

 

Image2.jpg

 

Setting the WIFI SSID and PSK

 

 

The next step is to power on the MiniZed board and connect to a PC using a USB cable from the computer’s USB port to the MiniZed board’s JTAG UART connector. Connect a second USB cable from the MiniZed’s auxiliary input connector for power. We need to do this because of the USB port’s current supply limits. Without the auxiliary USB cable, we can’t be sure that the memory stick can be powered correctly when plugged into the MiniZed board.

 

Press the MiniZed board’s reset button and you should see the Linux OS boot in your terminal screen. Once booted, log in with the password and username of root.

 

We can then plug in the USB memory stick. The MiniZed board should discover the USB memory stick and you should see it reported in the terminal window:

 

 

Image3.jpg

 

Memory Stick detection

 

 

 

To log on to our WiFi network, we need to copy this file to the eMMC. To do this, we issue the following commands via the terminal.

 

Image4.jpg

 

These commands change the directory to the eMMC and erase anything within it before changing to the USB memory stick and listing its contents, where we should see our wpa_supplicant.conf file.

 

The next step is to copy the file from the USB memory stick to the eMMC and check that it has been copied correctly:

 

Image5.jpg 

 

We are then ready to start the WiFi, which we can do by navigating to:

 

 

Image7.jpg 

 

You should see this:

 

 

Image6.jpg 

 

 

 

Now that we are connected to the WiFi network, we can enable Bluetooth and transfer files wirelessly, which we will look at next time.

 

 

 

Code is available on Github as always.

 

 

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E Book here
  • First Year Hardback here.

 

 

MicroZed Chronicles hardcopy.jpg 

  

 

  • Second Year E Book here
  • Second Year Hardback here

 

 

MicroZed Chronicles Second Year.jpg 

 

Curtiss-Wright’s Rugged, Pre-Architected, Mission-Specific Computer System Targets SDR, EW Applications

by Xilinx Employee ‎07-14-2017 02:48 PM - edited ‎07-14-2017 02:53 PM (1,899 Views)

 

Curtiss-Wright has just introduced the first member in a new family of rugged, pre-integrated, mission-specific management computer systems based on its line of COTS modules. This first family member targets SDR (software-defined radio) and EW (electronic warfare) applications, so it packs as many as four of the company’s VPX3-530 High-Speed Transceiver modules with either four 12-bit, 2Gsamples/sec or two 12-bit, 4Gsamples/sec ADCs and two 14-bit, 2.8Gsamples/sec (5.6Gsamples/sec with interpolation) DACs controlled by a Xilinx Virtex-7 690T FPGA that also provides user-programmable, real-time signal processing. An Intel-based processor board, clock generator module, and power supply complete the system’s electronic hardware complement, and all modules are housed in a rugged MPMC-9354 chassis that looks like this:

 

 

 

Curtiss-Wright MPMC-9354 Rugged Chassis.jpg 

 

 

Curtiss-Wright MPMC-9354 Rugged Chassis

 

 

The system is fully qualified to MIL-STD-704F, MIL-STD-810, and MIL-STD-461 testing.

 

Here’s what the inside of the chassis looks like:

 

 

Curtiss-Wright MPMC-9354 Interior Detail.jpg 

 

 

The VPX3-530 High-Speed Transceiver module supports phase-coherent ADC sampling and DAC output across multiple cards when used with the XCLK1 synchronous clock source. Here’s a photo of the VPX3-530 module, prominently showing the Virtex-7 690T FPGA:

 

 

Curtiss-Wright VPX-530 High-Speed Transceiver Module.jpg

 

Curtiss-Wright VPX3-530 High-Speed Transceiver module

 

 

 

And here’s a block diagram of the VPX3-530 module showing the FPGA, ADCs, DACs, and the module’s two banks of DDR3 SDRAM:

 

 

Curtiss-Wright VPX-530 High-Speed Transceiver Module Block Diagram.jpg

 

 

The digital section is essentially the Virtex-7 FPGA, which implements all of the module’s logic, plus the SDRAM and the non-volatile memory.

 

 

 

Today, Mentor announced that it is making the Android 6.0 (Marshmallow) OS available for the Xilinx Zynq UltraScale+ MPSoC, along with pre-compiled binaries for the ZCU102 Eval Kit (currently on sale for half off, or $2495). This Android implementation includes the Mentor Android 6.0 board support package (BSP) built on the Android Open Source Project. The Android software is available for immediate, no-charge download directly from the Mentor Embedded Systems Division.

 

You need to file a download request with Mentor to get access.

 

Maybe you thought that VadaTech’s AMC597 300MHz-to-6GHz Octal Versatile Wideband Transceiver, which connects four AD9371 chips over JESD204B high-speed serial interfaces with a Xilinx Kintex UltraScale KU115 FPGA (the UltraScale DSP monster with 5520 DSP48E2 slices) and three banks of DDR4 SDRAM (two 8Gbyte banks and one 4Gbyte bank for a total of 20Gbytes), was cool but you’re not developing radios. Well, VadaTech now has another way for you to get a Kintex UltraScale KU115 FPGA on an AMC module. It’s called the AMC583 FPGA Dual FMC+ Carrier and it teams the UltraScale DSP monster with an NXP (formerly Freescale) QorIQ P2040 quad-core PowerPC processor and 8Gbytes of DDR4 SDRAM in two separate banks. The QorIQ processor and the UltraScale FPGA communicate over a high-speed 4-lane PCIe interface as well as the processor’s local bus. Two on-board FMC+ sites connect to the Kintex UltraScale FPGA and permit easy expansion.

 

Here’s a block diagram of VadaTech’s AMC583:

 

 

VadaTech AMC583 Block Diagram.jpg 

 

VadaTech AMC583 Block Diagram

 

 

 

If you need high-speed analog I/O capabilities, VadaTech has also just announced the FMC250, an FMC mezzanine module with two 12-bit 2.6Gsamples/sec ADCs and one 16-bit, 12Gsamples/sec DAC.

 

 

CCIX 3D bug.jpg 

CCIX (the “cache-coherent interconnect for accelerators,” pronounced “see-six”), is a new, high-speed, chip-to-chip I/O protocol being developed by the CCIX Consortium. It’s based on the ubiquitous PCIe protocol, which means it can leverage PCIe’s existing, low-cost hardware infrastructure but it can go faster—a lot faster. While PCIe 4.0 (just starting to roll out) operates at a maximum rate of 16GTransfers/sec—that’s about 64Gbytes/sec bidirectionally on a 16-lane link—CCIX takes the signaling to 25GTransfer/sec, which approaches 100Gbytes/sec bidirectionally over the same 16 lanes. For compatibility, CCIX connections initialize as PCIe connections, thus maintaining PCIe protocol compatibility, but then permit a bootstrap mechanism where two connected CCIX devices can agree to stomp on the I/O accelerator pedal for a 56% speed boost using the same hardware.
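Here's the back-of-the-envelope arithmetic behind those numbers, ignoring the small line-coding overhead (each transfer moves one bit per lane):

16 GT/s x 16 lanes = 256 Gbps = 32 GB/s per direction, or about 64 GB/s bidirectionally
25 GT/s x 16 lanes = 400 Gbps = 50 GB/s per direction, or about 100 GB/s bidirectionally
25 / 16 = 1.5625, which is where the roughly 56% speed boost on the same lanes comes from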

 

All of this and more is explained in a new, easy-to-read technical bulletin posted by Synopsys titled “An Introduction to CCIX.”

 

Synopsys is a CCIX Contributor and Xilinx is a CCIX Promoter—both members of the CCIX Consortium at different membership levels. Xilinx is intensely interested in I/O protocols like CCIX to permit ever-faster communications between fast processor arrays and even faster FPGA-based accelerators and is looking forward to the first products with CCIX interconnect sampling later this year.

 

For more information about CCIX, see:

 

 

 

 

 

 

 

 

 

 

 

I’ve written about SDRs (software-defined radios) built with Analog Devices’ AD9371 dual RF transceivers and Xilinx All Programmable devices before but never on the scale of VadaTech’s AMC597 300MHz-to-6GHz Octal Versatile Wideband Transceiver, which connects four AD9371 chips over JESD204B high-speed serial interfaces with a Xilinx Kintex UltraScale KU115 FPGA (the UltraScale DSP monster with 5520 DSP48 slices) and three banks of DDR4 SDRAM (two 8Gbyte banks and one 4Gbyte bank for a total of 20Gbytes). The whole system fits into an AMC form factor. Here’s a photo:

 

 

Vadatech AMC597.jpg

 

VadaTech AMC597 300MHz-to-6GHz Octal Versatile Wideband Transceiver

 

 

 

It’s essentially a solid block of raw SDR capability jammed into a compact, 55W (typ) package. This programmable powerhouse has the RF and processing capabilities you need to develop large, advanced digital radio systems using development tools from VadaTech, Analog Devices, and Xilinx. The AMC597 is compatible with Analog Devices’ design tools for AD9371; you can develop your own FPGA-based processing configuration with Xilinx’s Vivado Design Suite and System Generator for DSP; and VadaTech supplies reference designs with VHDL source code, documentation, and configuration binary files.

 

 

Adam Taylor’s MicroZed Chronicles, Part 206: Software for the Digilent Nexys Video Project

by Xilinx Employee ‎07-12-2017 10:21 AM - edited ‎07-12-2017 11:04 AM (2,929 Views)

 

By Adam Taylor

 

With the MicroBlaze soft processor system up and running on the Nexys Video Artix-7 FPGA Trainer Board, we need some software to generate a video output signal. In this example, we are going to use the MicroBlaze processor to generate test patterns. To do this, we’ll write data into the Nexys board’s DDR SDRAM so that the VDMA can read this data and output it over HDMI.

 

The first thing we will need to do in the software is define the video frames, which are going to be stored in memory and output by the VDMA. To do this, we will define three frames within memory. We will define each frame as a two-dimensional array:

 

u8 frameBuf[DISPLAY_NUM_FRAMES][DEMO_MAX_FRAME];

 

Where DISPLAY_NUM_FRAMES is set to 3 and DEMO_MAX_FRAME is set to 1920 * 1080 * 3. This takes into account the maximum frame resolution, and the final multiplication by 3 accommodates each pixel’s three color components (8 bits each for red, green, and blue).

 

To access these frames, we use an array of pointers to each of the three frame buffers. Defining things this way eases our interaction with the frames.
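A minimal sketch of that arrangement appears below. The pointer array and function names are illustrative, not necessarily the names used in the demo application:

#include "xil_types.h"    /* defines the u8 type used above */

#define DISPLAY_NUM_FRAMES 3
#define DEMO_MAX_FRAME     (1920 * 1080 * 3)   /* bytes: max resolution x 3 color bytes */

u8  frameBuf[DISPLAY_NUM_FRAMES][DEMO_MAX_FRAME];
u8 *pFrames[DISPLAY_NUM_FRAMES];               /* per-frame pointers handed to the VDMA driver */

void init_frame_pointers(void)
{
    for (int i = 0; i < DISPLAY_NUM_FRAMES; i++)
        pFrames[i] = frameBuf[i];
}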

 

With the frames defined, the next step is to initialize and configure the peripherals within the design. These are:

 

  • VDMA – Uses DMA to move data from the board’s DDR SDRAM to the output video chain.
  • Dynamic Clocking IP – Outputs the pixel clock frequency and multiples of this frequency for the HDMI output.
  • Video Timing Controller 0 – Defines the output display timing depending upon resolution.
  • Video Timing Controller 1 – Determines the video timing on the input received. In this demo, this controller grabs input frames from a source.

 

To ensure the VDMA functions correctly, we need to define the stride. This is the separation between each line within the DDR memory. For this application, the stride is 3 * 1920, which is the maximum length of a line.

When it comes to the application, we will be able to set different display resolutions from 640x480 to 1920x1080.
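To show how the stride and the frame pointers come together, here is an illustrative read-channel (memory-to-display) set-up using the xaxivdma driver. The function name and the assumption of three bytes per pixel are mine, not taken from the demo sources:

#include "xaxivdma.h"

int start_vdma_display(XAxiVdma *vdma, u8 *frames[], u32 width, u32 height)
{
    XAxiVdma_DmaSetup readCfg = {0};
    UINTPTR addrs[DISPLAY_NUM_FRAMES];

    readCfg.VertSizeInput     = height;       /* lines per frame */
    readCfg.HoriSizeInput     = width * 3;    /* bytes per line actually read out */
    readCfg.Stride            = 1920 * 3;     /* line-to-line separation in DDR */
    readCfg.FrameDelay        = 0;
    readCfg.EnableCircularBuf = 1;            /* cycle continuously through the frames */

    if (XAxiVdma_DmaConfig(vdma, XAXIVDMA_READ, &readCfg) != XST_SUCCESS)
        return XST_FAILURE;

    for (int i = 0; i < DISPLAY_NUM_FRAMES; i++)
        addrs[i] = (UINTPTR)frames[i];

    if (XAxiVdma_DmaSetBufferAddr(vdma, XAXIVDMA_READ, addrs) != XST_SUCCESS)
        return XST_FAILURE;

    return XAxiVdma_DmaStart(vdma, XAXIVDMA_READ);
}

Changing resolution then means calling something like this again with the new width and height after reprogramming the Video Timing Controller and the dynamic clocking IP.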

 

 

Image1.jpg 

 

 

No matter what resolution we select, we will be able to draw test patterns on the screen using software functions that write to the DDR SDRAM.  When we change functions, we will need to reconfigure the VDMA, Video Timing Generator 0, and the dynamic clocking module.

 

Our next step is to generate video output. With this example, there are many functions within the main application that generate, capture, and display video. These are:

 

  1. Bar Test Pattern – Generates several color bars across the screen
  2. Blended Test Pattern – Generates a blended color test pattern across the screen
  3. Streaming from the HDMI input to the output
  4. Grab an input frame and invert colors
  5. Grab an input frame and scale to the current display resolution

 

Within each of these functions, we pass a pointer to the frame currently being output so that we can modify the pixel values in memory. This can be done simply as shown in the code snippet below, which sets the red, green, and blue components of each pixel. Each pixel color value is an unsigned 8-bit quantity.

 

 

Image2.jpg 
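Since that snippet appears as an image above, here is a rough sketch of the kind of pixel-writing function it shows; the function name, RGB byte order, and cache flush are my assumptions:

#include "xil_cache.h"

void demo_fill_color(u8 *frame, u32 width, u32 height, u32 stride,
                     u8 red, u8 green, u8 blue)
{
    for (u32 y = 0; y < height; y++) {
        for (u32 x = 0; x < width; x++) {
            frame[y * stride + 3 * x]     = red;
            frame[y * stride + 3 * x + 1] = green;
            frame[y * stride + 3 * x + 2] = blue;
        }
    }
    /* Flush the cache so the VDMA reads the new pixel data, not stale lines */
    Xil_DCacheFlushRange((UINTPTR)frame, height * stride);
}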

 

 

When we run the application, we can choose which of the functions we want to exercise using the menu output over the UART terminal:

 

 

Image3.jpg 

 

 

 

Setting the program to output the color bars and the blended test pattern gave the outputs below on my display:

 

 

 

Image4.jpg 

 

 

Now we know how we can write information to DDR memory and see it appear on our display. We could generate a Mandelbrot pattern using this approach pretty simply and I will put that on my list of things to cover in a future blog.

 

 

Code is available on Github as always.

 

 

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E Book here
  • First Year Hardback here.

 

 

 MicroZed Chronicles hardcopy.jpg

  

 

  • Second Year E Book here
  • Second Year Hardback here

 

 MicroZed Chronicles Second Year.jpg

 

 

 

 

Mindray, one of the world’s top medical ultrasound vendors, believes that the ZONE Sonography Technology (ZST+) in its cart-based Resona 7 premium color ultrasound system delivers unprecedented ultrasound imaging quality that helps doctors non-invasively peer into their patients with much better clarity, which in turn helps them achieve a deeper understanding of the images and deliver better, more accurate diagnoses than was previously possible. According to the company, ZST+ takes medical ultrasound imaging from “conventional beamforming” to “channel data-based processing” that enhances images through advanced acoustic acquisition (10x faster than conventional line-by-line beamforming), dynamic pixel focusing (provides pixel uniformity from near field to far field), sound-speed compensation (allows for tissue variation), enhanced channel-data processing (improves image clarity), and total-recall imaging (permits retrospective processing of complete, captured data sets, further improving image clarity and reducing the need for repeated scanning).

 

 

 

Mindray Resona 7 Premium Ultrasound Imaging System v2.jpg

 

Mindray Resona 7 Premium Ultrasound System

 

 

Many of these advanced, real-time, ultrasound-processing and -imaging features are made possible by and implemented in a Xilinx Kintex-7 FPGA. For example, one of the advanced features enabled by ZST+ is “V Flow,” which can show blood flow direction and velocity using colored arrow overlays in an image with a refresh rate as fast as 600 images/sec. Here’s a mind-blowing, 6-second YouTube video by Medford Medical Solutions LLC showing just what this looks like:

 

 

 

 

Mindray V Flow Real-Time Ultrasound Blood-Flow Imaging

 

 

That’s real-time blood flow, and it’s the kind of high-performance image-processing speed you can only achieve using programmable logic.

 

The Resona 7 system provides many advanced, ultrasound-imaging capabilities in addition to V Flow. Because of this broad capability spectrum, doctors are able to use the Resona series of medical ultrasound imaging machines in radiology applications—including abdominal imaging and imaging of small organs and blood vessels; vascular hemodynamics evaluation; and obstetrics/gynecology applications including fetal CNS (central nervous system) imaging. (The fetal brain undergoes major developmental changes throughout pregnancy.) Resona 7 systems are also used for clinical medical research.

 

 

Mindray Fetal 3D Image v2.jpg

 

Fetal 3D image generated by a Mindray Resona 7 Premium Ultrasound Imaging System

 

 

 

Since its founding, the company has continuously explored ways to improve diagnostic confidence in ultrasound imaging. The recently developed ZST+ collects the company’s latest imaging advances into one series of imaging systems. However, ZST+ is not “finished.” Mindray is constantly improving the component ZST+ technologies, having just released version 2.0 of the Resona 7’s operating software. That continuous improvement effort explains why Mindray selected Xilinx All Programmable technology in the form of a Kintex-7 FPGA, which permits the revision and enhancement of existing real-time features and the addition of new features through what is effectively a software upgrade. Because of this, Mindray calls ZST+ a “living technology” and believes that the Kintex-7 FPGA is the core of this living technology.

 

 

Free Webinar on “Any Media Over Any Network: Streaming and Recording Design Solutions.” July 18

by Xilinx Employee ‎07-11-2017 11:21 AM - edited ‎07-11-2017 12:44 PM (2,024 Views)

 

On July 18 (that’s one week from today), Xilinx’s Video Systems Architect Alex Luccisano will be presenting a free 1-hour Webinar on streaming media titled “Any Media Over Any Network: Streaming and Recording Solution.” He’ll be discussing key factors such as audio/video codecs, bit rates, formats, and resolutions in the development of OTT (over-the-top) and VOD (video-on-demand) boxes and live-streaming equipment. Alex will also be discussing the Xilinx Zynq UltraScale+ MPSoC EV device family, which incorporates a hardened, multi-stream AVC/HEVC simultaneous encode/decode block that supports UHD-4Kp60. That’s the kind of integration you need to develop highly differentiated pro AV and broadcast products (and any other streaming-media or recording products) that stand well above the competition.

 

Register here.

 

SoundAI MicA Development Kit for Far-field Speech-Recognition Systems: Powered by Xilinx Spartan-6 FPGA

by Xilinx Employee ‎07-11-2017 09:18 AM - edited ‎07-12-2017 10:49 AM (2,108 Views)

 

Voice control is hot. Witness Amazon Echo and Google Home. These products work because they’re designed to recognize the spoken word from a distance—far-field speech recognition. It’s a useful capability in a wide range of consumer, medical, and industrial applications, and SoundAI now has a kit you can use to add far-field speech recognition to your next system design and differentiate it, whether it’s a smart speaker; an in-vehicle, speech-based control system; a voice-controlled IoT or IIoT device; or some other never-seen-before device. The SoundAI 60C MicA Development Kit employs FPGA-accelerated machine learning and FPGA-based signal processing to implement advanced audio noise suppression, de-reverberation, echo cancellation, direction-of-arrival detection, and beamforming. The FPGA acceleration is performed by a Xilinx Spartan-6 SLX4 FPGA. (There’s also an available version built into a smart speaker.)

 

 

 

SoundAI MicA Development Kit for Far-Field Speech Recognition.jpg

 

SoundAI 60C MicA Development Kit for Far-Field Speech Recognition

 

 

The SoundAI MicA Development Kit’s circular circuit board measures 3.15 inches (80mm) in diameter and incorporates 7 MEMS microphones and 32 LEDs in addition to the Spartan-6 FPGA. According to SoundAI, the kit can capture voice from as far as 5m away, detect commands embedded in the 360-degree ambient sound, localize the voice to within ±10°, and deliver clean audio to the speech-recognition engine (Alexa for English and SoundAI for Chinese).

 

 

 

Xilinx is starting a Vivado Expert Webinar Series to help you improve your design productivity and the first one, devoted to achieving timing closure in high-speed designs, takes place on July 26. Balachander Krishnamurthy—a Senior Product Marketing Manager for Static Timing Analysis, Constraints and Pin Planning—will present the material and will provide insight into Vivado high-speed timing-closure techniques along with some helpful guidelines.

 

Register here.

 

 

 

 

 

By Adam Taylor

 

With the Vivado design for the Lepton thermal imaging IR camera built and the breakout board connected to the Arty Z7 dev board, the next step is to update the software so that we can receive and display images. To do this, we can also use the HDMI-out example software application as this correctly configures the board’s VDMA output. We just need to remove the test-pattern generation function and write our own FLIR control and output function as a replacement.

 

This function must do the following:

 

 

  1. Configure the I2C and SPI peripherals using the XIICPS and XSPI APIs provided when we generated the BSP. To ensure that we can communicate with the Lepton camera, we need to set the I2C address to 0x2A and configure the SPI for CPOL=1, CPHA=1, and master operation. (A sketch of this configuration appears after this list.)
  2. Once we can communicate over the I2C interface, we determine that the Lepton camera module is ready by reading its status register. If the camera is correctly configured and ready when we read this register, the Lepton camera will respond with 0x06.
  3. With the camera module ready, we can read out an image and store it within memory. To do this we execute several SPI reads.
  4. Having captured the image, we can move the stored image into the memory location being accessed by VDMA to display the image.
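As a rough guide to step 1, the peripheral configuration might look something like the sketch below. The device IDs, the 100 kHz I2C clock, and the use of the AXI Quad SPI (XSpi) driver are assumptions for illustration; error handling is minimal:

#include "xiicps.h"
#include "xspi.h"
#include "xparameters.h"

#define LEPTON_I2C_ADDR 0x2A          /* Lepton CCI address from step 1 */

XIicPs Iic;                           /* PS I2C controller instance */
XSpi   Spi;                           /* AXI SPI controller instance */

int configure_lepton_interfaces(void)
{
    XIicPs_Config *iicCfg = XIicPs_LookupConfig(XPAR_XIICPS_0_DEVICE_ID);
    if (iicCfg == NULL) return XST_FAILURE;
    XIicPs_CfgInitialize(&Iic, iicCfg, iicCfg->BaseAddress);
    XIicPs_SetSClk(&Iic, 100000);                       /* 100 kHz I2C clock */

    XSpi_Config *spiCfg = XSpi_LookupConfig(XPAR_SPI_0_DEVICE_ID);
    if (spiCfg == NULL) return XST_FAILURE;
    XSpi_CfgInitialize(&Spi, spiCfg, spiCfg->BaseAddress);
    XSpi_SetOptions(&Spi, XSP_MASTER_OPTION |           /* master operation */
                          XSP_CLK_ACTIVE_LOW_OPTION |   /* CPOL = 1 */
                          XSP_CLK_PHASE_1_OPTION);      /* CPHA = 1 */
    XSpi_Start(&Spi);
    XSpi_IntrGlobalDisable(&Spi);                       /* use simple polled transfers */

    return XST_SUCCESS;
}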

 

 

To successfully read out an image from the Lepton camera, we need to synchronize the VoSPI output to find the start of the first line in the image. The camera outputs each line as a 160-byte block (Lepton 2) or two 160-byte blocks (Lepton 3), and each block has a 2-byte ID and a 2-byte CRC. We can use this ID to capture the image, identify valid frames, and store them within the image store.
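Here is a sketch of what reading and validating one VoSPI packet (step 3) can look like for the Lepton 2, using the SPI instance configured above. The discard-packet check (an ID of the form 0x0Fxx) follows the FLIR VoSPI description; the names are illustrative:

#include <string.h>

#define VOSPI_PACKET_BYTES (2 + 2 + 160)   /* ID + CRC + one 160-byte video line */

static u8 vospiPacket[VOSPI_PACKET_BYTES];

int lepton_read_line(u8 *lineBuf, u16 *lineNumber)
{
    /* Clock out one packet; the transmit data is don't-care for the Lepton */
    XSpi_Transfer(&Spi, vospiPacket, vospiPacket, VOSPI_PACKET_BYTES);

    u16 id = (vospiPacket[0] << 8) | vospiPacket[1];
    if ((id & 0x0F00) == 0x0F00)
        return XST_NO_DATA;                /* discard packet: keep reading to synchronize */

    *lineNumber = id & 0x0FFF;             /* line index within the frame */
    memcpy(lineBuf, &vospiPacket[4], 160); /* payload: 80 pixels x 2 bytes */
    return XST_SUCCESS;
}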

 

Performing steps 3 and 4 allows us to increase the size of the displayed image on the screen. The Lepton 2 camera used for this example has a resolution of only 80 horizontal pixels by 60 vertical pixels. This image would be very small when displayed on a monitor, so we can easily scale the image to 640x480 pixels by outputting each pixel and line eight times. This scaling produces a larger image that’s easier to recognize on the screen, although it may look a little blocky.
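A simple nearest-neighbour replication loop is enough for that scaling. The sketch below assumes an 8-bit grey-scale source frame and the RGB frame-buffer layout used by the HDMI-out example; the names are illustrative:

void scale_lepton_frame(const u8 src[60][80], u8 *dst, u32 stride)
{
    for (u32 y = 0; y < 480; y++) {
        for (u32 x = 0; x < 640; x++) {
            u8 pixel = src[y / 8][x / 8];      /* repeat each source pixel 8 x 8 times */
            dst[y * stride + 3 * x]     = pixel;
            dst[y * stride + 3 * x + 1] = pixel;
            dst[y * stride + 3 * x + 2] = pixel;
        }
    }
}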

 

However, scaling alone will not present the best image quality as we have not configured the Lepton camera module to optimize its output. To get the best quality image from the camera module, we need to use the I2C command interface to enable parameters such as AGC (automatic gain control), which affects the contrast and quality of the output image, and flat-field correction to remove pixel-to-pixel variation.

 

To write or read back the camera module’s settings, we need to create a data structure as shown below and write that structure into the camera module. If we are reading back the settings, we can then perform an I2C read to retrieve the parameters. Each 16-bit access requires two 8-bit commands (a sketch of one such register write appears after this list):

 

  • Write to the command word at address 0x00 0x04.
  • Generate the command-word data formed from the Module ID, Command ID, Type, and Protection bit. This word informs the camera module which element of the camera we wish to address and if we wish to read, write, or execute the command.
  • Write the number of words to be read or written to the data-length register at address 0x00 0x06.
  • Write the number of data words to addresses 0x00 0x08 to 0x00 0x26.
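As a sketch of what one of those 16-bit accesses can look like in code, here is an illustrative helper that writes a 16-bit value to a 16-bit Lepton CCI register address over I2C, using the XIicPs instance configured earlier (the function name is mine):

int lepton_write_reg16(u16 reg, u16 value)
{
    u8 buf[4] = { reg >> 8, reg & 0xFF, value >> 8, value & 0xFF };

    int status = XIicPs_MasterSendPolled(&Iic, buf, sizeof(buf), LEPTON_I2C_ADDR);
    if (status != XST_SUCCESS) return status;

    while (XIicPs_BusIsBusy(&Iic));        /* wait for the transfer to complete */
    return XST_SUCCESS;
}

Reads follow the same pattern, with XIicPs_MasterRecvPolled used after the register address has been written.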

 

This sequence allows us to configure the Lepton camera so that we get the best performance. When I executed the updated program, I could see the image that appears below on the monitor: me taking a picture of the screen. The image has been scaled up by a factor of 8.

 

 

Image1.jpg 

 

 

Now that we have this image on the screen, I want to integrate this design with the MiniZed dev board and configure the camera to transfer images over a wireless network.

 

Code is available on Github as always.

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E Book here
  • First Year Hardback here.

 

 

MicroZed Chronicles hardcopy.jpg 

  

 

  • Second Year E Book here
  • Second Year Hardback here

 

MicroZed Chronicles Second Year.jpg 

 

 

 

 

 

 

YouTube teardown and repair videos are one way to uncover previously unknown applications of Xilinx components. Today I found a new-this-week video teardown and repair of a non-operational Agilent (now Keysight) 53152A 46GHz Microwave Frequency Counter that uncovers a pair of vintage Xilinx parts: an XC3042A FPGA (with 144 CLBs!) and an XC9572 CPLD with 72 macrocells. Xilinx introduced the XC3000 FPGA family in 1987 and the XC9500 CPLD family appeared a few years later, so these are pretty vintage examples of early programmable logic devices from Xilinx—still doing their job in an instrument that Agilent introduced in 2001. That’s a long-lived product!

 

Looking at the PCB, I’d say that the XC3042A FPGA implements a significant portion of the microwave counter’s instrumentation logic and the XC9572 CPLD connects all of the LSI components to the adjacent microprocessor. (These days, I could easily see replacing the board’s entire upper-left quadrant’s worth of ICs with one Zynq SoC. Less board space, far more microprocessor and programmable-logic performance.)

 

 

Agilent 53152A Microwave Frequency Counter Main Board with Xilinx FPGA and CPLD.jpg 

 

Agilent 53152A Microwave Frequency Counter main board with vintage Xilinx FPGA and CPLD

(seen in the upper left)

 

 

 

A quick look at the Keysight Web site shows that the 53152A counter is still available and lists for $19,386. If you look at it through my Xilinx eyeglasses, that’s a pretty good multiplier for a couple of Xilinx parts that were designed twenty to thirty years ago. The 42-minute video was made by YouTube video makers Shahriar and Shayan Shahramian, who call their Patreon-supported channel “The Signal Path.” In this video, Shahriar manages to repair this 53152A counter that he bought for about $650—so he’s doing pretty well too. (Spoiler alert: The problem’s not with the Xilinx devices; they still work fine.)

 

I really enjoy watching well-made repair videos of high-end equipment and always learn a trick or two. This video by The Signal Path is indeed well made and takes its time explaining each step and why they’re performed. Other than telling you that the Xilinx parts are not the problem, I’m not going to give the plot away (other than to say, as usual, that the butler did it).  

 

 

Here’s the video:

 

 

 

 

 

I’m sure you realize that Xilinx continues to sell FPGAs—otherwise, you wouldn’t be on this blog page—although today’s FPGAs are a lot more advanced with many hundreds of thousands or millions of logic cells. But perhaps you didn’t realize that Xilinx is still in the CPLD business. If that’s a surprise to you, I recommend that you read this: “Shockingly Cool Again: Low-power Xilinx CoolRunner-II CPLDs get new product brief.” Xilinx CoolRunner-II CPLDs aren't offered in the 72-macrocell size, but you can get them with as many as 384 macrocells if you wish.

 

 

 

Freelance documentary cameraman, editor, and producer/director Johnnie Behiri has just published a terrific YouTube video interview with Sebastian Pichelhofer, acting Project Leader of Apertus’ Zynq-based AXIOM Beta open-source 4K video camera project. (See below for more Xcell Daily blog posts about the AXIOM open-source 4K video camera.) This video is remarkable in the amount of valuable information packed into its brief, 20-minute duration. This video is part of Behiri’s cinema5D Web site and there’s a companion article here.

 

First, Sebastian explains the concept behind the project: develop a camera with features in demand, with development funded by a crowd-funding campaign. Share the complete, open-source design with community members so they can hack it, improve it, and give these improvements and modifications back to the community.

 

A significant piece of news: Sebastian says that the legendary Magic Lantern team (a group dedicated to adding substantial enhancements to the video and imaging capabilities of Canon dSLR cameras) is now on board as the project’s color-science experts. As a result, says Sebastian, the camera will be able to feature push-button selection of different “film stocks.” Film selection was one way for filmmakers to control the “look” of a film, back in the days when they used film. These days, camera companies devote a lot of effort to developing their own “film” look, but the AXIOM Beta project wants flexibility in this area, as in all other areas. I think Sebastian’s discussion of camera color science from end to end is excellent and worth watching just by itself.

 

I also appreciated Sebastian’s very interesting discussion of the challenges associated with a crowd-funded, open-source project like the AXIOM Beta. The heart of the AXIOM Beta camera’s electronic package is a Zynq SoC on an Avnet MicroZed SOM and that design choice strongly supports the project team’s desire to be able to quickly incorporate the latest innovations and design changes into systems in the manufacturing process. Here's a photo captured from the YouTube interview:

 

 

 

AXIOM Beta Interview Screen Capture 1.jpg 

 

 

 

At 14:45 in the video, Sebastian attempts to provide an explanation of the FPGA-based video pipeline’s advantages in the AXIOM Beta 4K camera—to the non-technical Behiri (and his mother). It’s not easy to contrast the sequential processing of microprocessor-based image and video processing with the same processing on highly parallel programmable logic when talking to a non-engineering audience, especially on the fly in a video interview, but Sebastian makes a valiant effort. By the way, the image-processing pipeline’s design is also open-source and Sebastian suggests that some brave souls may well want to develop improvements.

 

At the end of the interview, there are some video clips captured by a working AXIOM prototype. Of course, they are cat videos. How appropriate for YouTube! The videos are nearly monochrome (gray cats) and shot wide open so there’s a very shallow depth of field, but they still look very good to me for prototype footage. (There are additional video clips including HDR clips here on Apertus’ Web site.)

 

 

 

Here’s the cinema5D video interview:

 

 

 

 

 

 

Additional Xcell Daily posts about the AXIOM Beta open-source video camera project:

 

 

 

 

 

 

reVISION Cobot logo.jpg

In a free Webinar taking place on July 12, Xilinx experts will present a new design approach that unleashes the immense processing power of FPGAs using the Xilinx reVISION stack including hardware-tuned OpenCV libraries, a familiar C/C++ development environment, and readily available hardware-development platforms to develop advanced vision applications based on complex, accelerated vision-processing algorithms such as dense optical flow. Even though the algorithms are advanced, power consumption is held to just a few watts thanks to Xilinx’s All Programmable silicon.

 

Register here.

 

 

By Adam Taylor

 

Over this blog series, I have written a lot about how we can use the Zynq SoC in our designs. We have looked at a range of different applications, especially embedded vision. However, some systems use a pure FPGA approach to embedded vision, as opposed to an SoC like the members of the Zynq family, so in this blog we are going to look at how we can build a simple HDMI input-and-output video-processing system using the Artix-7 XC7A200T FPGA on the Nexys Video Artix-7 FPGA Trainer Board. (The Artix-7 A200T is the largest member of the Artix-7 FPGA device family.)

 

Here’s a photo of my Nexys Video Artix-7 FPGA Trainer Board:

 

 

 

Image1.jpg

 

Nexys Video Artix-7 FPGA Trainer Board

 

 

 

For those not familiar with it, the Nexys Video Trainer Board is intended for teaching and prototyping video and vision applications. As such, it comes with the following I/O and peripheral interfaces designed to support video reception, processing, and generation/output:

 

 

  • HDMI Input
  • HDMI Output
  • Display Port Output
  • Ethernet
  • UART
  • USB Host
  • 512 MB of DDR SDRAM
  • Line In / Mic In / Headphone Out / Line Out
  • FMC

 

 

To create a simple image-processing pipeline, we need to implement the following architecture:

 

 

 

Image2.jpg 

 

 

The supervising processor (in this case, a Xilinx MicroBlaze soft-core RISC processor implemented in the Artix-7 FPGA) monitors communications with the user interface and configures the image-processing pipeline as required for the application. In this simple architecture, data received over the HDMI input is converted from its parallel format of Video Data, HSync and VSync into an AXI Streaming (AXIS) format. We want to convert the data into an AXIS format because the Vivado Design Suite provides several image-processing IP blocks that use this data format. Being able to support AXIS interfaces is also important if we want to create our own image-processing functions using Vivado High Level Synthesis (HLS).

 

The MicroBlaze processor needs to be able to support the following peripherals:

 

 

  • AXI UART – Enables communication and control of the system
  • AXI Timer – Enables the MicroBlaze to time events
  • MicroBlaze Debugging Module – Enables the debugging of the MicroBlaze
  • MicroBlaze Local Memory – Connected to DLMB and ILMB (Data & Instruction Local Memory Bus)

 

We’ll use the Memory Interface Generator (MIG) to create a DDR interface to the board’s SDRAM. This interface and the SDRAM create a common frame store accessible to both the image-processing pipeline and the supervising processor using an AXI interconnect.

 

Creating a simple image-processing pipeline requires the use of the following IP blocks:

 

 

  • DVI2RGB – HDMI input IP provided by Digilent
  • RGB2DVI – HDMI output IP provided by Digilent
  • Video In to AXI4-Stream – Converts a parallel-video input to AXI Streaming protocol (Vivado IP)
  • AXI4-Stream to Video Out – Converts an AXI-Stream-to-Parallel-video output (Vivado IP)
  • Video Timing Controller Input – Detects the incoming video parameters (Vivado IP)
  • Video Timing Controller Output – Generates the output video timing parameters (Vivado IP)
  • Video Direct Memory Access – Enables images to be written to and from the DDR SDRAM

 

 

The core of this video-processing chain is the VDMA, which we use to move the image into the DDR memory.

 

 

Image3.jpg 

 

 

 

The diagram above demonstrates how the IP block converts from streamed data to memory-mapped data for the read and write channels. Both VDMA channels provide the ability to convert between streaming and memory-mapped data as required. The write channel supports Stream-to-Memory-Mapped conversion while the read channel provides Memory-Mapped-to-Stream conversion.

 

When all this is put together in Vivado to create the initial base system, we get the architecture below, which is provided by the Nexys Video HDMI example.

 

 

Image4.jpg 

 

 

 

All that remains now is to look at the software required to configure the image-processing pipeline. I will explain that next time.

 

 

 

Code is available on Github as always.

 

 

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E Book here
  • First Year Hardback here.

 

 

MicroZed Adam Taylor Special Edition.jpg

 

  

 

  • Second Year E Book here
  • Second Year Hardback here

 

MicroZed Chronicles Second Year.jpg 

 

 

 

To paraphrase Douglas Adams’ Hitchhiker’s Guide to the Galaxy: “400GE is fast. Really fast. You just won't believe how vastly, hugely, mind-bogglingly fast it is.”

 

Xilinx, Microsoft/Azure Networking, and Juniper held a 400GE panel at OFC 2017 that explored the realities of the 400GE ecosystem, deployment models, and why the time for 400GE has arrived. The half-hour video below is from OFC 2017. Xilinx’s Mark Gustlin discusses the history of Ethernet from 10Mbps in the 1980s to today’s 400GE, including an explanation of lower-speed variants and why they exist. He also provides technical explanations for why the 400GE IEEE technical specs look the way they do and what 400GE optical modules will look like as they evolve. Microsoft/Azure Networking’s Brad Booth describes what he expects Azure’s multi-campus, data-center networking architecture to look like in 2019 and how he expects 400GE to fit into that architecture. Finally, Juniper’s David Ofelt discusses how the 400GE development model has flipped: the hyperscale developers and system vendors are now driving the evolution and the carriers are following their lead. He also touches on the technical issues that have held up 400GE development and what happens when we max out on optical module density (we’re almost there).

 

 

 

 

 

 

For more information about 400GE in Xcell Daily, see:

 

 

 

 

 

 

 

 

 

 

 

 

 

Xilinx announced the addition of the P4_16 network programming language for SDN applications to its SDNet Development Environment for high-speed (1Gbps to 100Gbps) packet processing back in May. (See “The P4 has landed: SDNet 2017.1 gets P4-to-FPGA compilation capability for 100Gbps data-plane packet processing.”) An OFC 2017 panel session in March—presented by Xilinx, Barefoot Networks, Netcope Technologies, and MoSys—discussed the adoption of P4, the emergent high-level language for packet processing, and early implementations of P4 for FPGA and ASIC targets. Here’s a half-hour video of that panel discussion.

 

 

 

 

For more information, you might also want to take a look at the Xilinx P4-SDNet Translator User Guide and the SDNet Packet Processor User Guide, which was just updated recently.

 

 

About the Author
Steve Leibson is the Director of Strategic Marketing and Business Planning at Xilinx. He started as a system design engineer at HP in the early days of desktop computing, then switched to EDA at Cadnetix, and subsequently became a technical editor for EDN Magazine. He's served as Editor in Chief of EDN Magazine, Embedded Developers Journal, and Microprocessor Report. He has extensive experience in computing, microprocessors, microcontrollers, embedded systems design, design IP, EDA, and programmable logic.