National Instruments (NI) has just added two members to its growing family of USRP RIO SDRs (software-defined radios)—the USRP-2944 and USRP-2945—with the widest frequency ranges, highest bandwidth, and best RF performance in the family. The USRP-2945 features a two-stage superheterodyne architecture that achieves superior selectivity and sensitivity required for applications such as spectrum analysis and monitoring, and signals intelligence. With four receiver channels, and the capability to share local oscillators, this SDR also sets new industry price/performance benchmarks for direction-finding applications. The USRP-2944 is a 2x2 MIMO-capable SDR that features 160MHz of bandwidth per channel and a frequency range of 10 MHz to 6 GHz. This SDR operates in bands well suited to LTE and WiFi research and exploration.

 

 

NI USRP.jpg

 

NI USRP RIO Platform

 

 

Like all of its USRP RIO products, the NI USRP-2944 and USRP-2945 incorporate Xilinx Kintex-7 FPGAs for local, real-time signal processing. The Kintex-7 FPGA implements a reconfigurable LabVIEW FPGA target that incorporates DSP48 coprocessing for high-rate, low-latency applications. With the company’s LabVIEW unified design flow, researchers can create prototype designs faster and significantly shorten the time needed to achieve results.

 

Here’s a block diagram showing the NI USRP RIO SDR architecture:

 

 

NI USRP RIO Block Diagram.jpg

 

 

USRP RIO Block Diagram

 

Adam Taylor just published an EETimes review of the Xilinx RFSoC, announced earlier this week. (See “Game-Changing RFSoCs from Xilinx.”) Taylor has a lot of experience with high-speed analog converters: he has designed systems around them, so he writes as a system designer who has used these types of devices and knows where the potholes are, and he has worked for a semiconductor company that made them, so he also brings a deep, device-level perspective.

 

Here’s the capsulized summary of his comments in EETimes:

 

 

“The ADCs are sampled at 4 Gsps (gigasamples per second), while the DACs are sampled at 6.4 Gsps, all of which provides the ability to work across a very wide frequency range. The main benefit of this, of course, is a much simpler RF front end, which reduces not only PCB footprint and the BOM cost but -- more crucially -- the development time taken to implement a new system.”

 

 

“…these devices offer many advantages beyond the simpler RF front end and reduced system power that comes from such a tightly-coupled solution.”

 

 

“These devices also bring with them a simpler clocking scheme, both at the device-level and the system-level, ensuring clock distribution while maintaining low phase noise / jitter between the reference clock and the ADCs and DACs, which can be a significant challenge.”

 

 

“These RFSoCs will also simplify the PCB layout and stack, removing the need for careful segregation of high-speed digital signals from the very sensitive RF front-end.”

 

 

Taylor concludes:

 

 

“I, for one, am very excited to learn more about RFSoCs and I cannot wait to get my hands on one.”

 

 

For more information about the new Xilinx RFSoC, see “Xilinx announces RFSoC with 4Gsamples/sec ADCs and 6.4Gsamples/sec DACs for 5G, other apps. When we say ‘All Programmable,’ we mean it!” and “The New All Programmable RFSoC—and now the video.”

 

Adam Taylor wants you to know how to prevent your FPGA-based projects from going astray

by Xilinx Employee 02-23-2017 11:31 AM - edited 02-23-2017 11:33 AM

 

Adam Taylor has published nearly 200 blogs in Xcell Daily, but he’s reserved some of his best advice for embedded.com. Yesterday, he published a short article titled “How to prevent FPGA-based projects from going astray,” in which he describes five common issues that lead design teams astray, and how to avoid them:

 

  • Not having a stable requirements baseline when starting costs you time
  • Have a development plan that every team member understands
  • Verification takes longer than design, always
  • Lack of design reviews leads to pain
  • Reuse as much IP as you can

 

Learn from the best. Spend five minutes and read Adam’s new article.

 

If you’re still uncertain as to what System View’s Visual System Integrator hardware/software co-development tool for Xilinx FPGAs and Zynq SoCs does, the following 3-minute video should make it crystal clear. Visual System Integrator extends the Xilinx Vivado Design Suite and makes it a system-design tool for a wide variety of embedded systems based on Xilinx devices.

 

This short video demonstrates System View’s tool being used for a Zynq-controlled robotic arm:

 

 

 

 

 

For more information about System View’s Visual System Integrator hardware/software co-development tool, see:

 

 

 

 

 

Avnet’s new $499 UltraZed PCIe I/O carrier card for its UltraZed-EG SoM (System on Module)—based on the Xilinx Zynq UltraScale+ MPSoC—gives you easy access to the SoM’s 180 user I/O pins, 26 MIO pins from the Zynq MPSoC’s MIO, and 4 GTR transceivers from the Zynq MPSoC’s PS (Processor System) through the PCIe x1 edge connector; two Digilent PMOD connectors; an FMC LPC connector; USB and microUSB, SATA, DisplayPort, and RJ45 connectors; an LVDS touch-panel interface; a SYSMON header; pushbutton switches; and LEDs.

 

 

Avnet UltraZed PCIe IO Carrier Card Image.jpg

 

 

$499 UltraZed PCIe I/O Carrier Card for the UltraZed-EG SoM

 

 

That’s a lot of connectivity to track in your head, so here’s a block diagram of the UltraZed PCIe I/O carrier card:

 

 

Avnet UltraZed PCIe IO Carrier Card.jpg

 

UltraZed PCIe I/O Carrier Card Block Diagram

 

 

 

For information on the Avnet UltraZed SOM, see “Look! Up in the sky! Is it a board? Is it a kit? It’s… UltraZed! The Zynq UltraScale+ MPSoC Starter Kit from Avnet” and “Avnet UltraZed-EG SOM based on 16nm Zynq UltraScale+ MPSoC: $599.” Also, see Adam Taylor’s MicroZed Chronicles about the UltraZed:

 

The New All Programmable RFSoC—and now the video

by Xilinx Employee on 02-22-2017 03:44 PM

 

Yesterday, Xilinx announced breakthrough RF converter technology that allows the creation of an RFSoC with multi-Gsamples/sec DACs and ADCs on the same piece of TSMC 16nm FinFET silicon as the digital programmable-logic circuitry, the microprocessors, and the digital I/O. This capability transforms the Zynq UltraScale+ MPSoC into an RFSoC that's ideal for implementing 5G and other advanced RF system designs. (See “Xilinx announces RFSoC with 4Gsamples/sec ADCs and 6.4Gsamples/sec DACs for 5G, other apps. When we say ‘All Programmable,’ we mean it!” for more information about that announcement.)

 

Today there’s a 4-minute video with Sr. Staff Technical Marketing Engineer Anthony Collins providing more details including an actual look at the performance of a 16nm test chip with the 12-bit, 4Gsamples/sec ADC and the 14-bit, 6.4Gsamples/sec DAC in operation.

 

Here’s the video:

 

To learn more about the All Programmable RFSoC architecture, click here or contact your friendly, neighborhood Xilinx sales representative.

 

 

 

 

By Adam Taylor

 

 

A few weeks ago we looked at how we can generate PWM signals using the Zynq SoC’s TTC (Triple Time Counter). PWM is very useful for interfacing with motor drives and for communications. What we did not look at however was how we can measure the PWM signals received by the Zynq SoC.

 

 

Image1.jpg

 

 

The Zynq SoC’s TTC (Triple Timer Counter)

 

 

We can do this using the TTC’s event counters. These 16-bit counters are clocked by CPU_1x and can measure the time an input signal spends high or low. The input signal can come from either MIO or EMIO pins for the first timer in each of TTC 0 and TTC 1, or from EMIO pins for the remaining two timers in each TTC.

 

The event timer is very simple to use once you enable it and configure it to measure either the high or the low duration of the pulse. The timer updates the event count register once the high or low level it has been configured to measure completes.

 

With a 133MHz CPU_1x clock, this 16-bit register can measure events as long as 492 microseconds before it overflows.

 

If the event counter does overflow and the event timer is not configured to handle this situation, the event timer will disable itself. If we have enabled overflow, the counter will roll over and continue counting while generating an event-rollover interrupt. We can use this capability to measure longer events by counting the number of times the counter rolls over before arriving at the final value.

 

While one event timer lets us measure the time a signal spends high or low, we can use two event timers to measure both the high and low times of the same signal: one configured to measure the high time and the other to measure the low time.

 

To use the TTC to monitor an event, we need to ensure the TTC is enabled on the MIO configuration tab of the Zynq customization dialog:

 

 

Image2.jpg

 

 

 

To measure an external signal, we need to configure the TTC to use an external signal. We do this on the Clock Configuration tab of the Zynq customization dialog:

 

 

Image3.jpg

 

 

Enabling this external source on the Zynq processing system block diagram provides input ports that we can connect to the external signal we wish to monitor. In this case I have connected both event timer inputs to the same external signal to monitor the signal’s high and low durations.

 

 

Image4.jpg

 

 

When I implemented the design targeting an Avnet ZedBoard, I broke the wave outputs and clock inputs out to the board’s PMOD connector A.

 

To get the software up and running, I used the Servo example that we generated earlier as a base. To use the event timers, we need to set the enable bit in the Event Control Timer register. Within this register, we can enable the event timer, set the appropriate signal level, and enable overflow if desired.

 

The TTC files supplied with the BSP do not include functions to configure or use the event timers within the TTC. However, interacting with them is straightforward: we can use the Xil_Out16 and Xil_In16 functions to configure the event timers and to read the timer values.

 

To enable the TTC0 timers zero and one to count opposite events, we can use the commands shown below:

 

 

Xil_Out16(0xF800106C,0x3);  /* event timer 0: bit 0 enables the timer, bit 1 selects the signal level to time */

Xil_Out16(0xF8001070,0x1);  /* event timer 1: enabled with the opposite level select, timing the other half of the pulse */

 

 

Once enabled, we can then read the TTC event timers. In the case of this example, we use the code snippet below:

 

 

high_time = Xil_In16(0xF8001078);

low_time = Xil_In16(0xF800107C);

 

 

These commands read the two event timer values.

 

When I put this all together and stimulated the external input with a 5 kHz signal at a range of duty cycles, I could correctly determine the signal’s high and low times.

 

For example, with a 70% duty cycle the event timers recorded 15556 counts for the high duration and 6667 counts for the low duration of the pulse. There are roughly 22222 CPU_1x clock cycles in one period of a 5 kHz signal. The measurements captured in the event registers total 22223 CPU_1x clock cycles, or a frequency of 4999.8 Hz, with the correct duty cycle for the signal received.

 

To ensure the most accurate conversion of clock counts into actual time measurements, we can use the XPAR_PS7_CORTEXA9_0_CPU_CLK_FREQ_HZ definition (666666687 on this system) provided within xparameters.h. This CPU clock frequency is either 4 or 6 times the frequency of CPU_1x, depending on whether the PS is in 4:2:1 or 6:2:1 clock-ratio mode.

 

These event timers can prove very useful in our systems, especially if we are interfacing with sensors that provide PWM outputs such as some temperature and pressure sensors.

 

 

 

 

Code is available on Github as always.

 

If you want ebook or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E Book here
  • First Year Hardback here.

 

 

 

 MicroZed Chronicles hardcopy.jpg

 

 

  • Second Year E Book here
  • Second Year Hardback here

 

 

 MicroZed Chronicles Second Year.jpg

 

 

Xilinx has just introduced a totally new technology for high-speed RF designs: an integrated RF-processing subsystem consisting of RF-class ADCs and DACs implemented on the same piece of 16nm UltraScale+ silicon along with the digital programmable-logic, microprocessor, and I/O circuits. This technology transforms the All Programmable Zynq UltraScale+ MPSoC into an RFSoC. The technology’s high-performance, direct-RF sampling simplifies the design of all sorts of RF systems while cutting power consumption, reducing the system’s form factor, and improving accuracy—driving every critical, system-level figure of merit in the right direction.

 

The fundamental converter technology behind this announcement was recently discussed in two ISSCC 2017 papers by Xilinx authors: “A 13b 4GS/s Digitally Assisted Dynamic 3-Stage Asynchronous Pipelined-SAR ADC” and “A 330mW 14b 6.8GS/s Dual-Mode RF DAC in 16nm FinFET Achieving -70.8dBc ACPR in a 20MHz Channel at 5.2GHz.” (You can download a PDF copy of those two papers here.)

 

This advanced RF converter technology vastly extends the company’s engineering developments that put high-speed, on-chip analog processing onto Xilinx All Programmable devices starting with the 1Msamples/sec XADC converters introduced on All Programmable 7 series devices way back in 2012. However, these new 16nm RFSoC converters are much, much faster—by more than three orders of magnitude. Per today’s technology announcement, the RFSoC’s integrated 12-bit ADC achieves 4Gsamples/sec and the integrated 14-bit DAC achieves 6.4Gsamples/sec, which places Xilinx RFSoC technology squarely into the arena for 5G direct-RF design as well as millimeter-wave backhaul, radar, and EW applications.

 

Here’s a block diagram of the RFSoC’s integrated RF subsystem:

 

 

RFSoC RF subsystem.jpg

 

Xilinx Zynq UltraScale+ RFSoC RF Subsystem

 

 

In addition to the analog converters, the RF Data Converter subsystem includes mixers, a numerically controlled oscillator (NCO), decimation/interpolation, and other DSP blocks dedicated to each channel. The RF subsystem can handle real and complex signals, required for IQ processing. The analog converters achieve high sample rates, large dynamic range, and the resolution required for 5G radio-head and backhaul applications. In some cases, the integrated digital down-conversion (DDC) built into the RF subsystem requires no additional FPGA resources.

 

The end result is breakthrough integration. The analog-to-digital signal chain, in particular, is supported by a hardened DSP subsystem for flexible configuration by the analog designer. This leads to a 50-75% reduction in system power and system footprint, along with the needed flexibility to adapt to evolving specifications and network topologies.

 

Where does that system-power reduction come from? The integration of both the digital and analog-conversion electronics on one piece of silicon eliminates a lot of power-hungry I/O and takes the analog converters down to the 16nm FinFET realm. Here’s a power-reduction table from the backgrounder with three MIMO radio example systems:

 

 

RFSoC MIMO System Power Savings Table v3 .jpg
 

 

 

How about the form-factor reduction? Here’s a graphical example:

 

 

RFSoC Footprint Reduction.jpg

 

 

 

You save the pcb space needed by the converters and you save the space required to route all of the length-matched, serpentine pcb I/O traces between the converters and the digital SoCs. All of that I/O connectivity and the length matching now takes place on-chip.

 

To learn more about the All Programmable RFSoC architecture, click here or contact your friendly, neighborhood Xilinx sales representative.

 

 

Note: When we say “All Programmable” we mean it.

 

 

After five years and a dozen prototypes, the Haddington Dynamics development team behind Dexter—a $3K, trainable, 5-axis robotic arm kit for personal manufacturing—launched the project on Kickstarter just yesterday and are already 41.6% of the way to meeting the overall $100K project funding goal with 28 days left in the funding period. Dexter is designed to be a personal robot arm with the ability to make a wide variety of goods. Think of Dexter as your personal robotic factory with additive (2.5D/3D printing) and subtractive (drilling and milling) capabilities.

 

Dexter incorporates a 6-channel motor controller but the arm itself uses five stepper motors for positioning. Adding a gripper or other end-effector to the end of the arm adds a 6th degree of freedom.

 

 

Dexter Robotic Arm.jpg

 

Dexter Robotic Arm 3D CAD Drawing

 

 

 

You need some hefty, high-performance computation to precisely coordinate five axes of motion and the current Dexter prototype employs programmable logic in the form of a Xilinx Zynq Z-7000 SoC on an Avnet MicroZed dev board for this task. (The Kickstarter page even shows an IP block diagram from the Vivado Design Suite.)

 

The Dexter team calls the Zynq SoC an FPGA supercomputer:

 

“By using a(n) FPGA supercomputer to solve the precision control problem, we were able to optimize the physical and electrical architecture of the robot to minimize the mass and therefore the power requirements. All 5 of the stepper motors are placed at strategic locations to lower the center of mass and to statically balance the arm. This way almost all of the torque of the motors is used to move the payload not the robot.”

 

The prototype design achieves 50-micron repeatability!

 

Here’s a video of the prototype Dexter robotic arm in development, including a shot of the robotic arm threading a needle:

 

 

 

 

There are several more videos on the Dexter Kickstarter page.

 

By Adam Taylor

 

We have now built a basic Zynq UltraScale+ MPSoC hardware design for the UltraZed board in Vivado that got us up and running. We’ve also started to develop software for the cores within the Zynq UltraScale+ MPSoC’s PS (processor system). The logical next step is to generate a simple “hello world” program, which is exactly what we are going to do for one of the cores in the Zynq UltraScale+ MPSoC’s APU (Application Processing Unit).

 

As with the Zynq Z-7000 SoC, we need three elements to create a simple bare-metal program for the Zynq UltraScale+ MPSoC:

 

  • Hardware Platform Definition – This defines the underlying hardware platform configuration, address spaces, and IP modules within the design.
  • Board Support Package – This uses the hardware platform to create a hardware abstraction layer (HAL) that provides the necessary drivers for the IP within the system. We need those drivers to use these hardware resources in an application.
  • Application – This is the application we will be writing. In this case it will be a simple “hello world” program.

 

 

To create a new hardware platform definition, select:

 

 

File-> New -> Other -> Xilinx – Hardware Platform Specification

 

 

Provide a project name and select the hardware definition file, which was exported from Vivado. You can find the exported file within the SDK directory if you exported it local to the project.

 

 

Image1.jpg

 

Creating the Hardware platform

 

 

Once the hardware platform has been created within SDK, you will see that the hardware definition file opens within the file viewer. Browsing through this file, you will see the address ranges of the Zynq UltraScale+ MPSoC’s Arm Cortex-A53 and Cortex-R5 processors and the PMU (Platform Management Unit) cores within the design. A list of all the IP within the processors’ address space appears at the very bottom of the file.

 

 

Image2.jpg

 

 Hardware Platform Specification in SDK file browser

 

 

We then use the information provided within the hardware platform to create a BSP for our application. We create a new application by selecting:

 

 

File -> New -> Board Support Package

 

 

Within the create-BSP dialog, we can select the processor this BSP will support, the compiler to be used, and the OS. In this case, we can choose either bare metal or FreeRTOS.

 

For this first example, we will be running the “hello world” program from the APU on processor core 0. We must be sure to target the same core when we create the BSP and the application if everything is to function correctly.

 

 

 

Image3.jpg 

 Board Support Package Generation

 

 

With the BSP created, the next step is to create the application using this BSP. We can create the application in a similar manner to the BSP and hardware platform:

 

 

File -> New -> Application Project

 

 

This command opens a dialog that allows us to name the project, select the BSP, specify the processor core, and select the operating system. On the first tab of the dialog, configure these settings for APU core 0, bare metal, and the BSP just created. On the second tab of the dialog box, select the pre-existing “hello world” application.

 

 

Image4.jpg

 

Configuring the application

 

 

Image5.jpg

 

Selecting the Hello World Application

 

 

At this point, we have the application ready to run on the UltraZed dev board. We can run the application using either the debugger within SDK or we can boot the device from a non-volatile memory such as an SD card.

 

To boot from an SD Card, we need to first create a first-stage boot loader (FSBL). To do this, we follow the same process as we do when creating a new application. The FSBL will be based on the current hardware platform but it will have its own BSP with several specific libraries enabled.

 

 

Select File -> New -> Application Project

 

 

Enter a project name and select the core and OS to support the current build as previously done for the “hello world” application. Click the “Create New” radio button for the BSP and then on the next page, select the Zynq MP FSBL template.

 

 

 

Image6.jpg

 

Configuring the FSBL application

 

 

 

Image7.jpg

 

 Selecting the FSBL template

 

 

With the FSBL created, we now need to build all our applications to create the required ELF files for the FSBL and the application. If SDK is set to build automatically, these files will have been created following the creation of the FSBL. If not, then select:

 

 

Project -> Build All

 

 

Once this process completes, the final step is to create a boot file. The Zynq UltraScale+ MPSoC boots from a file named boot.bin, created by SDK. This file contains the FSBL, FPGA programming file, and the applications. We can create this file by hand and indeed later in this series we will be doing so to examine the more advanced options. However, for the time being we can create a boot.bin by right-clicking on the “hello world” application and selecting the “Create Boot Image” option.

 

 

 

Image8.jpg 

Creating the boot image from the hello world application

 

 

 

This will populate the “create boot image” dialog correctly with the FSBL, FPGA bit file, and our application—provided the ELF files are available.

 

 

Image9.jpg 

 

Boot Image Creation Dialog correctly populated

 

 

Once the boot file is created, copy the boot.bin onto a microSD card and insert it into the SD card holder on the UltraZed IOCC (I/O Carrier Card). The final step, before we apply power, is to set SW2 on the UltraZed card to boot from the SD card. The setting for this is 1 = ON, 2 = OFF, 3 = ON, and 4 = OFF. Now switch the power on, connect a terminal window, and you will see the program start and execute.

 

When I booted this on my UltraZed and IOCC combination, the following appeared in my terminal window:

 

 

Image10.jpg 

 

Hello World Running

 

 

Next week we will look a little more at the architecture of the Zynq UltraScale+ MPSoC’s PS.

 

 

 

Code is available on Github as always.

 

If you want ebook or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E Book here
  • First Year Hardback here.

 

 

 

MicroZed Chronicles hardcopy.jpg

 

 

 

  • Second Year E Book here
  • Second Year Hardback here

 

 

MicroZed Chronicles Second Year.jpg 

 

 

 

Innovative Integration’s XA-160M PCIe XMC module suits applications that require high-speed data acquisition and real-time signal processing. The module provides two 16-bit TI ADC16DV160 160Msamples/sec ADCs for high-speed analog input signals and an Analog Devices AD9122 16-bit dual DAC capable of operating at 1200Msamples/sec for driving high-speed analog outputs. The ADCs and DACs are coupled to a Xilinx Artix-7 XC7A200T FPGA (that’s the largest member of the Artix-7 device family with 215,360 logic cells and 740 DSP48 slices). The FPGA also manages 1Gbyte of on-board DDR3 SDRAM and implements PCIe Gen2 x4 and Aurora interfaces for the XMC module’s P15 and P16 connectors respectively. You can use the uncommitted programmable logic on the Artix-7 FPGA for high-speed, real-time signal processing.

 

 

 

Innovative Integration XA-160M XMC Module.jpg

 

Innovative Integration’s XA-160M PCIe XMC Module

 

 

 

Here’s a block diagram that makes everything clear:

 

 

Innovative Integration XA-160M XMC Module Block Diagram.jpg

 

 

Innovative Integration’s XA-160M PCIe XMC Module Block Diagram

 

 

 

Innovative Integration suggests that you might want to consider using the XA-160M PCIe XMC Module for:

 

 

  • Stimulus/response measurements
  • High-speed servo controls
  • Arbitrary Waveform Generation
  • RADAR
  • LIDAR
  • Optical Servo
  • Medical Scanning

 

 

You can customize the logic implemented in the Artix-7 FPGA in VHDL or in MathWorks’ MATLAB using the company’s FrameWork Logic toolset. The MATLAB BSP supports real-time, hardware-in-the-loop development using the graphical Simulink environment along with the Xilinx System Generator for DSP, which is part of the Xilinx Vivado Design Suite.

 

Please contact Innovative Integration directly for more information about the XA-160M PCIe XMC Module.

 

 

The steep attenuation of 28Gbps signals on pcbs causes problems for every system designer trying to deal with high-speed serial interfaces. Samtec’s new brochure titled “Samtec Products Supporting Xilinx Virtex UltraScale+ FPGA VCU118 Development Kit” sums up the problem in just one graphic:

 

 

High-Speed Signal Attenuation.jpg 

 

 

The best you can hope for with a pcb (the dark blue line) is about half a dB of attenuation per trace inch for a pcb manufactured using PTFE laminates, which were originally developed for microwave and RF applications. Up there at the top of the graph, showing far less attenuation, you see Samtec’s solution: its Twinax FireFly interconnect system, which is designed to carry these high-speed data signals within a system. (For essentially no loss, there’s a compatible optical FireFly module, shown for comparison at the very top of the graph.)

 

This next table, also taken from the Samtec brochure, sharpens the picture even more:

 

 

 

FireFly Performance Table.jpg 

 

 

As the table shows, if you are dealing with 28Gbps transceiver signals (and soon perhaps 56Gbps) and you need to go more than a very few inches, then you’ll want to consider the Samtec FireFly Micro Flyover system.

 

An easy way to experiment with the Samtec FireFly Micro Twinax interconnect system is to use the company’s FMC+ Active Loopback card, which is included with the Xilinx VCU118 Virtex UltraScale+ FPGA Dev Kit. Here’s a photo of that card plugged into the FMC port of the VCU118 Dev Kit:

 

 

 

Xilinx VCU118 Dev Kit for UltraScale Plus with Samtec FireFly Micro Flyover FMC Card.jpg

 

Xilinx VCU118 Virtex UltraScale+ FPGA Development Kit with Samtec FireFly Micro Flyover FMC Card

 

 

 

Note that one end of the FireFly Twinax cable is connected to this FMC card and the other end plugs into a FireFly connector already present on the Xilinx VCU118 Dev Kit’s FPGA board.

 

Samtec has a new 12-page FireFly Application Design Guide that provides more details.

 

 

For previous Xcell Daily blog posts about the Samtec FireFly interconnect system, see:

 

 

 

The Xilinx Zynq Z-7000 SoC makes a great high-performance embedded-Linux platform and if you’re interested in getting started down that path, there’s a free 1-hour Webinar for you on February 28. Senior Member of Technical Staff Simon Goda of Doulos will be presenting the Webinar.

 

Note: This Webinar will be presented twice to be more convenient for time zones around the world.

 

Register here.

 

Curtiss-Wright Defense Solutions’ customer had a problem. The customer needed to develop a large radar system based on a “large number” of DSP modules interconnected with a high-speed switch. The question was, would the switch provide enough bandwidth to support the project requirements? The estimated development cost to answer this question was between $200K and $400K. The estimated time required to answer this question was four to eight weeks.

 

An expensive question, not to mention the potential project delay while this question was answered.

 

Curtiss-Wright has developed a “light-integration” capability to answer such questions for its customers, which it documented in an article titled “Meeting Performance Benchmarks Using Light System Integration for Radar Applications” and further described in a new article appearing on the Military Embedded Systems Web site titled “Reducing costs and risk while speeding development of signal processing system development.”

 

Getting the answer to this question involved prototyping the target radar system using six Curtiss-Wright CHAMP-AV9 DSP Processor modules interconnected through a Curtiss-Wright VPX6-1958 SBC with a dual-port 40GbE Switch along with some of the company’s CHAMP-WB (Wide-Band) OpenVPX Virtex-7 FPGA modules (with two FMC sites per module) connected to the processor modules via PCIe Gen3, and FMC I/O cards plugged onto the FMC sites on the FPGA modules. The FMC I/O cards provide sensor data directly to the FPGA modules at speed for initial high-speed processing.

 

Curtiss-Wright CHAMP-WB Virtex-7 FPGA Module.jpg 

 

CHAMP-WB (Wide-Band) OpenVPX Virtex-7 FPGA module

 

 

The light-integration team worked with the customer to develop a set of system requirements and then assembled a system, configured it, and developed the necessary firmware to run the following tests:

 

  1. Measure the bandwidth for communication among all processing nodes through the 40GbE switch on the SBC.
  2. Demonstrate PCIe DMA at the desired transfer rate between the Virtex-7 FPGA module and one of the processor modules.
  3. Demonstrate 1GbE connectivity throughout the system.
  4. Demonstrate SATA inter-connectivity between this system and a 3rd-party drive.

 

 

This proof of connectivity developed by the Curtiss-Wright light-integration team validated that the overall system design could accommodate the expected system data throughput and gave the customer sufficient confidence to proceed with algorithm development based on the hardware used for these tests. The hardware used in the system-level connectivity demonstration was able to provide sufficient processing for the algorithms used in the radar-system design. Algorithm development is the real added value for radar-system integrators and prime contractors.

 

 

Red Pitaya has been offering its namesake Zynq-SoC-based open instrumentation platform as a packaged STEMlab with analog and digital probes, a power supply, and an enclosure. STEMlab prices range from €249.00 to €499.00 depending on options. To support this hardware, Red Pitaya is distributing apps, and the latest app turns the STEMlab into a combined 40MHz digital oscilloscope and 50MHz signal generator.

 

Here are the scope specs:

 

 

Red Pitaya STEMlab Scope Specs.jpg

 

 

Red Pitaya STEMlab DSO Specs

 

 

 

And here are the signal generator specs:

 

 

 

Red Pitaya STEMlab Signal Generator Specs.jpg

 

 

Red Pitaya STEMlab Signal Generator Specs

 

 

 

For more articles about the Zynq-based Red Pitaya, see:

 

 

 

The Koheron SDK and Linux distribution, based on Ubuntu 16.04, allows you to prototype working instruments for the Red Pitaya Open Instrumentation Platform, which is based on a Xilinx Zynq All Programmable SoC. The Koheron SDK outputs a configuration bitstream for the Zynq SoC along with the requisite Linux drivers, ready to run under the Koheron Linux Distribution. You build the FPGA part of the Zynq SoC design by writing the code in Verilog using the Xilinx Vivado Design Suite and assembling modules using TCL.

 

The Koheron Web site already includes several instrumentation examples based on the Red Pitaya including an ADC/DAC exerciser, a pulse generator, an oscilloscope, and a spectrum analyzer. The Koheron blog page documents several of these designs along with many experiments designed to be conducted using the Red Pitaya board. If you’re into Python as a development environment, there’s a Koheron Python library as well.

 

There’s also a quick-start page on the Koheron site if you’re in a hurry.

 

 

 

Red Pitaya Open Instrumentation Platform small.jpg 

 The Red Pitaya Open Instrumentation Platform

 

 

 

For more articles about the Zynq-based Red Pitaya, see:

 

 

 

 

Last year, I wrote about a new graphical system-level design tool called Visual System Integrator that lets you “graphically describe complete, heterogeneous, high-performance, systems based on ‘Platforms’ built from processors and Xilinx All Programmable devices.” (See “Visual System Integrator enables rapid system development and integration using processors and Xilinx FPGAs.”) I always thought that definition was a bit too abstract and now there’s a short 2.5-minute video that makes the abstract a bit more concrete:

 

 

 

 

There’s an even shorter companion video that demonstrates the tool being used to create a 10GbE inline packet processing system using a Xilinx Virtex-7 FPGA as a hardware accelerator for an x86 microprocessor:

 

 

 

 

 

In total, you need only five minutes to get a good overview of this relatively new development tool.

 

 

 

By Adam Taylor

 

Having introduced the concept of AMP and the OpenAMP framework in a previous blog (See Adam Taylor’s MicroZed Chronicles, Part 169: OpenAMP and Asymmetric Processing, Part 1), we can now get OpenAMP up and running; the easiest way is to use one of the provided examples. Once we have the example up and running, we can then add our own application. Getting the example working before we develop our own application allows us to pipe-clean the development process.

 

To create the example, we are going to use PetaLinux SDK to correctly configure the PetaLinux image. We will be running PetaLinux on the Zynq SoC. As such, this blog will also serve as a good tutorial for building and using PetaLinux.

 

The first thing we need to do is download the PetaLinux SDK and install it on our Linux machine. If you do not have a Linux machine, that’s not a problem: you can use a virtual machine, which is exactly what I am doing. Installing PetaLinux is very simple. You can download the installer from here; the installation guide and all of the supporting documentation are available here.

 

Once we have installed PetaLinux, we are going to follow the UG1186 flow and run the echo-test example. To do this, we need to create a new PetaLinux project and then update it to support OpenAMP.

 

When we create a new PetaLinux project, we need to provide a BSP (Board Support Package) reference for the project. We can of course create our own BSP; however, as I am going to be using the ZedBoard to demonstrate OpenAMP, I downloaded the ZedBoard BSP provided with the PetaLinux installation files.

 

To create the new project, open a terminal window and enter the following:

 

 

petalinux-create --type project --name <desired name> --source <path to BSP>

 

 

This command creates the project that we can then customize and build. After you create the project, change your directory to the project directory in the terminal window.

 

The examples we are going to be using are provided within the PetaLinux project. If you open a File Explorer window, you will find these examples under <project>/components/apps/. We plan to create new code, so this is where we will be adding new applications as well.

 

To run these applications, we first need to make some changes to PetaLinux. The first change is to configure the kernel base address. The bare-metal application and the Linux kernel need to use different base addresses: the bare-metal application has a fixed base address of 0, so we need to offset the Linux base address. To do this, type the following command in the terminal window from the project directory:

 

 

petalinux-config

 

 

This command opens a dialog window that allows us to configure the Linux base address. Using the arrow keys, navigate to Subsystem AUTO Hardware Settings ---> Memory Settings and set the kernel base address to 0x10000000 as shown below. Remember to save the configuration, then navigate from the root menu to U-boot Configuration and set the netboot offset to 0x11000000. Again, save the configuration before exiting and returning to the root menu.

 

 

Image1.jpg 

Setting the Kernel Base Address

 

 

 

Image2.jpg

 

Setting the NetBoot Offset
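These offsets exist because the bare-metal firmware owns DDR from address 0 while the kernel and the netboot staging area sit above it. A quick sketch that sanity-checks such a memory map for overlapping regions (the region sizes below are illustrative assumptions, not values reported by petalinux-config):

```python
def regions_overlap(a, b):
    """Each region is (start, size); True if the address ranges intersect."""
    return a[0] < b[0] + b[1] and b[0] < a[0] + a[1]

# Illustrative map: bare-metal firmware at 0, kernel at the 0x10000000
# base address set above, netboot staging at the 0x11000000 offset.
memory_map = {
    "bare_metal": (0x00000000, 0x10000000),
    "kernel":     (0x10000000, 0x01000000),
    "netboot":    (0x11000000, 0x01000000),
}

names = list(memory_map)
for i, n1 in enumerate(names):
    for n2 in names[i + 1:]:
        assert not regions_overlap(memory_map[n1], memory_map[n2]), (n1, n2)
print("no overlaps")
```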

 

 

The next step is to configure the kernel to ensure that it contains the remote processor and can load modules and user firmware. To do this, enter the following command in the terminal window:

 

 

 

petalinux-config -c kernel

 

 

 

This will again open a configuration dialog within which we need to set the following:

 

 

Image3.jpg

 

Make sure Enable Loadable Module Support is set  

 

 

 

 

Image4.jpg

 

Under Device Drivers ---> Generic Drivers, set Userspace firmware loading support

 

 

Image5.jpg

 

Under Device Drivers ---> Remoteproc drivers, ensure support for the Zynq remoteproc is set

 

 

We also need to correctly configure the split of memory between user space and the kernel. Navigate to Kernel Features ---> Memory split (<current setting> user/kernel split) ---> and set it to 2G/2G:

 

 

Image6.jpg

 

Setting the User /Kernel Split

 

 

The final stage of configuring the kernel is to enable high memory support:

 

 

Image7.jpg

 

Setting the high memory support under Kernel Features

 

 

This completes the changes we need to make to the kernel. Save the changes and exit the dialog.

 

The next thing we need to do is configure the root file system to include the example applications. To do this, enter the following to bring up the rootfs configuration dialog:

 

 

petalinux-config -c rootfs

 

 

Under Apps, ensure that echo_test, mat_mul_demo, and proxy_app are enabled:

 

 

 

Image8.jpg

 

Enabling the Apps

 

 

We also need to enable the OpenAMP driver modules. These are found under Modules:

 

 

Image9.jpg 

Enabling the OpenAMP drivers

 

 

With these changes made, save the configuration and exit the dialog. We are now almost ready to create our new PetaLinux build, but first we need to update the device tree to support OpenAMP. To do this, use File Explorer to navigate to:

 

 

<project>/subsystems/linux/configs/device-tree

 

 

Within this directory, you will find a number of files including system_top.dts and openamp-overlay.dtsi. Open the system_top.dts file and tell it to include the openamp-overlay.dtsi file by adding the line:

 

 

/include/ "openamp-overlay.dtsi"

 

 

 

Image10.jpg 

 

System_top.dts configured for OpenAMP

 

 

 

Now we are finally ready to build PetaLinux. In the terminal window within the project directory, type:

 

 

 

petalinux-build

 

 

 

This builds the image files and places them in <project>/images/linux within your project directory.

 

Within this directory you will see a range of files. However, the key files moving forward are:

 

 

  • .bit – the bitstream for the PL
  • .elf – the first-stage boot loader
  • .ub – a combined image containing the kernel, device tree, and rootfs

 

 

For this example, I want to boot from an SD card, so I need to create a boot.bin file. Looking in the directory, you will notice that one has not been provided; however, all the files we need to create one are there.

 

To create a boot.bin file, enter the following using the terminal:

 

 

petalinux-package --boot --fsbl images/linux/zynq_fsbl.elf --fpga images/linux/download.bit --uboot

 

 

This command creates the boot.bin file that we will put onto our SD card along with the image.ub. Once we have these files on an SD card, connect it to the ZedBoard and boot the board.

 

Using a serial terminal connected to the ZedBoard, we can now run the example and confirm that it works. Once the board boots, you need to log in; the username and password are both “root.”

 

Enter the following command in the terminal to run the example:

 

modprobe zynq_remoteproc firmware=image_echo_test

 

 

 

 

 Image11.jpg

 

Results of running the first command

 

 

 

After the channel is created, you may need to press return to bring up the command prompt again. Now enter the following command:

 

modprobe rpmsg_user_dev_driver

 

 

Image12.jpg  

Result of running the second command

 

 

Now it is time to run the application itself. Enter the following:

 

 

echo_test

 

 

Image13.jpg

 

Start of the Echo Test application

 

 

Select option 1 and the test will run to completion. You can exit the application by selecting 2:

 

 

Image14.jpg

 

Successful test of the example application
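Under the hood, echo_test writes payloads to the rpmsg channel device and checks that the remote firmware returns each one byte-for-byte. A minimal sketch of that verification loop, using an in-memory pipe as a loopback stand-in for the rpmsg character device (the actual device node and its name are not opened here):

```python
import os

def run_echo_test(dev_write, dev_read, num_payloads=8):
    """Send deterministic payloads of increasing size and verify that
    each one is echoed back unchanged."""
    for i in range(1, num_payloads + 1):
        payload = bytes(range(256)) * i        # repeatable test pattern
        os.write(dev_write, payload)
        echoed = os.read(dev_read, len(payload))
        if echoed != payload:
            return False
    return True

if __name__ == "__main__":
    # A pipe acts as the loopback "remote"; on the ZedBoard the kernel
    # driver exposes a character device that plays this role instead.
    r, w = os.pipe()
    print("echo test", "passed" if run_echo_test(w, r) else "FAILED")
```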

 

 

 

At this point, we have successfully demonstrated that OpenAMP is running on the Zynq SoC. (We will look at the Zynq UltraScale+ MPSoC in due course.)

 

Now we are ready to start developing our own applications using OpenAMP.

 

 

 

Code is available on Github as always.

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E Book here
  • First Year Hardback here.

 

 

MicroZed Chronicles hardcopy.jpg 

 

 

 

  • Second Year E Book here
  • Second Year Hardback here

 

 

 MicroZed Chronicles Second Year.jpg

 

 

 

Danger 100GbE.jpg

As the BittWare video below explains, CPUs are simply not able to process 100GbE packet traffic without hardware acceleration. BittWare’s new StreamSleuth, to be formally unveiled at next week’s RSA Conference in San Francisco (Booth S312), adroitly handles blazingly fast packet streams thanks to a hardware assist from an FPGA. And as the subhead in the title slide of the video presentation says, StreamSleuth lets you program its FPGA-based packet-processing engine “without the hassle of FPGA programming.”

 

(Translation: you don’t need Verilog or VHDL proficiency to get this box working for you. You get all of the FPGA’s high-performance goodness without the bother.)

 

 

 

BittWare StreamSleuth.jpg 

 

 

That said, as BittWare’s Network Products VP & GM Craig Lund explains, this is not an appliance that comes out of the box ready to roll. You need (and want) to customize it. You might want to add packet filters, for example. You might want to actively monitor the traffic. And you definitely want the StreamSleuth to do everything at wire-line speeds, which it can. “But one thing you do not have to do,” says Lund, “is learn how to program an FPGA.” You still get the performance benefits of FPGA technology—without the hassle. That means that a much wider group of network and data-center engineers can take advantage of BittWare’s StreamSleuth.

 

As Lund explains, “100GbE is a different creature” than prior, slower versions of Ethernet. Servers cannot directly deal with 100GbE traffic and “that’s not going to change any time soon.” The “network pipes” are now getting bigger than the server’s internal “I/O pipes.” This much traffic entering a server this fast clogs the pipes and also causes “cache thrash” in the CPU’s L3 cache.

 

Sounds bad, doesn’t it?

 

What you want is to reduce the network traffic of interest down to something a server can look at. To do that, you need filtering. Lots of filtering. Lots of sophisticated filtering. More sophisticated filtering than what’s available in today’s commodity switches and firewall appliances. Ideally, you want a complete implementation of the standard BPF/pcap filter language running at line rate on something really fast, like a packet engine implemented in a highly parallel FPGA.
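To make the filtering idea concrete, here is a toy software evaluator for a small subset of pcap-style predicates over already-decoded packet headers. This is a sketch of the concept only; StreamSleuth compiles the actual BPF/pcap filter language into the FPGA’s parallel packet engine rather than evaluating anything in software:

```python
def make_filter(proto=None, src=None, dst_port=None):
    """Return a predicate over packet dicts, mimicking a pcap filter
    such as 'tcp and src host 10.0.0.1 and dst port 443'."""
    def match(pkt):
        if proto is not None and pkt["proto"] != proto:
            return False
        if src is not None and pkt["src"] != src:
            return False
        if dst_port is not None and pkt["dst_port"] != dst_port:
            return False
        return True
    return match

# Three decoded headers standing in for a packet stream.
packets = [
    {"proto": "tcp", "src": "10.0.0.1", "dst_port": 443},
    {"proto": "udp", "src": "10.0.0.2", "dst_port": 53},
    {"proto": "tcp", "src": "10.0.0.3", "dst_port": 80},
]

https_from_host = make_filter(proto="tcp", src="10.0.0.1", dst_port=443)
kept = [p for p in packets if https_from_host(p)]
print(f"kept {len(kept)} of {len(packets)} packets")
```

The point of the FPGA implementation is that thousands of such predicate evaluations proceed in parallel at line rate, which a CPU loop like this cannot do at 100GbE.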

 

The same thing holds true for attack mitigation at 100GbE line rates. Commodity switching hardware isn’t going to do this for you at 100GbE (10GbE yes, but 100GbE, “no way”) and you can’t do it in software at these line rates. “The solution is FPGAs,” says Lund, and BittWare’s StreamSleuth with FPGA-based packet processing gets you there now.

 

Software-based defenses cannot withstand Denial of Service (DoS) attacks at 100GbE line rates. FPGA-accelerated packet processing can.

 

So what’s the FPGA inside the BittWare StreamSleuth doing? It comes preconfigured for packet filtering, load balancing, and routing. (“That’s a Terabit router in there.”) To go beyond these capabilities, you use the BPF/pcap language to program your requirements into the StreamSleuth’s 100GbE packet processor using a GUI or APIs. That packet processor is implemented with a Xilinx Virtex UltraScale+ VU9P FPGA.

 

Here’s what the guts of the BittWare StreamSleuth look like:

 

 

BittWare StreamSleuth Exploded Diagram.jpg 

 

And here’s a block diagram of the StreamSleuth’s packet processor:

 

 

 

BittWare StreamSleuth Packet Processor Block Diagram.jpg 

 

 

The Virtex UltraScale+ FPGA resides on a BittWare XUPP3R PCIe board. If that rings a bell, perhaps you read about that board here in Xcell Daily last November. (See “BittWare’s UltraScale+ XUPP3R board and Atomic Rules IP run Intel’s DPDK over PCIe Gen3 x16 @ 150Gbps.”)

 

Finally, here’s the just-released BittWare StreamSleuth video with detailed use models and explanations:

 

 

 

 

 

 

 

 

For more information about the StreamSleuth, contact BittWare directly or go see the company’s StreamSleuth demo at next week’s RSA conference. For more information about the packet-processing capabilities of Xilinx All Programmable devices, click here. And for information about the new Xilinx Reconfigurable Acceleration Stack, click here.

 

 

 

 

 

 

Annapolis Microsystems has adopted the Xilinx Zynq UltraScale+ MPSoC in a big way today by introducing three 6U and 3U OpenVPX boards and one PCIe board based on three of the Zynq UltraScale+ MPSoC family members. The four new COTS boards are:

 

 

 

Annapolis Microsystems WILDSTAR UltraKVP ZP 3PE for 6U OpenVPX.jpg

Annapolis Microsystems WILDSTAR UltraKVP ZP 3PE for 6U OpenVPX

 

 

 

 

 

 

Annapolis Microsystems WILDSTAR UltraKVP ZP 2PE for 6U OpenVPX.jpg

Annapolis Microsystems WILDSTAR UltraKVP ZP 2PE for 6U OpenVPX

 

 

 

 

 

 

 

Annapolis Microsystems WILDSTAR UltraKVP ZP for 3U OpenVPX.jpg

Annapolis Microsystems WILDSTAR UltraKVP ZP for 3U OpenVPX

 

 

 

 

 

 

 

WILDSTAR UltraKVP ZP for PCIe.jpg

Annapolis Microsystems WILDSTAR UltraKVP ZP for PCIe

 

 

Why choose the Xilinx Zynq UltraScale+ MPSoC? Noah Donaldson, Annapolis Micro Systems VP of Product Development explains: “Here at Annapolis we pride ourselves on being first to integrate the latest, cutting-edge components into our FPGA boards, all in pursuit of one goal: Delivering the highest-performing COTS boards and systems that have been proven in some of the most challenging environments on earth.”

 

That pretty much says it all, doesn’t it?

 

 

 

What are people doing with the Amazon Web Services FPGA-based F1 services? Quite a lot.

by Xilinx Employee, 02-09-2017

 

Amazon Web Services (AWS) rolled out the F1 instance for cloud application development based on Xilinx Virtex UltraScale+ VU9P FPGAs last November. (See “Amazon picks Xilinx UltraScale+ FPGAs to accelerate AWS, launches F1 instance with 8x VU9P FPGAs per instance.”) It appears from the following LinkedIn post that people are already using it to do some pretty interesting things:

 

 

AWS F1 Neural Net application.jpg 

 

 

If you’re interested in Cloud computing applications based on the rather significant capabilities of Xilinx-based hardware application acceleration, check out the Xilinx Acceleration Zone.

 

Last September, Xilinx announced the six members of the 28nm Spartan-7 FPGA family for “cost-sensitive” designs (that’s marketing-speak for “low-cost”) and for designs that require small-footprint devices. (The two smallest members of the Spartan-7 family will be offered in 8x8mm CPGA196 packages with 100 user I/O pins.)

 

 

Spartan-7 FPGA Family Table v2.jpg

 

 

 

There’s a new 15-minute video with a quick technical overview of the Spartan-7 family:

 

 

 

 

And you can download the 50-page Advance Product Specification here.

 

 

By Adam Taylor

 

Having introduced the Zynq UltraScale+ MPSoC last week, this week it is time to look at the Avnet UltraZed-EG SOM and its carrier card and to start building our first “hello world” program.

 

Like its MicroZed and PicoZed predecessors, the UltraZed-EG is a System on Module (SOM) that contains all of the necessary support functions for a complete embedded processing system. As a SOM, this module is designed to be integrated with an application-specific carrier card. In this instance, our application-specific card is the Avnet UltraZed IO Carrier Card.

 

The specific Zynq UltraScale+ MPSoC contained within the UltraZed SOM is the XCZU3EG-SFVA625, which incorporates a quad-core ARM Cortex-A53 APU (Application Processing Unit), dual ARM Cortex-R5 processors in an RPU (Real-Time Processing Unit), and an ARM Mali-400 GPU. These processors are coupled with a very high performance programmable-logic array based on the Xilinx UltraScale+ FPGA fabric; suffice it to say that exploring how best to use all of these resources will keep us very, very busy. You can find the 36-page product specification for the device here.

 

The UltraZed SOM itself, shown in the diagram below, provides 2GBytes of DDR4 SDRAM, while non-volatile storage for our application(s) is provided by both dual QSPI and eMMC flash memory. Most of the Zynq UltraScale+ MPSoC’s PS and PL I/O are broken out to one of three headers to provide maximum flexibility on the application-specific carrier card.

 

 

Avnet UltraZed Block Diagram.jpg

 

 

Avnet UltraZed-EG SOM Block Diagram

 

 

 

The UltraZed IO Carrier Card (IOCC) breaks out the I/O pins from the SOM to a wide variety of interface and interconnect technologies including Gigabit Ethernet, USB 2/3, UART, PMOD, DisplayPort, SATA, and Arduino Shield. This diverse set of I/O connections gives us wide latitude in developing all sorts of systems. The IOCC also provides a USB-to-JTAG interface, allowing us to program and debug our system. You’ll find more information on the IOCC here.

 

Having introduced the UltraZed and its IOCC, it is time to write a simple “hello world” application and to generate our first Zynq UltraScale+ MPSoC design.

 

The first step on this journey is to make sure we have used the provided voucher to generate a license and downloaded the Design Edition of the Vivado Design Suite.

 

The next step is to install the board files to provide Vivado with the necessary information to create designs targeting the UltraZed SoM. You can download these files using this link. These board-definition files include information such as the actual Zynq UltraScale+ MPSoC device populated on the SoM, connections to the PS on the IOCC, and a preset configuration for the SoM. We can of course create an example without using these files, however it requires a lot more work.

 

Once you have downloaded the zip file, extract the contents into the following directory:

 

 

<Vivado Install Root>/data/boards/boardfiles

 

 

When this is complete, you will see that the UltraZed board definitions are now in the directory and we can use them within our design.

 

 

Image2.jpg 

 

 

I should point out that some of the UltraZed boards (including mine) use ES1 silicon. To alert Vivado to this, we need to create an init.tcl file in the scripts directory that enables the use of ES1 silicon. Doing so is very simple. Within the directory:

 

<Vivado root>/scripts

 

Create a file called init.tcl and enter the line "enable_beta_device*" into it to enable the use of ES1 silicon within your toolchain.

 

 

Image3.jpg 

 

 

 

With this completed, we can open Vivado and create a new RTL project. After entering the project name and location, click Next on the add-sources, IP, and constraints tabs. This brings you to the part-selection tab. Click on Boards and you should see the UltraZed IOCC board. Select that board and then finish the new-project dialog.

 

 

Image4.jpg

 

 

 

This will create a new project.

 

For this project, I am going to use just the Zynq UltraScale+ MPSoC’s PS to print “hello world.” I usually like to do this with new boards to ensure that I have pipe-cleaned the tool chain. To do this, we need a hardware-definition file to export to SDK to define the hardware platform.

 

The first step in this sequence is within Flow Navigator. On the left-hand side of the Vivado screen, select the Create Block Diagram option. This will provide a dialog box allowing you to name your block design (or you can leave it default). Click OK and this will create a blank block diagram (in the example below mine is called design_1).

 

 

Image5.jpg 

 

 

 

Within this block diagram, we need to add an MPSoC system. Click on the “add IP” button as indicated in the block diagram. This will bring up an IP dialog. Within the search box, type in “MPSoC” and you will see the Zynq UltraScale+ MPSoC IP block. Double click on this and it will be added to the diagram automatically.

 

 

Image6.jpg

 

 

 

Once the block has been added, you will notice a designer assistance notification across the top of the block diagram. For the moment, do not click on that. Instead, double click on the MPSoC IP in your block diagram and it will open up the customization screen for the MPSoC, just like any other IP block.

 

 

Image7.jpg

 

 

 

Looking at the customization screen, you will see that it is not yet configured for the target board. For instance, the IOU block has no MIO configuration. Had we not downloaded the board definition, we would now have to configure this manually. But why do that when we can use the shortcut?

 

Image8.jpg 

 

 

We have the board-definition files, so all we need to do to correctly configure this for the IOCC is close the customization dialog and click on the Run Block Automation notification at the top of the block diagram. This will configure the MPSoC for our use on the IOCC. Within the block automation dialog, check to make sure that the “apply pre-sets” option is selected before clicking OK.

 

 

Image9.jpg 

 

 

Re-open the MPSoC IP block again and you will see a different configuration of the MPSoC—one that is ready to use with our IOCC.

 

 

Image10.jpg

 

 

Do not change anything. Close the dialog box. Then, on the block diagram, connect the PL_CLK0 pin to the maxihpm0_lpd_aclk pin. Once that is complete, click on “validate” to ensure that the design has no errors.

 

 

Image11.jpg 

 

 

 

The next step is very simple. We’ll create an RTL wrapper file for the block diagram. This will allow us to implement the design. Under the sources tab, right-click on the block diagram and select “create HDL wrapper.” When prompted, select the option that allows Vivado to manage the file for you and click OK.

 

 

Image12.jpg

 

 

 

To generate the bitstream, click on the “Generate Bitstream” icon on the menu bar. If you are prompted about any stages being out of date, re-run them first by clicking on “yes.”

 

 

Image13.jpg

 

 

 

Depending on the speed of your system, this step may take a few minutes or longer to generate the bitstream. Once completed, select the “open implementation” option. Having the implementation open allows us to export the hardware definition file to SDK where we will develop our software.

 

 

Image14.jpg

 

 

 

To export the hardware definition, select File-> Export->Export Hardware. Select “include bit file” and export it.

 

 

Image15.jpg

 

 

 

To those familiar with the original Zynq SoC, all of this should look pretty familiar.

 

We are now ready to write our first software program—next time.

 

 

You can find links to previous editions of the MPSoC edition here.

 

 

 

Code is available on Github as always.

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E Book here
  • First Year Hardback here.

 

 

MicroZed Chronicles hardcopy.jpg 

 

 

 

  • Second Year E Book here
  • Second Year Hardback here

 

 

 MicroZed Chronicles Second Year.jpg

 

 

Accolade’s newly announced ATLAS-1000 Fully Integrated 1U OEM Application Acceleration Platform pairs a Xilinx Kintex UltraScale KU060 FPGA on its motherboard with an Intel x86 processor on a COM Express module to create a network-security application accelerator. The ATLAS-1000 platform integrates Accolade’s APP (Advanced Packet Processor), instantiated in the Kintex UltraScale FPGA, which delivers acceleration features for line-rate packet processing including lossless packet capture, nanosecond-precision packet timestamping, packet merging, packet filtering, flow classification, and packet steering. The platform accepts four 10G SFP+ or two 40G QSFP pluggable optical modules. Although the ATLAS-1000 is designed as a flow-through security platform, especially for bump-in-the-wire applications, there’s also 1TByte of on-board local SSD storage.

 

 

 Accolade ATLAS-1000.jpg

 

Accolade Technology's ATLAS-1000 Fully Integrated 1U OEM Application Acceleration Platform

 

 

 

Here’s a block diagram of the ATLAS-1000 platform:

 

 

ATLAS-1000 Platform Block Diagram.jpg 

 

All network traffic enters the FPGA-based APP for packet processing. Packet data is then selectively forwarded to the x86 CPU COM Express module depending on the defined application policy.

 

 

Please contact Accolade Technology directly for more information about the ATLAS-1000.

 

 

 

A new LinkedIn post written by Iain Mosely, Founder of Converter Technology, and titled “Using the Xilinx Zynq SoC for Real Time Control of Power Electronics” details the author’s early experience in developing PWM controllers for power electronics converters using the Zynq SoC’s programmable logic. There are several reasons to note this post in Xcell Daily.

 

First, Mosely credits Xcell Daily author Adam Taylor for the help he’s provided through 169 installments of the MicroZed Chronicles (so far):

 

“Learning any new technology is great fun and challenging at the same time. We found the MicroZED Chronicles by Adam Taylor to be an excellent resource to help us get up and running and I really recommend looking at these if you want to learn about the Zynq technology.”

 

Then, Mosely gets to the heart of the matter:

 

“So, if you are used to the world of micro-controllers then a good way to think about the Zynq parts is to consider them as a micro-controller and FPGA on the same piece of Silicon. In fact the Zynq we used contains a dual-core ARM cortex A9 processor system (PS) so is a highly capable device surrounded by a huge amount of programmable logic (PL).”

 

“With the SoC approach used in the Zynq, the user now has the option to create their own custom hardware peripherals within the Zynq chip which can then be controlled by the on-chip ARM cores. What this really means is that the user has flexibility to create their own set of tightly coupled hardware peripherals (e.g. PWM block) using a hardware description language such as Verilog or VHDL. This gives incredible flexibility to the user to define exactly how they want their peripheral to behave and since it is implemented in physical gates on the device, timings can be highly deterministic.”

 

It’s reasonable to ask if this really is an efficient way to engineer a power converter. After all, many microcontrollers have PWM capabilities in their timer/counter peripherals. Mosely has an explanation:

 

“So, you might think that this is an awful lot of effort to go to to control a 30W flyback - and you would be correct! However, imagine the situation whereby you need to control multiple converters running out of phase (e.g. multiphase buck) or two converters in one system (e.g. PFC and downstream). What about multilevel converters for high voltage applications? In these more complex cases it can become increasingly difficult to control everything using a single micro-controller, especially if switching frequencies and loop bandwidths are being pushed to higher levels. Using a microcontroller is very much a 'sequential' approach to the design whereby at a certain point the processor cannot operate fast enough to implement the control algorithm. By offloading aspects of the real time control to the FPGA fabric in a device such as the Zynq, it becomes possible to run many operations in parallel which can bring significant speed advantages, especially in multi-phase systems.”
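Mosely’s multiphase argument is easy to see numerically: an N-phase converter needs N carriers offset by 1/N of the switching period, each compared against the duty command on every cycle, and that work maps naturally onto parallel fabric. A software model of one switching period (the sawtooth carrier shape and step count are illustrative assumptions, not Mosely’s design):

```python
def multiphase_pwm(duty, phases, steps=100):
    """Sample one switching period of N sawtooth carriers offset by
    1/N of a period each; return per-phase gate waveforms (lists of 0/1)."""
    waves = []
    for p in range(phases):
        offset = p * steps // phases
        carrier = [((t - offset) % steps) / steps for t in range(steps)]
        waves.append([1 if duty > c else 0 for c in carrier])
    return waves

waves = multiphase_pwm(duty=0.25, phases=4)
# Each phase conducts for the same fraction of the period...
print("per-phase on-times:", [sum(w) for w in waves])
# ...while the rising edges are evenly staggered across the period,
# which is what spreads the input ripple current in a multiphase buck.
print("rising edges:", [w.index(1) if w[0] == 0 else 0 for w in waves])
```

In fabric, each phase comparator is its own piece of hardware updating every clock; a single microcontroller would have to service all N comparisons sequentially.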

 

 

 

Note: If you’re not reading the frequent installments of Adam Taylor’s MicroZed Chronicles, you’re missing a lot of good help.

 

 

The Zynq-based Red Pitaya open instrumentation board gives you a programmable platform like an Arduino or a Raspberry Pi, but with the added kick of high-speed ADCs and DACs for analog instrumentation projects. The Red Pitaya organization always intended the Red Pitaya board to be a learning tool, and Anton Potočnik at ETH Zürich has started writing a blog series to help you learn how to program the board. So far, he’s published four projects:

 

 

 

Red Pitaya Frequency Counter Block Diagram.jpg

 

 

Red Pitaya Frequency Counter Block Diagram

 

 

The latest blog post is nearly a month old, so let’s hope there’s another soon.

 

For previous Xcell Daily blog posts about the Red Pitaya, see:

 

 

 

 

 

 

The new VC Z series of industrial Smart Cameras from Vision Components incorporate a Xilinx Zynq Z-7010 SoC to give the camera programmable local processing. The VC nano Z camera series is available as a bare-board imaging platform called the VCSBC series or as a fully enclosed camera called the VC series. The VCSBC series is available with 752x480-pixel (WVGA), 1280x1024-pixel (SXGA), 1600x1200-pixel, or 2048x1536-pixel sensors. These camera modules acquire video at rates from 50 to 120 frames/sec depending on sensor size. All four of these modules are also available with remote sensor heads as the VCSBC nano Z-RH series to ease system integration. Thanks to the added video-processing horsepower of the Zynq SoC, these modules are also offered in dual-sensor, stereo-imaging versions called the VCSBC nano Z-RH-2 series.

 

 

VCSBC nano Stereo Camera.jpg
 

Vision Components VCSBC nano Z-RH-2 industrial stereo smart camera module

 

 

These same cameras are also available from Vision Components with rugged enclosures and lens mounts as the VC nano Z series and the VC pro Z series. The VC pro Z versions can be equipped with IR LED illumination.

 

 

VC pro Z Enclosed Camera Module.jpg

 

 

Vision Components VC pro Z enclosed industrial smart camera

 

 

 

The ability to create more than a dozen different programmable cameras and camera modules from one platform arises directly from the use of the integrated Xilinx Zynq SoC. The cameras use the Zynq SoC’s dual-core ARM Cortex-A9 MPCore processor to run Linux and to support the extensive programmability made possible by software tools such as Halcon Embedded from MVTec Software, which allows you to comfortably develop applications on a PC and then export them to Vision Components’ Smart Cameras. The Zynq SoC’s on-chip programmable logic can perform a variety of vision-processing tasks such as white-light interferometry, color conversion, and high-speed image recognition (such as OCR, bar-code reading, and license-plate recognition) in real time.

 

These cameras make use of the extensive, standard I/O capabilities in the Zynq SoC including high-speed Ethernet, I2C, and serial I/O while the Zynq SoC’s programmable I/O provides the interfacing flexibility needed to accommodate the four existing image sensors offered in the series or any other sensor that Vision Components might wish to add to the VC Z series of smart cameras in the future. According to Endre J. Tóth, Director of Business Development at Vision Components, these programmable capabilities give his company a real competitive advantage.

 

Here’s a 5-minute video detailing some of the applications you can address with these Smart Cameras from Vision Components:

 

 

 

 

Note: For more information about these Smart Cameras, please contact Vision Components directly.

 

 

 

 

 

By Adam Taylor

 

 

As we are going to be looking at both the Zynq Z-7000 SoC and the Zynq UltraScale+ MPSoC in this series moving forward, one important aspect we need to consider is how we can best leverage the processor cores provided within our chosen device. How we use these cores, of course, depends upon the system architecture we implement to achieve the overall system requirements. Increasingly, system designers use an asymmetric approach to obtain optimal performance and to address system-design challenges, which are usually application-specific.

 

At this point, those unfamiliar with the term may find themselves asking what an asymmetric approach is. Simply put, an asymmetric approach is one where different processing elements within a device perform different functions, and some may be running different operating systems to do so. One example would be one of the two ARM Cortex-A9 processor cores of a Zynq SoC running Linux and handling system communications and other tasks that do not need to be addressed in real time, while the second processor core runs a bare-metal or FreeRTOS application that executes real-time processing tasks and communicates the results to the first core.
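Conceptually, the simplest form of that core-to-core communication is a shared-memory mailbox polled by both sides. The following C sketch models the idea in a single process; the struct, names, and sizes are illustrative and not part of any Xilinx API. In a real Zynq AMP design the structure would sit at a fixed address in memory visible to both cores and would need proper barriers and cache management:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative model of a one-slot shared-memory mailbox between two
 * cores. In a real AMP design this struct would live at a fixed,
 * uncached (or explicitly managed) address visible to both cores. */
typedef struct {
    volatile uint32_t ready;  /* 1 = message waiting for the consumer */
    uint32_t          len;    /* payload length in bytes              */
    uint8_t           data[60];
} mailbox_t;

mailbox_t shared_mbox;        /* stands in for a fixed shared location */

/* Producer (e.g. the real-time core): publish a result. */
int mailbox_send(mailbox_t *m, const void *buf, uint32_t len)
{
    if (m->ready || len > sizeof m->data)
        return -1;            /* previous message not yet consumed    */
    memcpy(m->data, buf, len);
    m->len = len;
    m->ready = 1;             /* publish last, after data is written  */
    return 0;
}

/* Consumer (e.g. the Linux core): poll for and consume a message. */
int mailbox_recv(mailbox_t *m, void *buf, uint32_t maxlen)
{
    if (!m->ready)
        return -1;            /* nothing pending                      */
    uint32_t n = m->len < maxlen ? m->len : maxlen;
    memcpy(buf, m->data, n);
    m->ready = 0;             /* hand the slot back to the producer   */
    return (int)n;
}
```

Setting `ready` last, and clearing it only after the copy completes, is the essential ordering. The OpenAMP framework introduced below performs the same producer/consumer handshake for you, but over shared-memory vrings and with proper life-cycle management.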

 

When we implement systems in such a manner, we call this asymmetric multi-processing or AMP. We have looked at AMP before, briefly. However, we did not look at the OpenAMP framework developed by the Multicore Association. This open-source framework is supported by both the Zynq SoC and Zynq UltraScale+ MPSoC and provides the software elements necessary for us to establish an AMP system. As such, it is something we need to be familiar with as we develop our examples going forward.

 

The alternative is a symmetric multi-processing (SMP) system in which all the cores run the same operating system and balance the workload among themselves. An example of this would be running Linux on both cores of a Zynq SoC.

 

Creating an AMP system allows us to leverage the parallelism provided by having several processing cores available; i.e., we can set different cores to perform different tasks under the control of a master core. However, AMP does come with challenges, such as implementing inter-process communication, sharing resources, controlling processes, and managing the remote processor’s life cycle. The OpenAMP framework is designed to address these issues while enabling reuse and portability at the same time.

 

When working with the Zynq SoC and Zynq UltraScale+ MPSoC, we can implement AMP solutions that have the following configurations:

 

  • Linux Master – Bare-Metal remote
  • Linux Master – FreeRTOS remote

 

I should note at this point that while in the Zynq SoC we can use one core to run Linux as the master, in the Zynq UltraScale+ MPSoC we can use the quad-core APU (based on ARM Cortex-A53 processors) to run Linux while using the dual-core RPU (based on ARM Cortex-R5 processors) as the remote to run the bare-metal or FreeRTOS applications.

 

The master core, running Linux, contains most of the OpenAMP framework within the kernel. There are three main components:

 

  • virtIO – A virtualization interface that provides the shared-memory transport and device drivers used for communication between the processors
  • remoteProc – The API that controls the remote processor. It starts and stops execution, allocates resources, and creates the virtIO devices. This API performs what is often termed the Life Cycle Management (LCM) of the remote processor
  • RPMsg – The API that provides inter-process communication between the master and remote processors.
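To make RPMsg more concrete, here is a small C model of the message framing it uses; the header layout follows the virtio RPMsg header used by the Linux kernel’s rpmsg bus, but the two framing functions are purely illustrative. In real code you would call the OpenAMP library’s send and receive routines rather than building buffers by hand:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Header layout used by RPMsg over virtio (as in the Linux kernel's
 * rpmsg bus driver). Every message carries source and destination
 * endpoint addresses, so many logical channels can share one transport. */
struct rpmsg_hdr {
    uint32_t src;      /* source endpoint address      */
    uint32_t dst;      /* destination endpoint address */
    uint32_t reserved;
    uint16_t len;      /* payload length in bytes      */
    uint16_t flags;
    uint8_t  data[];   /* payload follows the header   */
};

/* Illustrative: frame a payload for a remote endpoint into a buffer
 * (assumed large enough and suitably aligned). Returns bytes used. */
size_t rpmsg_frame(uint8_t *buf, uint32_t src, uint32_t dst,
                   const void *payload, uint16_t len)
{
    struct rpmsg_hdr *hdr = (struct rpmsg_hdr *)buf;
    hdr->src = src;
    hdr->dst = dst;
    hdr->reserved = 0;
    hdr->len = len;
    hdr->flags = 0;
    memcpy(hdr->data, payload, len);
    return sizeof *hdr + len;
}

/* Illustrative: recover the destination endpoint and payload. */
uint16_t rpmsg_parse(const uint8_t *buf, uint32_t *dst,
                     void *payload, uint16_t maxlen)
{
    const struct rpmsg_hdr *hdr = (const struct rpmsg_hdr *)buf;
    uint16_t n = hdr->len < maxlen ? hdr->len : maxlen;
    *dst = hdr->dst;
    memcpy(payload, hdr->data, n);
    return n;
}
```

In an actual OpenAMP application these buffers live in vring descriptors in shared memory, and the endpoint addresses are established when the remote firmware boots and announces its channels.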

 

The diagram below (taken from UG1186, “OpenAMP Framework for Zynq Devices”) illustrates the interaction between the master and remote processors using OpenAMP.

 

 

 

Image1.jpg

 

Example of OpenAMP flow

 

 

Of course, when we build our bare-metal and FreeRTOS applications, we also need to include the necessary libraries within the BSP to support OpenAMP: the OpenAMP library and the libmetal library.

 

Having introduced the OpenAMP concept, next week we will look at how we can get an example up and running on a Zynq device.

 

 

Code is available on GitHub as always.

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E-Book here
  • First Year Hardback here

 

 

 

MicroZed Chronicles hardcopy.jpg 

 

 

  • Second Year E-Book here
  • Second Year Hardback here

 

 

MicroZed Chronicles Second Year.jpg 

 

 

 

 

All of Adam Taylor’s MicroZed Chronicles are cataloged here.

 

 

 

 

 

 

SDVoE Logo.jpg

The Pro AV industry’s transition from proprietary audio/video transport to lower-cost, IP-based solutions is already underway but, like any new field, the differing approaches make the situation somewhat chaotic. That’s why 14 leading vendors are launching the SDVoE (Software Defined Video Over Ethernet) Alliance at this week’s ISE 2017 show in Amsterdam (Booth 12-H55).

 

The SDVoE Alliance is a non-profit consortium that’s developing standards to provide “an end-to-end hardware and software platform for AV extension, switching, processing and control through advanced chipset technology, common control APIs and interoperability.” The consortium also plans to create an ecosystem around SDVoE technology.

 

An SDVoE announcement late last week said that fourteen new companies were joining the original six founding member companies (AptoVision, Aquantia, Christie Digital, NETGEAR, Sony, and ZeeVee). The new member companies are:

 

  • DVIGear
  • Grandbeing
  • IDK Corporation
  • Arista
  • Aurora Multimedia
  • HDCVT
  • Techlogix Networx
  • Xilinx

 

 

You might recognize Aquantia’s name on this list from the recent Xcell Daily blog post about the company’s new AQLX107 device, which packs an Ethernet PHY—capable of operating at 10Gbps over 100m of Cat 6a cable (or 5Gbps down to 100Mbps over 100m of Cat 5e cable)—along with a Xilinx Kintex-7 FPGA into one compact package.

 

The connection here is not at all coincidental. The Aquantia AQLX107 “FPGA-programmable PHY” makes a pretty nice device for implementing SDVoE and, in fact, Aquantia and AptoVision announced such an implementation just today. According to this announcement, “Combined with AptoVision’s BlueRiver technology, the AQLX107 can be used to transmit true 4K60 video across off-the-shelf 10G Ethernet networks and standard category cable with zero frame latency. Audio and video processing, including upscaling, downscaling, and multi-image compositing are all realizable on the SDVoE hardware and software platform made possible by the AQLX107.”

 

The presence of Xilinx on this list of SDVoE Alliance members also should not be surprising. Xilinx has long worked with major Pro AV vendors to meet a wide variety of professional and broadcast-video challenges including any-to-any connectivity, all of the latest video-compression technologies, and video over IP.

 

In fact, Xilinx and its Alliance Members will be demonstrating some of the most recent AV innovations and implementations in the Xilinx booth at this week’s ISE show, including:

 

  • 4K HEVC Real-Time Compression – presented by Xilinx
  • 4K HDMI Over IP Using TICO – presented by intoPIX
  • 4K HDMI Over IP Using VC-2 HQ – presented by Barco Silex
  • 4K Real Time Warp & Image Stitching – presented by Omnitek
  • 8K Real Time Video Processing – presented by Omnitek

 

Check out these demonstrations in booth 14-B132 at the ISE 2017 show.

 

 

 

Earlier this week at Photonics West in San Francisco, Tattile introduced the high-speed, 12Mpixel S12MP Smart Camera based on a Xilinx Zynq Z-7030 SoC. (See “Tattile’s 12Mpixel S12MP industrial GigE Smart Camera captures 300 frames/sec with Zynq SoC processing help.”) However, the S12MP camera is not the company’s first smart camera to be based on Xilinx Zynq SoCs. In fact, the company previously introduced four other color/monochrome/multispectral, C-mount smart cameras based on various Zynq SoC family members:

 

  • The 640x480-pixel, 120 frames/sec S50 Compact Smart Camera series based on a CMOSIS CMV300 image sensor and a single-core Xilinx Zynq Z-7000S SoC.

 

 

Tattile S50 Smart Camera.jpg 

 

Tattile S50 Compact Smart Camera based on a single-core Xilinx Zynq Z-7000S SoC

 

 

 

  • The VGA-to-4Mpixel, 35-250 frames/sec Next-Generation S100 Smart Camera series based on one of three CMOSIS image sensors and a dual-core Xilinx Zynq Z-7000 SoC.

 

 

 

Tattile S100 Smart Camera.jpg 

 

Tattile S100 Compact Smart Camera based on a dual-core Xilinx Zynq Z-7000 SoC

 

 

 

  • The 4.2Mpixel, 180 frames/sec High-Performance S200 Smart Camera series based on a CMOSIS CMV4000 image sensor and a dual-core Xilinx Zynq Z-7000 SoC.

 

  • The Hyperspectral S200 Hyp Smart Camera series based on one of three hyperspectral image sensors and a dual-core Xilinx Zynq Z-7000 SoC.

 

 

Tattile S200 Smart Camera.jpg

 

 

Tattile S200 and S200 Hyp Smart Cameras based on a dual-core Xilinx Zynq Z-7000 SoC

 

 

All of these cameras use the Zynq SoC’s on-chip programmable logic to perform a variety of real-time vision-processing tasks. For example, the S50 and S100 Smart Cameras use the on-chip programmable logic for image acquisition and image preprocessing. The S200 Hyp camera also uses the programmable logic to perform reflectance calculations and multispectral image/cube reconstruction. In addition, Tattile makes the real-time processing capabilities of the programmable logic in these cameras available to its customers through software, including a graphical development tool.

 

The compatible Xilinx Zynq Z-7000 and Z-7000S SoCs give Tattile’s development teams a choice of several devices with a variety of cost/performance/capability ratios while allowing Tattile to develop a unified camera platform on which to base a growing family of programmable smart cameras. The Zynq SoCs’ programmable I/O permits any type of image sensor to be used, including the multispectral line, tiled, and mosaic sensors used in the S200 Hyp series. The same basic controller design can be reused multiple times and the design is future-proof—ready to handle any new sensor type that might be introduced at a later date.

 

That’s exactly what happened with the newly introduced S12MP Smart Camera.

 

 

Please contact Tattile directly for more information about these Smart Cameras.

 

About the Author
Steve Leibson is the Director of Strategic Marketing and Business Planning at Xilinx. He started as a system design engineer at HP in the early days of desktop computing, then switched to EDA at Cadnetix, and subsequently became a technical editor for EDN Magazine. He's served as Editor in Chief of EDN Magazine, Embedded Developers Journal, and Microprocessor Report. He has extensive experience in computing, microprocessors, microcontrollers, embedded systems design, design IP, EDA, and programmable logic.