

A new toy arrived this week from Digilent: the pocket-sized Analog Discovery 2. Actually, I’m purposely being quite unfair. The Analog Discovery 2 is not a toy. It’s a nicely designed USB multifunction analog and digital instrument that combines:



  • A two-channel USB digital oscilloscope (1MΩ, ±25V, differential, 14-bit, 100MS/s, 30MHz+ bandwidth - with the Analog Discovery BNC Adapter Board)
  • A two-channel arbitrary function generator (±5V, 14-bit, 100MS/s, 12MHz+ bandwidth - with the Analog Discovery BNC Adapter Board)
  • A stereo audio amplifier to drive external headphones or speakers with replicated AWG signals
  • A 16-channel digital logic analyzer (3.3V CMOS and 1.8V or 5V tolerant, 100MS/s)
  • A 16-channel digital pattern generator (3.3V CMOS, 100MS/s)
  • 16 channels of virtual digital I/O including buttons, switches, and simulated LED indicators
  • Two input/output digital trigger signals for linking multiple instruments (3.3V CMOS)
  • A digital voltmeter (AC, DC, ±25V)
  • Two programmable power supplies (0…+5V, 0…-5V in 0.1V steps), powered from USB or a wall wart for more current



From these basic instrumentation resources, the Analog Discovery 2’s companion software package, WaveForms 2015, synthesizes some even more complex and very useful instruments including:



  • A network analyzer – Bode, Nyquist, and Nichols transfer diagrams of a circuit. Range: 1Hz to 10MHz
  • A spectrum analyzer – power spectrum and spectral measurements (noise floor, SFDR, SNR, THD, etc.)
  • Digital bus analyzers (SPI, I²C, UART, parallel)



The retail price for this all-in-one instrument is $279, which represents tremendous bang for the buck. You can do a lot of serious work with this product.


Like its close lookalike sibling, the $199.99 Digilent Digital Discovery, and its immediate predecessor, the Digilent Analog Discovery, Digilent’s Analog Discovery 2 derives much of its flexible nature from a Xilinx Spartan-6 FPGA. What’s the difference between the Analog Discovery 2 and the Digital Discovery? Digilent’s Kaitlyn Franz answers this question in two recent blog posts titled “Analog Discovery 2 vs Digital Discovery – A Battle of Logic” and “I Have an Analog Discovery, Do I Need a Digital Discovery?”


I’ll let Kaitlyn’s comparison chart from her blog explain it succinctly:



Digilent Analog and Digital Discovery Comparison Chart.jpg 



Thanks to a number of analog ICs from Analog Devices, the Analog Discovery 2 has analog measurement capability, two analog arbitrary waveform generators, and two programmable power supplies. By comparison, the Digital Discovery has more logic-analysis channels and a much faster maximum sample rate of 800Msamples/sec. That both of these products rely on a Xilinx Spartan-6 FPGA for their flexibility and programmability says a lot about the value of basing a product’s design on an FPGA. It also shows that clever engineering can extract a lot of performance from an FPGA that many consider a “low-end” part.


Here’s a block diagram of the Digilent Analog Discovery 2, taken from the Reference Manual:




Digilent Analog Discovery 2 Block Diagram v2.jpg 




The Reference Manual also contains detailed descriptions and a theory of operation with schematics for all of the Analog Discovery 2’s sections, so you can see how the Analog Devices parts (and the Xilinx Spartan-6 FPGA) were used in the design. Digilent wants this product to be a teaching tool in multiple dimensions.


In an homage to a couple of my favorite engineering YouTubers—EEVblog’s Dave “Don’t Turn it On, Take it Apart” Jones from Australia and AvE (“Arduino Versus Evil,” and unfortunately I can’t repeat his signature phrase in Xcell Daily) from Canada—I popped the transparent covers off of the Analog Discovery 2 for some close-up photos of the two sides of its PCB:




Digilent Analog Discovery 2 Board and Case.jpg 





Digilent Analog Discovery 2 Board Top.jpg 





Digilent Analog Discovery 2 Board Bottom.jpg




That’s a fairly busy 2-sided board layout, resulting in a very compact product.


I plan to put the Analog Discovery 2 through its paces in the near future using the new Xcell Daily Hardware Lab I’ve set up and I’ll chronicle the results of those experiments in future blog posts.



For more information about the Digilent Analog Discovery 2 in Xcell Daily, see “$279 Analog Discovery 2 DSO, logic analyzer, power supply, etc. relies on Spartan-6 for programmability, flexibility.” For more information about the Digilent Digital Discovery, see “$199.99 Digital Discovery from Digilent implements 800Msample/sec logic analyzer, pattern generator. Powered by Spartan-6” and “Hands On: Testing the $199.99 Digilent Digital Discovery Portable Logic Analyzer (based on a Spartan-6 FPGA).”



For more information about either of these instruments, contact Digilent directly.





Step-by-Step instructions for getting up and running with a Zynq-based Digilent ZYBO trainer board

by Xilinx Employee ‎07-27-2017 04:26 PM - edited ‎07-27-2017 04:29 PM (5,481 Views)


Digilent’s Alex Wong has just published a blog post on RS-online’s DesignSpark with step-by-step instructions for getting your first program (“hello world” of course) running on a Digilent ZYBO trainer board, based on a Xilinx Zynq Z7010 SoC. It doesn’t get any simpler than this.



Digilent ZYBO.jpg



Here’s an inspiring short video from National Instruments (NI) where educators from Georgia Tech, the MIT Media Lab, the University of Manchester, and the University of Waterloo discuss using a variety of NI products to inspire students, pique their curiosity, and foster deeper understanding of many complex engineering concepts while thoroughly disguising all of it as fun. Among the NI products shown in this 2.5-minute video are several products based on Xilinx All Programmable devices including:





Here’s the video:







For more information about these Xilinx-based NI products, see:










By Adam Taylor


Connecting the low-cost, Zynq-based Avnet MiniZed dev board to our WiFi network allows us to transfer files between the board and our development environment quickly and easily. I will use WinSCP—a free, open-source SFTP client, FTP client, WebDAV client, and SCP client for Windows—to do this because it provides an easy-to-use, graphical method to upload files.


If we have power cycled or reset our MiniZed between enabling the WiFi (as in the previous blog) and connecting to it using WinSCP, we will need to rerun the WiFi setup script. LED D10 on the MiniZed board will be lit when WiFi is enabled. Once we are connected to the WiFi network, we can use WinSCP to log in remotely. In the example below, you can see the address the MiniZed was assigned on my network. The username and password are the same as for login over the terminal; both are set to root.





Connecting the MiniZed to the WiFi network



Once we are connected with WinSCP, we can see the file systems on both our host computer and the MiniZed. We can simply drag and drop files between the two file systems to upload or download files. It can’t get much easier than this until we develop mind-reading capabilities for Zynq-based products. What we need now is a simple program we can use to prove the setup.






WinSCP connected and able to upload and download files



To create a simple program, we can use SDK targeting the Zynq SoC’s ARM Cortex-A9 processor. There is also a “hello world” program template that we can use as the basis for our application. Within SDK, create a new project (File -> New -> Application Project) as shown in the images below; this creates a simple “hello world” application.









Opening the helloworld.c file within the created application allows you to customize the program if you so desire.


Once you are happy with your customization, your next step is to build the project, which will result in an ELF file. We can then upload this ELF file to the MiniZed using WinSCP and use the terminal to run our first example. Make sure to set the permissions for read, write, and execute when uploading the file to the MiniZed dev board.


Within the terminal window, we can then run the application by executing it using the command:




When I executed this command, I received the following in response that proved everything was working as expected:






Once we have this simple program running successfully, we can create more complex programs for various applications, including ones that use the MiniZed dev board’s WiFi networking capabilities. To do this we need to use sockets, which we will explore in a future blog.


Having gotten the MiniZed board’s WiFi up and running and loaded a simple “hello world” program, we now turn our attention to the board’s Bluetooth wireless capability, which we have not yet enabled. We enable Bluetooth networking in a similar manner to WiFi networking. Navigate to /usr/local/bin/ and run the ls command. In the results, you will see not only the script we used to turn on WiFi (WIFI.sh) but also a script file named BT.sh for turning on Bluetooth. Running this script turns on Bluetooth. You will see blue LED D9 illuminate on the MiniZed board when Bluetooth is enabled, and within the console window you will notice that the Bluetooth feature configures itself and starts scanning. If there is a discoverable Bluetooth device in the area, you will see it listed. In the example below, you can see my TV.






If we have another device that we wish to communicate with, re-running the same script will cause an issue. Instead, we use the command hcitool scan:







Running this command after making my mobile phone discoverable resulted in my Samsung S6 Edge phone being added to the list of Bluetooth devices.


Now we know how to enable both WiFi and Bluetooth on the MiniZed board, how to write our own program, and how to upload it to the MiniZed.



In future blogs, we will look at how we can transfer data using both the Bluetooth and WiFi in our applications.



Code is available on GitHub as always.




If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.




  • First Year E Book here
  • First Year Hardback here.



MicroZed Chronicles hardcopy.jpg 



  • Second Year E Book here
  • Second Year Hardback here


MicroZed Chronicles Second Year.jpg 





Xilinx 7 series FPGAs have 50-pin I/O banks with one common supply voltage for all 50 pins. The smaller Spartan-7 FPGAs have 100 I/O pins in two I/O banks, so it might be convenient in some smaller designs (or even some not-so-small designs) to combine the I/O for configuration and DDR memories into one FPGA I/O bank (plus the dedicated configuration bank 0) if possible so that the remaining I/O bank can operate at a different I/O voltage.


It turns out, you can do this with some MIG (Memory Interface Generator) magic, a little Vivado tool fiddling, and a simple level translator for the Flash memory’s data lines.


Application note XAPP1313 titled “Spartan-7 FPGA Configuration with SPI Flash and Bank 14 at 1.35V” shows you how to do this with a 1.8V Quad SPI Flash memory and 1.35V DDR3L SDRAM. Here’s a simplified diagram of what’s going on:



XAPP1313 Figure 1.jpg 




The advantage here is that you don’t need to move up to a larger FPGA to get another I/O bank.


For step-by-step instructions, see XAPP1313.






If you’re developing FPGA-based designs using the Spartan-6 family and would like to rehost on Windows 10, keep reading. ISE 14.7 now runs on Windows 10. You’ll need to download ISE 14.7 for Spartan-6 devices on Windows 10 using the instructions in this 3-minute video, which walks you through the process:







Adam Taylor’s MicroZed Chronicles, Part 207: Setting up MiniZed WIFI and Bluetooth Connectivity

by Xilinx Employee ‎07-17-2017 10:34 AM - edited ‎07-18-2017 02:54 PM (6,699 Views)


By Adam Taylor


So far on our journey, every Zynq SoC and Zynq UltraScale+ MPSoC we have looked at has had two or more ARM microprocessor cores. However, I recently received the new Avnet MiniZed dev board based on a Zynq Z-7007S SoC. This board is really exciting for several reasons. It is the first board we’ve looked at that’s based on a single-core Zynq SoC. (It has one ARM Cortex-A9 processor core that runs as fast as 667MHz in the speed grade used on the board.) And like the snickerdoodle, it comes with support for WiFi and Bluetooth. This is a really interesting board and it sells for a mere $89 in the US.


Xilinx designed the single-core Zynq for cost-optimized and low-power applications. In fact, we have been using just a single core for most of the Zynq-based applications we have looked at over this series unless we have been running Linux, exploring AMP, or looking at OpenAMP. One processor core is still sufficient for many, many applications.


The MiniZed dev board itself comes with 512Mbytes of DDR3L SDRAM, 128Mbits of QSPI flash memory, and 8Gbytes of eMMC flash memory. When it comes to connectivity, in addition to the wireless links, the MiniZed board also provides two PMOD interfaces and an Arduino/ChipKit Shield connector. It also provides an on-board temperature sensor, accelerometer and microphone.


Here’s a block diagram of the MiniZed dev board:







Its wireless connectivity, on-board peripherals, and low cost make the MiniZed board ideal for a range of applications, especially those that fall within the Internet of Things and Industrial Internet of Things domains.


When we first open the box, the MiniZed board comes preinstalled with a PetaLinux image loaded into the QSPI flash memory. This has a slight limitation: the QSPI flash is not large enough to host a PetaLinux image with both a Bluetooth and a WiFi stack, so only the WiFi stack is present in the out-of-the-box condition. If we want to use Bluetooth—and we do—we need to connect over WiFi and upload a new boot loader so that we can load a full-featured PetaLinux image from the eMMC flash. The first challenge, of course, is to connect over WiFi. We will look at that in the rest of this blog.


The first step is to download the demo application files from the MiniZed Website. This provides us with the following files which we need to use in the demo:


  • bin – a boot loader used to load the boot image from eMMC flash
  • ub – PetaLinux with the Bluetooth stack
  • conf – configuration file (wpa_supplicant.conf) where we can define the WiFi SSID and key


To correctly set up the MiniZed for our future adventures, we will also need a USB memory stick. On our host PC, we open the file wpa_supplicant.conf using a program like Notepad++. We then add our network’s SSID and PSK so that the MiniZed can connect to our network. Once this is done, we save the file to the USB memory stick’s root.
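For reference, a minimal wpa_supplicant.conf in the standard wpa_supplicant syntax might look like the fragment below; the SSID and PSK shown are placeholders for your own network’s values:

```
ctrl_interface=/var/run/wpa_supplicant
ap_scan=1

network={
    ssid="MyHomeNetwork"
    psk="MyNetworkPassphrase"
}
```

Any network listed in a network block becomes a candidate for joining when the WiFi script starts wpa_supplicant.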





Setting the WiFi SSID and PSK



The next step is to power on the MiniZed board and connect it to a PC using a USB cable from the computer’s USB port to the MiniZed board’s JTAG/UART connector. Connect a second USB cable from the MiniZed’s auxiliary input connector for power. We need to do this because of the USB port’s current supply limits. Without the auxiliary USB cable, we can’t be sure that the memory stick can be powered correctly when plugged into the MiniZed board.


Press the MiniZed board’s reset button and you should see the Linux OS boot in your terminal screen. Once booted, log in with the password and username of root.


We can then plug in the USB memory stick. The MiniZed board should discover the USB memory stick and you should see it reported in the terminal window:





Memory Stick detection




To log on to our WiFi network, we need to copy this file to the eMMC. To do this, we issue the following commands via the terminal.




These commands change the directory to the eMMC and erase anything within it before changing directory to the USB memory stick and listing its contents, where we should see our wpa_supplicant.conf file.


The next step is to copy the file from the USB memory stick to the eMMC and check that it has been copied correctly:




We are then ready to start the WiFi. We can do this by navigating to





You should see this:







Now that we are connected to the WiFi network, we can enable Bluetooth and transfer files wirelessly, which we will look at next time.




Code is available on GitHub as always.




If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.




  • First Year E Book here
  • First Year Hardback here.



MicroZed Chronicles hardcopy.jpg 



  • Second Year E Book here
  • Second Year Hardback here



MicroZed Chronicles Second Year.jpg 


Adam Taylor’s MicroZed Chronicles, Part 206: Software for the Digilent Nexys Video Project

by Xilinx Employee ‎07-12-2017 10:21 AM - edited ‎07-12-2017 11:04 AM (6,150 Views)


By Adam Taylor


With the MicroBlaze soft processor system up and running on the Nexys Video Artix-7 FPGA Trainer Board, we need some software to generate a video output signal. In this example, we are going to use the MicroBlaze processor to generate test patterns. To do this, we’ll write data into the Nexys board’s DDR SDRAM so that the VDMA can read this data and output it over HDMI.


The first thing we will need to do in the software is define the video frames, which are going to be stored in memory and output by the VDMA. To do this, we will define three frames within memory. We will define each frame as a two-dimensional array:




Here, DISPLAY_NUM_FRAME is set to 3 and DEMO_MAX_FRAME is set to 1920 * 1080 * 3. This accounts for the maximum frame resolution; the final multiplication by 3 accommodates the three 8-bit color components (red, green, and blue) of each pixel.


To access these frames, we use an array of pointers to each of the three frame buffers. Defining things this way eases our interaction with the frames.
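Using the names quoted above, the declarations might look like the following sketch (the demo’s actual code will differ in detail):

```c
#include <stdint.h>

#define DISPLAY_NUM_FRAME 3
#define DEMO_MAX_FRAME    (1920 * 1080 * 3)  /* largest frame x 3 bytes per pixel */

/* Three frame buffers, one byte per color component */
uint8_t frameBuf[DISPLAY_NUM_FRAME][DEMO_MAX_FRAME];

/* Array of pointers to each frame buffer, used when passing frames around */
uint8_t *pFrames[DISPLAY_NUM_FRAME];

void init_frames(void)
{
    for (int i = 0; i < DISPLAY_NUM_FRAME; i++)
        pFrames[i] = frameBuf[i];
}
```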


With the frames defined, the next step is to initialize and configure the peripherals within the design. These are:


  • VDMA – Uses DMA to move data from the board’s DDR SDRAM to the output video chain.
  • Dynamic Clocking IP – Outputs the pixel clock frequency and multiples of this frequency for the HDMI output.
  • Video Timing Controller 0 – Defines the output display timing depending upon resolution.
  • Video Timing Controller 1 – Determines the video timing of the received input. In this demo, this controller grabs input frames from a source.


To ensure the VDMA functions correctly, we need to define the stride. This is the separation in bytes between the start of each line within the DDR memory. For this application, the stride is 3 * 1920, which corresponds to the maximum line length.
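As a sketch, stride-based addressing turns a pixel coordinate into a byte offset like this (the helper name is illustrative, not taken from the demo code):

```c
#include <stdint.h>

#define STRIDE (3 * 1920)  /* bytes from the start of one line to the next */

/* Byte offset of pixel (x, y) within a frame buffer laid out with STRIDE. */
uint32_t pixel_offset(uint32_t x, uint32_t y)
{
    return y * STRIDE + x * 3;  /* 3 bytes (R, G, B) per pixel */
}
```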

When it comes to the application, we will be able to set different display resolutions from 640x480 to 1920x1080.






No matter what resolution we select, we will be able to draw test patterns on the screen using software functions that write to the DDR SDRAM. When we change functions, we will need to reconfigure the VDMA, Video Timing Controller 0, and the dynamic clocking module.


Our next step is to generate video output. With this example, there are many functions within the main application that generate, capture, and display video. These are:


  1. Bar Test Pattern – Generates several color bars across the screen
  2. Blended Test Pattern – Generates a blended color test pattern across the screen
  3. Streaming from the HDMI input to the output
  4. Grab an input frame and invert colors
  5. Grab an input frame and scale to the current display resolution


Within each of these functions, we pass a pointer to the frame currently being output so that we can modify the pixel values in memory. This can be done simply, as shown in the code snippet below, which sets the red, blue, and green pixels. Each pixel color value is an unsigned 8-bit quantity.
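As an illustrative sketch of this idea (the function name, bar count, and R-G-B byte ordering are my assumptions, not the demo’s actual code), a function that writes a color-bar test pattern directly into a frame might look like this:

```c
#include <stdint.h>

/* Fill a frame with 8 vertical color bars.
   frame points to width*height*3 bytes; stride is the line pitch in bytes. */
void draw_bars(uint8_t *frame, uint32_t width, uint32_t height, uint32_t stride)
{
    for (uint32_t y = 0; y < height; y++) {
        for (uint32_t x = 0; x < width; x++) {
            uint8_t *px = frame + y * stride + x * 3;
            uint32_t bar = (x * 8) / width;   /* which of the 8 bars x falls in */
            px[0] = (bar & 1) ? 255 : 0;      /* red   */
            px[1] = (bar & 2) ? 255 : 0;      /* green */
            px[2] = (bar & 4) ? 255 : 0;      /* blue  */
        }
    }
}
```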






When we run the application, we can choose which of the functions we want to exercise using the menu output over the UART terminal:







Setting the program to output color bars and the blended test gave the outputs below on my display:







Now we know how we can write information to DDR memory and see it appear on our display. We could generate a Mandelbrot pattern using this approach pretty simply and I will put that on my list of things to cover in a future blog.



Code is available on GitHub as always.




If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.




  • First Year E Book here
  • First Year Hardback here.



 MicroZed Chronicles hardcopy.jpg



  • Second Year E Book here
  • Second Year Hardback here


 MicroZed Chronicles Second Year.jpg




SoundAI MicA Development Kit for Far-field Speech-Recognition Systems: Powered by Xilinx Spartan-6 FPGA

by Xilinx Employee ‎07-11-2017 09:18 AM - edited ‎07-12-2017 10:49 AM (5,712 Views)


Voice control is hot. Witness Amazon Echo and Google Home. These products work because they’re designed to recognize the spoken word from a distance—far-field speech recognition. It’s a useful capability in a wide range of consumer, medical, and industrial applications, and SoundAI now has a kit that lets you use far-field speech recognition to differentiate your next system design, whether it’s a smart speaker; an in-vehicle, speech-based control system; a voice-controlled IoT or IIoT device; or some other never-seen-before device. The SoundAI 60C MicA Development Kit employs FPGA-accelerated machine learning and FPGA-based signal processing to implement advanced audio noise suppression, de-reverberation, echo cancellation, direction-of-arrival detection, and beamforming. The FPGA acceleration is performed by a Xilinx Spartan-6 SLX4 FPGA. (There’s also an available version built into a smart speaker.)




SoundAI MicA Development Kit for Far-Field Speech Recognition.jpg


SoundAI 60C MicA Development Kit for Far-Field Speech Recognition



The SoundAI MicA Development Kit’s circular circuit board measures 3.15 inches (80mm) in diameter and incorporates 7 MEMS microphones and 32 LEDs in addition to the Spartan-6 FPGA. According to SoundAI, the kit can capture voice from as far as 5m away, detect commands embedded in the 360-degree ambient sound, localize the voice to within ±10°, and deliver clean audio to the speech-recognition engine (Alexa for English and SoundAI for Chinese).




By Adam Taylor


With the Vivado design for the Lepton thermal imaging IR camera built and the breakout board connected to the Arty Z7 dev board, the next step is to update the software so that we can receive and display images. To do this, we can also use the HDMI-out example software application as this correctly configures the board’s VDMA output. We just need to remove the test-pattern generation function and write our own FLIR control and output function as a replacement.


This function must do the following:



  1. Configure the I2C and SPI peripherals using the XIicPs and XSpi APIs provided when we generated the BSP. To ensure that we can communicate with the Lepton camera, we need to set the I2C address to 0x2A and configure the SPI for CPOL=1, CPHA=1, and master operation.
  2. To determine that the Lepton camera module is ready, we read its status register over the I2C interface. If the camera is correctly configured and ready when we read this register, the Lepton camera will respond with 0x06.
  3. With the camera module ready, we can read out an image and store it within memory. To do this we execute several SPI reads.
  4. Having captured the image, we can move the stored image into the memory location being accessed by VDMA to display the image.



To successfully read out an image from the Lepton camera, we need to synchronize the VoSPI output to find the start of the first line in the image. The camera outputs each line as a 160-byte block (Lepton 2) or two 160-byte blocks (Lepton 3), and each block has a 2-byte ID and a 2-byte CRC. We can use this ID to capture the image, identify valid frames, and store them within the image store.
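Based on the block layout just described, the synchronization check can be sketched in C as follows. This is a simplification that follows the publicly documented Lepton VoSPI format (discard packets carry 0xF in the low nibble of the first ID byte); CRC checking is omitted and the struct/function names are mine:

```c
#include <stdint.h>

#define VOSPI_PAYLOAD 160  /* one Lepton 2 line per packet */

/* A VoSPI packet: 2-byte ID, 2-byte CRC, then 160 bytes of pixel data. */
struct vospi_packet {
    uint8_t id[2];
    uint8_t crc[2];
    uint8_t payload[VOSPI_PAYLOAD];
};

/* Discard packets carry 0xF in the low nibble of the first ID byte;
   valid packets carry the line number in the low 12 bits of the ID. */
int vospi_is_discard(const struct vospi_packet *p)
{
    return (p->id[0] & 0x0F) == 0x0F;
}

int vospi_line(const struct vospi_packet *p)
{
    return ((p->id[0] & 0x0F) << 8) | p->id[1];
}
```

The capture loop keeps reading packets, skipping discards, until it sees line 0, then stores each valid line at its reported index.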


Performing steps 3 and 4 allows us to increase the size of the displayed image on the screen. The Lepton 2 camera used for this example has a resolution of only 80 horizontal pixels by 60 vertical pixels. This image would be very small when displayed on a monitor, so we can easily scale the image to 640x480 pixels by outputting each pixel and line eight times. This scaling produces a larger image that’s easier to recognize on the screen, although it may look a little blocky.
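The 8x scaling described above amounts to a nearest-neighbour replication of indices. A minimal sketch (grayscale, one byte per pixel for simplicity; the function name is mine):

```c
#include <stdint.h>

#define SRC_W 80
#define SRC_H 60
#define SCALE 8  /* 80x60 -> 640x480 */

/* Replicate each source pixel 8 times horizontally and each line
   8 times vertically (nearest-neighbour upscale). */
void scale_8x(uint8_t src[SRC_H][SRC_W],
              uint8_t dst[SRC_H * SCALE][SRC_W * SCALE])
{
    for (int y = 0; y < SRC_H * SCALE; y++)
        for (int x = 0; x < SRC_W * SCALE; x++)
            dst[y][x] = src[y / SCALE][x / SCALE];
}
```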


However, scaling alone will not present the best image quality as we have not configured the Lepton camera module to optimize its output. To get the best quality image from the camera module, we need to use the I2C command interface to enable parameters such as AGC (automatic gain control), which affects the contrast and quality of the output image, and flat-field correction to remove pixel-to-pixel variation.


To write or read back the camera module’s settings, we need to create a data structure as shown below and write that structure into the camera module. If we are reading back the settings, we can then perform an I2C read to read back the parameters. Each 16-bit access requires two 8-bit commands:


  • Write to the command word at address 0x00 0x04.
  • Generate the command-word data formed from the Module ID, Command ID, Type, and Protection bit. This word informs the camera module which element of the camera we wish to address and if we wish to read, write, or execute the command.
  • Write the number of words to be read or written to the data-length register at address 0x00 0x06.
  • Write the data words to addresses 0x00 0x08 through 0x00 0x26.
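As a hedged sketch of how the command word from the second bullet might be assembled (the field layout follows FLIR’s public Lepton interface description; the helper and register names are mine, not from the camera’s SDK):

```c
#include <stdint.h>

/* Register addresses from the sequence above */
#define LEP_REG_COMMAND  0x0004
#define LEP_REG_DATALEN  0x0006
#define LEP_REG_DATA0    0x0008

enum lep_cmd_type { LEP_GET = 0, LEP_SET = 1, LEP_RUN = 2 };

/* Build the 16-bit command word: Module ID in the high byte, the command
   base in bits 7:2, the Type (get/set/run) in bits 1:0, and the OEM/RAD
   protection bit at bit 14. */
uint16_t lep_command_word(uint8_t module_id, uint8_t command_base,
                          enum lep_cmd_type type, int oem_protected)
{
    uint16_t w = ((uint16_t)module_id << 8) | ((command_base & 0x3F) << 2) | type;
    if (oem_protected)
        w |= 0x4000;
    return w;
}
```

The resulting word is what gets written over I2C to the command register (LEP_REG_COMMAND) after the data length and any data words have been set up.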


This sequence allows us to configure the Lepton camera so that we get the best performance. When I executed the updated program, I could see the image below of myself taking a picture of the monitor screen. The image has been scaled up by a factor of 8.






Now that we have this image on the screen, I want to integrate this design with the MiniZed dev board and configure the camera to transfer images over a wireless network.


Code is available on GitHub as always.


If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.




  • First Year E Book here
  • First Year Hardback here.



MicroZed Chronicles hardcopy.jpg 



  • Second Year E Book here
  • Second Year Hardback here


MicroZed Chronicles Second Year.jpg 







YouTube teardown and repair videos are one way to uncover previously unknown applications of Xilinx components. Today I found a new-this-week video teardown and repair of a non-operational Agilent (now Keysight) 53152A 46GHz Microwave Frequency Counter that uncovers a pair of vintage Xilinx parts: an XC3042A FPGA (with 144 CLBs!) and an XC9572 CPLD with 72 macrocells. Xilinx introduced the XC3000 FPGA family in 1987 and the XC9500 CPLD family appeared a few years later, so these are pretty vintage examples of early programmable logic devices from Xilinx—still doing their job in an instrument that Agilent introduced in 2001. That’s a long-lived product!


Looking at the PCB, I’d say that the XC3042A FPGA implements a significant portion of the microwave counter’s instrumentation logic and the XC9572 CPLD connects all of the LSI components to the adjacent microprocessor. (These days, I could easily see replacing the board’s entire upper-left quadrant’s worth of ICs with one Zynq SoC. Less board space, far more microprocessor and programmable-logic performance.)



Agilent 53152A Microwave Frequency Counter Main Board with Xilinx FPGA and CPLD.jpg 


Agilent 53152A Microwave Frequency Counter main board with vintage Xilinx FPGA and CPLD

(seen in the upper left)




A quick look at the Keysight Web site shows that the 53152A counter is still available and lists for $19,386. If you look at it through my Xilinx eyeglasses, that’s a pretty good multiplier for a couple of Xilinx parts that were designed twenty to thirty years ago. The 42-minute video was made by YouTube video makers Shahriar and Shayan Shahramian, who call their Patreon-supported channel “The Signal Path.” In this video, Shahriar manages to repair this 53152A counter that he bought for about $650—so he’s doing pretty well too. (Spoiler alert: the problem’s not with the Xilinx devices; they still work fine.)


I really enjoy watching well-made repair videos of high-end equipment and always learn a trick or two. This video by The Signal Path is indeed well made and takes its time explaining each step and why it’s performed. Other than telling you that the Xilinx parts are not the problem, I’m not going to give the plot away (other than to say, as usual, that the butler did it).



Here’s the video:






I’m sure you realize that Xilinx continues to sell FPGAs—otherwise, you wouldn’t be on this blog page—although today’s FPGAs are a lot more advanced with many hundreds of thousands or millions of logic cells. But perhaps you didn’t realize that Xilinx is still in the CPLD business. If that’s a surprise to you, I recommend that you read this: “Shockingly Cool Again: Low-power Xilinx CoolRunner-II CPLDs get new product brief.” Xilinx CoolRunner-II CPLDs aren't offered in the 72-macrocell size, but you can get them with as many as 384 macrocells if you wish.




Freelance documentary cameraman, editor, and producer/director Johnnie Behiri has just published a terrific YouTube video interview with Sebastian Pichelhofer, acting Project Leader of Apertus’ Zynq-based AXIOM Beta open-source 4K video camera project. (See below for more Xcell Daily blog posts about the AXIOM open-source 4K video camera.) This video is remarkable in the amount of valuable information packed into its brief, 20-minute duration. This video is part of Behiri’s cinema5D Web site and there’s a companion article here.


First, Sebastian explains the concept behind the project: develop a camera with features in demand, with development funded by a crowd-funding campaign. Share the complete, open-source design with community members so they can hack it, improve it, and give these improvements and modifications back to the community.


A significant piece of news: Sebastian says that the legendary Magic Lantern team (a group dedicated to adding substantial enhancements to the video and imaging capabilities of Canon dSLR cameras) is now on board as the project’s color-science experts. As a result, says Sebastian, the camera will be able to feature push-button selection of different “film stocks.” Film selection was one way for filmmakers to control the “look” of a film, back in the days when they used film. These days, camera companies devote a lot of effort to developing their own “film” look, but the AXIOM Beta project wants flexibility in this area, as in all other areas. I think Sebastian’s discussion of camera color science from end to end is excellent and worth watching just by itself.


I also appreciated Sebastian’s very interesting discussion of the challenges associated with a crowd-funded, open-source project like the AXIOM Beta. The heart of the AXIOM Beta camera’s electronic package is a Zynq SoC on an Avnet MicroZed SOM and that design choice strongly supports the project team’s desire to be able to quickly incorporate the latest innovations and design changes into systems in the manufacturing process. Here's a photo captured from the YouTube interview:




AXIOM Beta Interview Screen Capture 1.jpg 




At 14:45 in the video, Sebastian attempts to provide an explanation of the FPGA-based video pipeline’s advantages in the AXIOM Beta 4K camera—to the non-technical Behiri (and his mother). It’s not easy to contrast the sequential processing of microprocessor-based image and video processing with the same processing on highly parallel programmable logic when talking to a non-engineering audience, especially on the fly in a video interview, but Sebastian makes a valiant effort. By the way, the image-processing pipeline’s design is also open-source and Sebastian suggests that some brave souls may well want to develop improvements.


At the end of the interview, there are some video clips captured by a working AXIOM prototype. Of course, they are cat videos. How appropriate for YouTube! The videos are nearly monochrome (gray cats) and shot wide open so there’s a very shallow depth of field, but they still look very good to me for prototype footage. (There are additional video clips including HDR clips here on Apertus’ Web site.)




Here’s the cinema5D video interview:







Additional Xcell Daily posts about the AXIOM Beta open-source video camera project:








By Adam Taylor


Over this blog series, I have written a lot about how we can use the Zynq SoC in our designs. We have looked at a range of different applications, especially embedded vision. However, some systems use a pure FPGA approach to embedded vision, as opposed to an SoC like the members of the Zynq family, so in this blog we are going to look at how we can build a simple HDMI input-and-output video-processing system using the Artix-7 XC7A200T FPGA on the Nexys Video Artix-7 FPGA Trainer Board. (The Artix-7 A200T is the largest member of the Artix-7 FPGA device family.)


Here’s a photo of my Nexys Video Artix-7 FPGA Trainer Board:






Nexys Video Artix-7 FPGA Trainer Board




For those not familiar with it, the Nexys Video Trainer Board is intended for teaching and prototyping video and vision applications. As such, it comes with the following I/O and peripheral interfaces designed to support video reception, processing, and generation/output:



  • HDMI Input
  • HDMI Output
  • DisplayPort Output
  • Ethernet
  • UART
  • USB Host
  • 512 MB of DDR SDRAM
  • Line In / Mic In / Headphone Out / Line Out
  • FMC



To create a simple image-processing pipeline, we need to implement the following architecture:







The supervising processor (in this case, a Xilinx MicroBlaze soft-core RISC processor implemented in the Artix-7 FPGA) monitors communications with the user interface and configures the image-processing pipeline as required for the application. In this simple architecture, data received over the HDMI input is converted from its parallel format of Video Data, HSync and VSync into an AXI Streaming (AXIS) format. We want to convert the data into an AXIS format because the Vivado Design Suite provides several image-processing IP blocks that use this data format. Being able to support AXIS interfaces is also important if we want to create our own image-processing functions using Vivado High Level Synthesis (HLS).
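The parallel-to-stream conversion is easy to picture with a small model. The Python sketch below is purely illustrative (the `Beat` type and `to_axis` function are my own names), but its fields mirror the TUSER start-of-frame and TLAST end-of-line sidebands that the AXI4-Stream video protocol uses in place of HSync/VSync:

```python
# Conceptual model of the "Video In to AXI4-Stream" conversion: a parallel
# video source presents pixels plus HSync/VSync, while AXI4-Stream video marks
# the start of frame with TUSER and the end of each line with TLAST.

from dataclasses import dataclass
from typing import List

@dataclass
class Beat:
    data: int    # pixel value
    tuser: bool  # start of frame (SOF)
    tlast: bool  # end of line (EOL)

def to_axis(frame: List[List[int]]) -> List[Beat]:
    """Convert a frame (rows of pixels) into a list of AXI4-Stream video beats."""
    beats = []
    for row_idx, row in enumerate(frame):
        for col_idx, pixel in enumerate(row):
            beats.append(Beat(
                data=pixel,
                tuser=(row_idx == 0 and col_idx == 0),  # SOF on first pixel only
                tlast=(col_idx == len(row) - 1),        # EOL on last pixel of each line
            ))
    return beats

frame = [[1, 2, 3], [4, 5, 6]]   # a tiny 3x2 "frame"
stream = to_axis(frame)
```

Because frame geometry is carried implicitly by SOF/EOL markers rather than sync pulses, downstream AXIS IP blocks can be chained without each one knowing the video timing.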


The MicroBlaze processor needs to be able to support the following peripherals:



  • AXI UART – Enables communication and control of the system
  • AXI Timer – Enables the MicroBlaze to time events
  • MicroBlaze Debug Module – Enables the debugging of the MicroBlaze
  • MicroBlaze Local Memory – Connected to the DLMB and ILMB (Data & Instruction Local Memory Buses)


We’ll use the Memory Interface Generator (MIG) to create a DDR interface to the board’s SDRAM. This interface and the SDRAM create a common frame store accessible to both the image-processing pipeline and the supervising processor via an AXI interconnect.


Creating a simple image-processing pipeline requires the use of the following IP blocks:



  • DVI2RGB – HDMI input IP provided by Digilent
  • RGB2DVI – HDMI output IP provided by Digilent
  • Video In to AXI4-Stream – Converts a parallel-video input into an AXI Stream (Vivado IP)
  • AXI4-Stream to Video Out – Converts an AXI Stream into a parallel-video output (Vivado IP)
  • Video Timing Controller Input – Detects the incoming video parameters (Vivado IP)
  • Video Timing Controller Output – Generates the output video timing parameters (Vivado IP)
  • Video Direct Memory Access – Enables images to be written to and from the DDR SDRAM



The core of this video-processing chain is the VDMA, which we use to move the image into the DDR memory.







The diagram above demonstrates how the IP block converts from streamed data to memory-mapped data for the read and write channels. Both VDMA channels provide the ability to convert between streaming and memory-mapped data as required. The write channel supports Stream-to-Memory-Mapped conversion while the read channel provides Memory-Mapped-to-Stream conversion.
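A toy Python model of the two VDMA channels makes the stream/memory-mapped conversion concrete. The function names and register-like parameters (base, stride, hsize, vsize) are illustrative stand-ins for the values the supervising processor writes into the real VDMA's registers:

```python
# Illustrative model of the VDMA channels. Write channel (S2MM): consume a
# stream of lines and place each line at base + row * stride. Read channel
# (MM2S): fetch the lines back out as a stream.

def vdma_write(memory: bytearray, base: int, stride: int, lines: list) -> None:
    """Stream-to-Memory-Mapped: store each streamed line at its strided address."""
    for row, line in enumerate(lines):
        addr = base + row * stride
        memory[addr:addr + len(line)] = bytes(line)

def vdma_read(memory: bytearray, base: int, stride: int, hsize: int, vsize: int) -> list:
    """Memory-Mapped-to-Stream: fetch vsize lines of hsize bytes each."""
    return [list(memory[base + row * stride: base + row * stride + hsize])
            for row in range(vsize)]

mem = bytearray(64)                                   # stand-in for DDR SDRAM
vdma_write(mem, base=0, stride=8, lines=[[1, 2, 3], [4, 5, 6]])
frame = vdma_read(mem, base=0, stride=8, hsize=3, vsize=2)
```

Note the stride can exceed the line length, which is exactly how a frame store keeps lines aligned to convenient memory boundaries.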


When all this is put together in Vivado to create the initial base system, we get the architecture below, which is provided by the Nexys Video HDMI example.







All that is required now is to look at the software required to configure the image-processing pipeline. I will explain that next time.




Code is available on GitHub as always.




If you want ebook or hardback versions of previous MicroZed Chronicles blogs, you can get them below.




  • First Year Ebook here
  • First Year Hardback here



MicroZed Adam Taylor Special Edition.jpg




  • Second Year Ebook here
  • Second Year Hardback here


MicroZed Chronicles Second Year.jpg 




True audiophiles often know no bounds on the energy they’ll put into their quest for “perfect” sound, and Patrick Cazeles, who goes by “patc” online, is no exception. His attraction to high-end audio started in 2002, veered into FPGAs as early as 2004, and his quest to develop a high-end digital-audio playback system has been intertwined with the Zynq SoC since 2014, not too long after Xilinx started shipping the first Zynq devices. First, Patrick bought an Avnet MicroZed dev board; then he switched to a Zynq-based Parallella board; and now he’s using the low-cost snickerdoodle board from krtkl, which is based on a Zynq Z-7020 SoC. “What an amazing piece of hardware the Zynq is!” he writes.


Here’s a photo of Patrick’s Zyntegrated Digital Audio system in its current incarnation, with an inset photo of his complete audio system:



Zyntegrated Audio System by patc.jpg 



And here’s a block diagram showing all of the Zyntegrated Digital Audio system’s capabilities:




Zyntegrated Audio System block diagram by patc.jpg 




As you can see, the Zynq SoC implements nearly everything in the Zyntegrated Digital Audio System: interfacing to the audio sources (a SATA CD player and an SD card), controlling the touch-panel LCD user interface, receiving remote IR commands, accepting SPDIF digital-audio input, driving eight class-D audio amps in a quad-amped arrangement (four amps per channel driving separate audio frequency bands, with the Zynq SoC’s programmable logic performing the bandpass and low-pass filtering for all four bands of each stereo channel), and taking room-acoustic measurements from a digitized microphone for digital room correction (again over a SPDIF interface).
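The quad-amp band splitting lends itself to a quick sketch. Below is a minimal Python model of a four-way crossover built from complementary one-pole filters; the topology and coefficients are my own illustration, not Patrick’s published design, but they show the key property a crossover for a quad-amped system needs: the four bands sum back to the original signal exactly.

```python
# Sketch of the kind of band splitting the Zynq PL performs for the quad-amped
# outputs: a tree of complementary one-pole filters cuts the signal into four
# bands. (Illustrative only; the crossover coefficients below are made up.)

import math

def split(signal, alpha):
    """One-pole low-pass plus its complement; low + high == input exactly."""
    low, high, y = [], [], 0.0
    for x in signal:
        y = alpha * x + (1.0 - alpha) * y   # one-pole IIR low-pass
        low.append(y)
        high.append(x - y)                  # complementary high-pass
    return low, high

def four_bands(signal, a1=0.05, a2=0.2, a3=0.5):
    lows, rest = split(signal, a1)          # sub-bass | everything above
    low_mid, rest = split(rest, a2)         # low-mid  | everything above
    high_mid, highs = split(rest, a3)       # high-mid | treble
    return lows, low_mid, high_mid, highs

sig = [math.sin(0.3 * n) + 0.5 * math.sin(2.5 * n) for n in range(200)]
b1, b2, b3, b4 = four_bands(sig)
recombined = [sum(s) for s in zip(b1, b2, b3, b4)]   # matches sig sample-for-sample
```

In the real system each of the four bands would feed its own class-D amp per stereo channel.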


Whew! This guy knows how to drive a Zynq SoC!


So, how does it work? Here’s a new, 4-minute video of the Zyntegrated Digital Audio System in action, playing audio from a digitized vinyl-record turntable, a standard audio CD, and WAV files stored on an SD card:





Isn’t that touch-screen interface amazing? The audio sounds nice too, but I suspect there’s considerable compression taking place to pack the quad-amped audio into YouTube’s teeny, tiny sound channel.


Here’s a recent 3-minute video, in which Patrick provides a detailed walkthrough of the current Zyntegrated Digital Audio system’s design:






Interested in even more details? Here’s a detailed, chronological forum thread on the Parallella Community forum documenting the development of this amazing audio system back to 2014.



This week, EEVBlog’s Dave Jones did another DSO teardown—this time on an Owon 14-bit, 200MHz XDS3202A DSO—and found not one but two Xilinx Spartan-6 FPGAs. The Owon XDS3202A DSO has the unusual feature of being able to sample analog signals at 8 bits/1Gsample/sec per channel, 12 bits/500Msamples/sec per channel, or 14 bits/100Msamples/sec per channel. It does this using two Hittite ADCs with the 14/12/8-bit high-speed sampling built in, as Dave explains in the video. (Analog Devices now owns Hittite.)


Dave’s teardown finally gets down to the FPGA electronics more than 10 minutes into the video, and at 15:15 he’s pulled off the heatsink to reveal not one but two Xilinx Spartan-6 FPGAs on the DSO’s main board. One of the Spartan-6 FPGAs, an XC6SLX75, manages the DSO’s two ADCs and stuffs the sampled data into a pair of SK hynix DDR3 SDRAMs. A second Spartan-6 FPGA, this time an XC6SLX9, implements the DSO’s 25/50MHz arbitrary waveform generator.








The RISC-V open-source processor has a growing ecosystem and user community, so it’s not surprising that someone would want to put one of these processors into a low-cost FPGA like a Xilinx Artix-7 device. And what could be easier than doing so using an existing low-cost dev board? Cue Digilent’s Arty Dev Board, currently on sale for $89.99 here. Normally, you’d find a copy of the Xilinx MicroBlaze soft RISC processor core inside Arty’s Artix-7 FPGA, but a SiFive Freedom E310 microcontroller platform, which combines a RISC-V processor with peripherals, seems to fit just fine. That’s just what Andrew Back has done, using the no-cost Xilinx Vivado HL WebPack Edition to compile the HDL.



ARTY v4.jpg


Digilent’s ARTY Artix-7 FPGA Dev Board



With Back’s step-by-step instructions based on SiFive's "Freedom E300 Arty FPGA Dev Kit Getting Started Guide", you can do the same pretty easily. (See “Build an open source MCU and program it with Arduino.”)



Andrew Back is an open-source advocate, Treasurer and Director of the Free and Open Source Silicon Foundation, organizer of the Wuthering Bytes technology festival and founder of the Open Source Hardware User Group.


Note: For more information on the Digilent Arty Dev Board, see “ARTY—the $99 Artix-7 FPGA Dev Board/Eval Kit with Arduino I/O and $3K worth of Vivado software. Wait, What????” and “Free Webinar on $99 Arty dev kit, based on Artix-7 FPGA, now online.”






Drone maker Zerotech announced the Dobby AI pocket-sized drone earlier this year. Now, there’s a Xilinx video of DeePhi Tech’s Fuzhang Shi explaining a bit more about the machine-learning innards of the Dobby AI drone, which uses deep-learning algorithms for tasks including pedestrian detection, tracking, and gesture recognition. DeePhi’s algorithms are running on a Xilinx Zynq Z-7020 SoC integrated into the Dobby AI drone.


Power consumption, stability, and cost are all critical factors in drone design, and DeePhi developed a low-power, low-cost, high-stability system using the Zynq SoC, which executes 230GOPS while consuming a mere 3W. This is far more power-efficient than running similar applications on CPUs or GPUs, explains Fuzhang Shi.




Dobby AI PCB with Zynq SoC.jpg


Zerotech’s Dobby AI palm-sized autonomous drone PCB with a Zynq Z-7020 SoC running DeePhi deep-learning algorithms






Here’s the 2-minute video:



Vivado HLx Logo.jpg 

You can now download the Vivado Design Suite 2017.2 HLx editions, which include many new UltraScale+ devices:


  • Kintex UltraScale+ XCKU13P
  • Zynq UltraScale+ MPSoCs XCZU7EG, XCZU7CG, and XCZU15EG
  • XA Zynq UltraScale+ MPSoCs XAZU2EG and XAZU3EG



In addition, the low-cost Spartan-7 XC7S50 FPGA has been added to the WebPack edition.


Download the latest releases of the Vivado Design Suite HL editions here.






Last month, Xilinx Product Marketing Manager Darren Zacher presented a Webinar on the extremely popular $99 Arty Dev Kit, which is based on a Xilinx Artix-7 A35T FPGA, and it’s now online. If you’re wondering if this might be the right way for you to get some design experience with the latest FPGA development tools and silicon, spend an hour with Zacher and Arty. The kit is available from Avnet and Digilent.


Register to watch the video here.



ARTY v4.jpg 



For more information about the Arty Dev Kit, see: “ARTY—the $99 Artix-7 FPGA Dev Board/Eval Kit with Arduino I/O and $3K worth of Vivado software. Wait, What????






There’s considerable 5G experimentation taking place as the radio standards have not yet gelled and researchers are looking to optimize every aspect. SDRs (software-defined radios) are excellent experimental tools for such research—NI’s (National Instruments’) SDR products especially so because, as the Wireless Communication Research Laboratory at Istanbul Technical University discovered:


“NI SDR products helped us achieve our project goals faster and with fewer complexities due to reusability, existing examples, and the mature community. We had access to documentation around the examples, ready-to-run conceptual examples, and courseware and lab materials around the grounding wireless communication topics through the NI ecosystem. We took advantage of the graphical nature of LabVIEW to combine existing blocks of algorithms more easily compared to text-based options.”


Researchers at the Wireless Communication Research Laboratory were experimenting with UFMC (universal filtered multicarrier) modulation, a leading modulation candidate technique for 5G communications. Although current communication standards frequently use OFDM (orthogonal frequency-division multiplexing), it is not considered to be a suitable modulation technique for 5G systems due to its tight synchronization requirements, inefficient spectral properties (such as high spectral side-lobe levels), and cyclic prefix (CP) overhead. UFMC has relatively relaxed synchronization requirements.


The research team at the Wireless Communication Research Laboratory implemented UFMC modulation using two USRP-2921 SDRs, a PXI-6683H timing module, and a PXIe-5644R VST (Vector Signal Transceiver) module from National Instruments (NI), all programmed with NI’s LabVIEW systems engineering software. Using this equipment, they achieved better spectral results than with OFDM and, by exploiting UFMC’s sub-band filtering approach, they’ve proposed enhanced versions of UFMC. Details are available in the NI case study titled “Using NI Software Defined Radio Solutions as a Testbed of 5G Waveform Research.” This project was a finalist in the 2017 NI Engineering Impact Awards, RF and Mobile Communications category, held last month in Austin as part of NI Week.
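For readers curious what UFMC’s sub-band filtering looks like in practice, here is a minimal Python transmitter sketch: symbols are grouped into sub-bands, each sub-band gets its own IDFT and a short FIR filter, and the filtered sub-bands are summed. The FFT size, sub-band width, and Hann prototype filter are arbitrary choices for illustration, not the laboratory’s actual parameters (real UFMC designs typically use sharper filters such as Dolph–Chebyshev).

```python
# Minimal UFMC transmitter sketch: per-sub-band IDFT followed by per-sub-band
# FIR filtering, then summation. All parameters are illustrative.

import cmath, math

def idft(freq_bins):
    n = len(freq_bins)
    return [sum(freq_bins[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def fir(signal, taps):
    """Full convolution: output length is len(signal) + len(taps) - 1."""
    out = [0j] * (len(signal) + len(taps) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(taps):
            out[i + j] += s * h
    return out

def ufmc_tx(symbols, n_fft=16, band_size=4, n_taps=5):
    taps = [0.5 - 0.5 * math.cos(2 * math.pi * i / (n_taps - 1))  # Hann prototype
            for i in range(n_taps)]
    out = [0j] * (n_fft + n_taps - 1)
    for b in range(0, len(symbols), band_size):
        bins = [0j] * n_fft
        for i, s in enumerate(symbols[b:b + band_size]):
            bins[b + i] = s                    # sub-band occupies contiguous bins
        for i, v in enumerate(fir(idft(bins), taps)):
            out[i] += v                        # sum the filtered sub-bands
    return out

qpsk = [1+1j, 1-1j, -1+1j, -1-1j] * 2          # 8 symbols -> 2 sub-bands
tx = ufmc_tx(qpsk)
```

Note there is no cyclic prefix: the filter ramp-up/ramp-down (the extra n_taps − 1 samples) is what provides the soft transition between UFMC symbols.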



5G UFMC Modulation Testbed.jpg 



5G UFMC Modulation Testbed based on Equipment from National Instruments



Note: NI’s USRP-2921 SDR is based on a Xilinx Spartan-6 FPGA; the NI PXI-6683 timing module is based on a Xilinx Virtex-5 FPGA; and the PXIe-5644R VST is based on a Xilinx Virtex-6 FPGA.






Latest Embedded Muse newsletter publishes testimonial for Saleae and its FPGA-based Logic Analyzers

by Xilinx Employee, 06-12-2017


My very good friend Jack Ganssle, the firmware expert and consultant, publishes a free semi-monthly newsletter called the Embedded Muse for engineers working on embedded systems and the latest issue, #330, contains this testimonial for Saleae and its FPGA-based logic analyzers:


“Another reader has nice things to say about Saleae. Dan Smith writes:


“I've owned Saleae logic analyzers since 2011, starting with their original (and at that time, only) logic analyzer, aptly named "Logic", an 8-channel unit with very good host-side software to go with it.  Never had a problem with the unit or the software.


“Fast forward to a Saturday morning in 2017, where I was debugging a strange bus timing problem under a tight project deadline.  I'd woken up early because I couldn't "turn my mind off" from the night before.  I was using a newer, more expensive model with features I needed (analog capture, much higher bandwidth, etc.)  For some reason, the unit wouldn't enumerate when I plugged it in that morning.  I contacted support on a Saturday morning; to my surprise, I had a response later that day from Mark, one of the founders and also the primary architect and developer of the host-side software. After discussing the problem and my situation, they sent me a replacement unit right away, even before the old unit was returned and inspected.  I'd also received excellent support a few weeks earlier getting the older Logic unit working with a strange combination of MacBook Pro, outdated version of OSX and an uncooperative USB port.


“My point is simply that even though the Saleae products -- hardware and software -- are excellent, it's their customer service that has earned my loyalty.  Too often, great service and support go unmentioned; in my case, it's what saved me.  And yes, I debugged the problem that weekend and met the deadline!”



Saleae engineers its compact, USB-powered logic analyzers using FPGAs like the Xilinx Spartan-6 LX16 FPGA used in its Logic Pro 8 logic analyzer/scope. (See “Jack Ganssle reviews the Saleae Logic Pro 8 logic analyzer/scope, based on a Spartan-6 FPGA” and “Compact, 8-channel, 500Msamples/sec logic analyzer relies on Spartan-6 FPGA to provide most functions.”) Although the Spartan-6 FPGA is a low-cost device, its logic and I/O programmability are a great match for the logic analyzer’s I/O and data-capture needs.



Saleae Logic Analyzer with Spartan-6 FPGA.jpg


Saleae Logic Pro 8 logic analyzer/scope




“Cancer” is one of the scariest words in the English language and 1.6 million people in the US alone will be diagnosed with some form of cancer just this year. About 320,000 of those diagnosed cases will be eligible for proton therapy but there are currently only 24 proton treatment centers in the US. The math says that about 5% of the eligible patients will receive proton therapy and the rest will be treated another way. That’s not a great solution and this graph illustrates why:




Proton therapy Energy chart.jpg 



Protons can deliver more energy to the tumor and less energy to surrounding healthy tissue.


One reason that there are so few proton-treatment centers in the US is that you need a synchrotron or cyclotron to create a sufficiently strong beam of high-energy protons. ProNova is developing a lower-cost, smaller, lighter, and more energy-efficient proton-therapy system called the ProNova SC360 that will make proton therapy more available to cancer patients. The company is doing that by developing a cyclotron that uses superconducting magnets, a multiplexed delivery system that can deliver the resulting proton beam to as many as five treatment rooms, and a treatment gantry that can securely hold and position the patient while precisely delivering a 4mm-to-8mm proton beam to the tumor with 1mm positioning accuracy.



ProNova 360 Proton treatment System.jpg 


ProNova SC360 Proton Therapy System




It takes a lot of real-time control to do all of this, so ProNova developed the DDS (Dose Delivery System) for the SC360 using three National Instruments (NI) sbRIO-9626 embedded controllers, each incorporating a Xilinx Spartan-6 LX45 FPGA for real-time control. The three controllers implement four specific tasks:


  • Control the proton beam intensity
  • Position the proton beam using scanning magnets
  • Monitor delivered dosage to within 1% of the prescribed dose
  • Monitor all aspects of the proton beam and shut off the beam in the case of a fault


A treatment plan contains a set of locations, or spots, in 3D space (horizontal-X, vertical-Y, depth-Z) that each receive a prescribed radiological dose. The system delivers the high-energy protons to the tumor by scanning the variable-intensity beam back and forth through the tumor volume, as shown below:



ProNova Scanning Diagram.jpg 




In addition, these subsystems are responsible for safely removing the beam from the treatment room during spot transitions and enforcing safety interlocks. Hard-wired control signals pass between the Spartan-6 FPGAs on each of the sbRIO controllers to signal spot completion, spot advancement, and treatment faults. Each of these three sbRIO applications is programmed using NI’s LabVIEW systems engineering software.
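The spot-by-spot sequencing described above can be pictured as a simple control loop. This toy Python model (all dose values, increments, and spot coordinates are invented for illustration; the real DDS runs in FPGA fabric with hard real-time guarantees) integrates dose at each spot, cuts the beam when the prescription is reached, and raises a fault if the 1% tolerance is exceeded:

```python
# Toy model of dose-delivery sequencing: visit each (x, y, z) spot, accumulate
# dose in small increments while the beam is on, then advance to the next spot.
# All numbers are illustrative, not ProNova's.

def deliver(spots, increment=0.005, tolerance=0.01):
    """spots: list of (x, y, z, prescribed_dose). Returns delivered doses."""
    delivered = []
    for x, y, z, prescribed in spots:
        dose = 0.0
        while dose < prescribed:            # beam on: integrate monitored dose
            dose += increment
        if dose > prescribed * (1 + tolerance):
            raise RuntimeError(f"overdose fault at spot ({x}, {y}, {z})")
        delivered.append(dose)              # beam off: advance to next spot
    return delivered

plan = [(0, 0, 10, 1.0), (4, 0, 10, 0.8), (8, 0, 10, 0.8)]
doses = deliver(plan)
```

The sketch also shows why the dose increment must be small relative to the tolerance band: the worst-case overshoot is one increment.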


Here’s a block diagram of the ProNova DDS beam-control and -positioning system:




ProNova Beam Control Block Diagram.jpg 



Here’s an example of the kinds of signals generated by this control system:




ProNova Waveforms.jpg



Proton-beam spot durations are on the order of 5msec and spot transitions take less than 800μsec.


ProNova received FDA approval for the SC360 earlier this year and plans to start treating the first patients later this year at the Provision Center for Proton Therapy in Knoxville, Tennessee.


Here’s a 4-minute video explaining the system in detail. It starts sort of over the top, but quickly settles down to the facts:




This life-changing project won a 2017 NI Engineering Impact Award in the Industrial Machinery and Control category last month at NI Week and the 2017 Humanitarian Award. It is documented in this NI case study.


Medium- and heavy-duty fleet vehicles account for a mere 4% of the vehicles in use today but they consume 40% of the fuel used in urban environments, so they are cost-effective targets for innovations that can significantly improve fuel economy. Lightning Systems (formerly Lightning Hybrids) has developed a patented hydraulic hybrid power-train system called ERS (Energy Recovery System) that can be retrofitted to new or existing fleet vehicles including delivery trucks and shuttle buses. This hybrid system can reduce fleet fuel consumption by 20% and decrease NOx emissions (the key component of smog) by as much as 50%! In addition to being a terrific story about energy conservation and pollution control, the development of the ERS system tells a great story about using National Instruments’ (NI’s) comprehensive line of LabVIEW-compatible CompactRIO (cRIO) and Single-Board RIO (sbRIO) controllers to develop embedded controllers destined for production.


Like an electric hybrid vehicle power train, an ERS-enhanced power train recovers energy during vehicle braking and adds that energy back into the power train during acceleration. However, Lightning Systems’ ERS stores the energy using hydraulics instead of electricity.


Here are the components of the ERS retrofit system, shown installed in series with a power train’s drive shaft:




Lightning Hybrids ERS Diagram.jpg 



Major components in the Lightning Systems ERS Hybrid Retrofit System




The power-transfer module (PTM) in the above image drives the hydraulic pump/motor during vehicle braking, pumping hydraulic fluid into the high- and low-pressure accumulator tanks, which act like mechanical batteries that store energy in tanks pressurized by nitrogen-filled bladders. When the vehicle accelerates, the pump/motor operates as a motor driven by the pressurized hydraulic fluid’s energy stored in the accumulators. The hydraulic motor puts energy back into the vehicle’s drive train through the PTM. A valve manifold controls the filling and emptying of the accumulator tanks during vehicle operation and all of the ERS control sequencing is handled by a National Instruments (NI) RIO controller programmed using NI’s LabVIEW system development software. All of NI’s Compact and Single-Board RIO controllers incorporate a Xilinx FPGA or a Xilinx Zynq SoC to provide real-time control of closed-loop systems.
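To get a feel for the physics, here is a back-of-the-envelope Python sketch comparing the kinetic energy shed during a braking event with the isothermal energy capacity of a gas-charged accumulator, E = P0·V0·ln(P1/P0). All masses, speeds, volumes, and pressures below are invented for illustration; Lightning Systems has not published these figures.

```python
# Back-of-the-envelope sizing of the "mechanical battery": braking energy is
# stored by compressing the nitrogen bladder in the high-pressure accumulator.
# All figures are illustrative assumptions, not Lightning Systems' design data.

import math

def stored_energy_j(precharge_pa, volume_m3, final_pa):
    """Isothermal energy (joules) stored compressing gas from P0 to P1."""
    return precharge_pa * volume_m3 * math.log(final_pa / precharge_pa)

def braking_energy_j(mass_kg, v0_mps, v1_mps):
    """Kinetic energy shed by the vehicle while slowing from v0 to v1."""
    return 0.5 * mass_kg * (v0_mps**2 - v1_mps**2)

# A 10-tonne shuttle bus braking from 15 m/s to 5 m/s sheds ~1 MJ...
shed = braking_energy_j(mass_kg=10000, v0_mps=15, v1_mps=5)
# ...while a 50 L accumulator charged from 100 to 300 bar holds ~0.55 MJ.
stored = stored_energy_j(precharge_pa=10e6, volume_m3=0.05, final_pa=30e6)
capture = min(1.0, stored / shed)   # fraction of braking energy the tank could hold
```

Numbers in this range suggest why a hydraulic accumulator, with its very high power density, suits stop-and-go fleet duty cycles.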


Lightning Systems has developed four generations of ERS controllers based on NI’s CompactRIO and Single-Board RIO controllers. The company based its first ERS prototype controller on an 8-slot NI CRIO-9024 controller and deployed the design in pilot systems. A 2nd-generation ERS prototype controller used a 4-slot NI cRIO-9075 controller, which incorporates a Xilinx Spartan-6 LX25 FPGA. The 3rd-generation ERS controller used an NI sbRIO-9626 paired with a custom daughterboard. The sbRIO-9626 incorporates a larger Xilinx Spartan-6 LX45 FPGA and Lightning Systems fielded approximately 100 of these 3rd-generation ERS controllers.




Lightning Hybrids v2 v3 v4 ERS Controllers.jpg 


Three generations of Lightning Systems’ ERS controller (from left to right: v2, v3, and v4) based on

National Instruments' Compact RIO and Single-Board RIO controllers




For its 4th-generation ERS controller, the company is using NI’s sbRIO-9651 single-board RIO SOM (system on module), which is based on a Xilinx Zynq Z-7020 SoC. The SOM is also paired with a custom daughterboard. Using NI’s Zynq-based SOM reduces the controller cost by 60% while boosting the on-board processing power and adding in a lot more programmable logic. The SOM’s additional processing power allowed Lightning Systems to implement new features and algorithms that have increased fuel economy.




Lightning Hybrids v4 ERS Controller.jpg 


Lightning Systems v4 ERS Controller uses a National Instruments sbRIO-9651 SOM based on a

Xilinx Zynq Z-7020 SoC




Lightning Systems is able to easily migrate its LabVIEW code throughout these four ERS controller generations because all of NI’s CompactRIO and Single-Board RIO controllers are software-compatible. In addition, this controller design allows easy field upgrades to the software, which reduces vehicle downtime.


Lightning Systems has developed a modular framework so that the company can quickly retrofit the ERS to most medium- and heavy-duty vehicles with minimal new design work or vehicle modification. The PTM/manifold combination mounts between the vehicle’s frame rails. The accumulators can reside remotely, wherever space is available, and connect to the valve manifold through high-pressure hydraulic lines. The system is designed for easy installation and the company can typically convert a vehicle’s power train into a hybrid system in less than a day. Lightning Systems has already received orders for ERS hybrid systems from customers in Alaska, Colorado, Illinois, and Massachusetts, as well as around the world in India and the United Kingdom.




Lightning Hybrids Typical ERS Installation.jpg 


Typical Lightning Systems ERS Installation



This project recently won a 2017 NI Engineering Impact Award in the Transportation and Heavy Equipment category and is documented in this NI case study.




Blood pumps for extracorporeal life support (ECLS) are used in medical therapies to support failing human organ systems. Conventional blood pumps use mechanically driven impellers supported on bearings and these impellers are prone to stress and heat concentration on the shaft-bearing contact areas, which increases hemolysis (rupture or destruction of red blood cells) and thrombosis (blood clots). Both are bad news in the bloodstream. In addition, ECLS applications require that any components that touch the blood be disposable, to prevent infection.


The Precision Motion Control Lab at MIT and Ension, Inc. are developing a new type of blood pump with a low-cost, disposable, bearingless impeller to reduce costs in ECLS applications. Magnetic levitation through reluctance coupling replaces the impeller’s mechanical bearings and hysteresis coupling drives the impeller using magnetically induced torque, which eliminates the mechanical drive shaft. Both magnetic forces are supplied by a 12-coil electromagnet in this new design.


To further reduce the cost of the replaceable rotor/impeller assembly, the design team substituted a steel ring made of type D2 tool steel for the normal permanent magnet in the rotor. The “D2 ring” is inductively magnetized by the coupled magnetic fields from the stator electromagnets. Reluctance coupling pulls the outer edges of the ring, causing it to levitate, while a rotating magnetic field generated by the twelve stator coils imparts rotational torque on the D2 ring, causing the impeller to spin.


Controlling the stator coils to produce the correct magnetic fields for levitation and motion requires closed-loop control of all twelve electromagnets in the stator. The design team chose the National Instruments (NI) MyRIO Student Embedded Controller because it’s easily programmed in NI’s LabVIEW systems engineering software package and because the MyRIO’s integrated Xilinx Zynq Z-7010 SoC incorporates the high-speed programmable logic needed to provide real-time, deterministic, closed-loop stator control.


Here’s a photo of a prototype bearingless motor for this design, showing the 12-magnet stator and the D2 ring rotor on the left and a National Instruments MyRIO controller on the right (and yes, that’s the Xilinx Zynq SoC peeking through the plastic window in the MyRIO controller):




Bearingless Blood Pump Prototype Motor.jpg 




Closed-loop feedback comes from four eddy-current sensors, which are sense coils driven by Texas Instruments LDC1101 16-bit LDCs (inductance-to-digital converters). The four LDC boards appear in the upper left part of the above photo. The four eddy-current sensors are organized in two pairs that differentially measure real-time rotor position. Each sensor connects to the MyRIO controller and the Zynq SoC using individual 5-wire SPI interfaces, as shown below:




Bearingless Blood Pump LDC Detail.jpg 
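A minimal Python sketch makes the differential measurement concrete: each opposed sensor pair yields a normalized position estimate (a − b)/(a + b), and the two pairs together give the rotor’s (x, y) displacement. The linear sensor model is an assumption for illustration; the real LDC1101 inductance response is more complex.

```python
# Sketch of differential rotor-position sensing with two opposed sensor pairs.
# Raw readings are arbitrary sensor counts; the linear model is illustrative.

def pair_position(reading_a: float, reading_b: float) -> float:
    """Normalized differential position along one axis, roughly in [-1, 1]."""
    return (reading_a - reading_b) / (reading_a + reading_b)

def rotor_position(x_pair, y_pair):
    """Combine the two sensor pairs into an (x, y) rotor displacement."""
    return pair_position(*x_pair), pair_position(*y_pair)

# Rotor centered: both sensors of each pair read the same -> (0, 0).
centered = rotor_position((1000.0, 1000.0), (1000.0, 1000.0))
# Rotor pushed toward sensor A on the x axis -> positive x displacement.
displaced = rotor_position((1200.0, 800.0), (1000.0, 1000.0))
```

The ratiometric form is the point of the differential arrangement: common-mode drift (temperature, drive amplitude) affects both sensors of a pair and largely cancels.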




The MyRIO controller drives the blood pump's 12-phase stator through twelve analog channels—built from an NI cRIO-9076 4-slot CompactRIO controller (with an integrated Xilinx Spartan-6 LX45 FPGA), three NI-9263 voltage output modules, and one NI 9205 voltage input module—and twelve custom linear transconductance power amplifiers. The flexibility of this setup permits the design team to experiment with and refine different motor-control algorithms.


Closed-loop, drive-with-feedback control algorithms are implemented in the Zynq SoC’s programmable logic because software-based microcontroller or microprocessor control loops would not have been fast enough or sufficiently deterministic. Although this controller design is capable of implementing a 46kHz control loop, the actual loop rate is 10kHz because that’s fast enough for this electromechanical system. The Zynq SoC’s 32-bit ARM Cortex-A9 processors in the MyRIO controller implement the system’s user interface and data logging.


This project won a 2017 NI Engineering Impact Award in the Advanced Research category and is documented in this NI case study.



MIT and Continuum develop “Human Organ Systems Under Test” chip using Zynq-based NI MyRIO controller

by Xilinx Employee, 05-30-2017


Last week in Austin at NI Week, MIT Professor Dr. Dave Trumper and Senior Software and Electrical Engineer Jared Kirschner from Continuum demonstrated a “Human Physiome on a Chip,” a collection of human organ tissues linked by nutrient flows and controlled by a National Instruments (NI) MyRIO controller, which is based on a Xilinx Zynq Z-7010 SoC. The Human Physiome chip, developed by an MIT team headed by Dr. Linda Griffith along with Continuum, contains wells for as many as ten different human cells from organs including liver, brain, gut, heart, kidney, pancreas, and bone marrow. The purpose of this “organ systems under test” device is to study the way human organ systems may respond to various drug therapies in vitro. The work is funded by DARPA.




Human Physiome Chip.jpg


7-Organ Version of MIT’s Human Physiome Chip



The nutrient system for the Human Physiome chip consists of as many as a dozen micropumps. Each micropump consists of three small pneumatic valves operated in a sequence that moves the fluid nutrient through the chip. Continuum developed a micropump controller using NI’s MyRIO student controller, one of many Zynq-based products in the NI RIO line. This controller has 36 control channels (12 pumps times three valves per pump) plus pressure sensing. Software to operate the controller is based on NI’s LabVIEW system development software. Continuum needed a system that could control twelve micropumps with a 1kHz update rate and chose the Zynq-based MyRIO controller as the appropriate design solution for this application.
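The three-valve pumping scheme is essentially peristaltic, which can be sketched in a few lines of Python. The six-phase pattern below is a common peristaltic valve sequence assumed here for illustration (Continuum’s actual valve timing isn’t published); at each tick of the 1kHz update the controller emits one state per valve across all twelve pumps.

```python
# Sketch of a three-valve peristaltic sequence: a "wave" of closed valves
# advances each phase, trapping a pocket of fluid and pushing it forward.
# 1 = valve closed/pressurized, 0 = open. The phase table is an assumption.

PHASES = [
    (1, 0, 0),
    (1, 1, 0),
    (0, 1, 0),
    (0, 1, 1),
    (0, 0, 1),
    (1, 0, 1),
]

def valve_states(step: int):
    """Valve drive for one micropump at a given tick of the update loop."""
    return PHASES[step % len(PHASES)]

def controller_outputs(step: int, n_pumps: int = 12):
    """All 36 channels (12 pumps x 3 valves), as in the MyRIO controller."""
    out = []
    for _ in range(n_pumps):
        out.extend(valve_states(step))
    return out

tick0 = controller_outputs(0)   # one tick's worth of valve commands
```

Stepping through the phase table faster or slower sets the pump's flow rate, which is why a deterministic update rate matters.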



Continuum Micropump controller based on NI MyRIO.jpg


Continuum built its Micropump controller around the Zynq-based NI MyRIO Platform



Here’s a 7-minute NI Week video describing this extremely unusual control application:








Note: For more information about the MyRIO controller, see “How Xilinx All Programmable technology has fundamentally changed business at National Instruments.”




Blue Origin New Shepard Rocket Launch.jpg



Blue Origin New Shepard Rocket Launch – photo courtesy of Blue Origin




Jason Smith, an Instrumentation and Controls Engineer at commercial space pioneer Blue Origin, spoke during a keynote at last week’s NI Week in Austin, TX. According to Smith, Blue Origin’s corporate mission is to “see millions of people living and working in space.” To that end, the company is developing two rocket systems. The first to be tested is the New Shepard, a fully reusable, vertical-takeoff, vertical-landing space vehicle designed for suborbital missions. (Alan Shepard was the first American to rocket into space aboard the Freedom 7 Mercury space capsule atop a Redstone rocket in a suborbital mission.) The New Shepard rocket uses one of the company’s 3rd-generation BE-3 engines, which burns liquid hydrogen using liquid oxygen as the oxidizer. Smith said that Blue Origin has already successfully launched and landed its New Shepard rockets five times, and the first crewed flight is scheduled for next year.


The company is also developing the more powerful New Glenn rocket for low-Earth-orbit missions, based on seven of its 4th-generation BE-4 engines, which burn liquefied natural gas, again with liquid oxygen as the oxidizer. (John Glenn became the first American to reach Earth orbit in 1962 in the Friendship 7 Mercury capsule atop an Atlas rocket.) Blue Origin expects the BE-4 engine to be ready for testing sometime this year, and United Launch Alliance (ULA), maker of the Atlas V and Delta IV launch systems, has chosen the BE-4 engine to power its next-generation Vulcan launch vehicle. The New Glenn rocket is designed to be reused as many as 100 times.



Blue Origins BE-4 Rocket Engine.jpg 


Blue Origin’s BE-4 Rocket Engine



Smith is responsible for developing test techniques and test cells to ensure that everything Blue Origin builds—from parachutes to guidance fins to reusable rocket engines—works in rigorous launch-and-land missions. To ensure that all rocket components work as designed, Blue Origin builds dedicated test stands with thousands of monitoring and control channels based on National Instruments’ (NI’s) equipment and LabVIEW and TestStand software. Although earlier test systems at Blue Origin were based on NI’s PXI modular instrumentation and CompactDAQ data acquisition systems, Smith is standardizing his newest test systems on NI’s cRIO-9068 8-slot CompactRIO extended-temperature controller because it provides the autonomy and robustness required by the demanding test-cell environments. The cRIO-9068 incorporates a Xilinx Zynq Z-7020 SoC, which provides that autonomy and robust control.




NI cRIO-9068 v2.jpg 


NI cRIO-9068 CompactRIO Extended-Temperature Controller based on a Xilinx Zynq Z-7020 SoC




Here’s a 7-minute video of Jason Smith’s keynote at NI Week that includes some great shots of the New Shepard rocket taking off and landing. (I advise you to turn up the volume for this one and watch the amazing thrust vectoring as the rocket sticks its landing.)





Shockingly Cool Again: Low-power Xilinx CoolRunner-II CPLDs get new product brief

by Xilinx Employee ‎05-18-2017 11:51 AM - edited ‎05-19-2017 04:33 PM (22,667 Views)


This may shock you, but Xilinx continues to be in the CPLD business. I was reminded of that point this week when I got an email blast about the Xilinx CoolRunner-II CPLD family—first introduced about 15 years ago—which is still being made, sold, and supported. In fact, said the email blast, Xilinx has committed to 7+ years of supply for these devices. The family also has a slick, updated product brief:



Xilinx CoolRunner-II Product Brief.jpg




Despite their age, Xilinx’s inexpensive, reprogrammable CoolRunner-II CPLDs are still pretty useful devices. They carry their own configuration in an on-chip EEPROM. Because CoolRunner-II CPLDs sip power (quiescent current can be a mere handful of μA for an XC2C32A device, and all of the devices include unique power-saving features such as DataGATE and CoolCLOCK with DualEDGE flip-flops), they are often used for power sequencing and supervisory tasks. With maximum system toggle frequencies in the low hundreds of MHz, they’re capable of implementing fairly fast state machines as well. Yes, they’re used for glue logic too. They’re handy things to have in your design toolbox.


Need a very small programmable logic device for use on a tight PCB? The CoolRunner-II XC2C32A CPLD with 32 macrocells and 21 I/O pins is available in a 5x5mm QFG32 package and the CoolRunner-II XC2C64A CPLD with 64 macrocells and 37 I/O pins is available in a 7x7mm QFG48 package. Got an 8x8mm spot on your board? An XC2C256 CPLD can drop more than 100 I/O pins into that space.


You can still create designs for CoolRunner-II CPLDs using the Xilinx ISE Design Suite and, if you’ve not used these devices before, you can get a Digilent CoolRunner-II CPLD Starter Board for $39.99. The Starter Board incorporates a CoolRunner-II XC2C256 CPLD.



Digilent CoolRuner-II Starter Board.jpg 


$39.99 Digilent CoolRunner-II CPLD Starter Board







Never at a loss for words, Adam Taylor has just published some additional thoughts on designing with Xilinx All Programmable devices over at the EEWeb.com site. His post, titled “Make Something Awesome with the $99 FPGA-Based Arty Development Board,” serves as a reminder or an invitation to attend the free May 31 Xilinx Webinar titled “Make Something Awesome with the $99 Arty Embedded Kit.”


Here’s what Adam has to say about FPGA design today:


“Both the maker and hobby communities are increasingly using FPGAs within their designs. This is thanks to the provision of boards at the right price point for the market, coupled with the availability of easy-to-use development tools that include simulation and High-Level Synthesis (HLS) capabilities.


“Let's be honest; compared to the reputation FPGAs have had historically, developing FPGA-based designs in this day-and-age is much simpler. This is largely thanks to a wide range of IP modules that are supplied with the development tools from board vendors and places like OpenCores.”



Adam’s article discusses two low-cost Digilent boards:






Digilent Arty Z7.jpg 


Digilent Arty Z7 Development Board




Adam concludes his article with this: “Overall, if you are looking to take your first steps into the world of FPGAs, then the Arty (Artix-based) or the Arty Z7 (Zynq 7000-based) should be high on your list of development boards to consider.”




A paper titled “Evaluating Rapid Application Development with Python for Heterogeneous Processor-based FPGAs” recently won the Best Short Paper award at the 25th IEEE International Symposium on Field-Programmable Custom Computing Machines (FCCM 2017), held in Napa, CA. The paper discusses the advantages and efficiencies of Python-based development using the PYNQ development environment, which is based on the Python programming language and Jupyter Notebooks, and the Digilent PYNQ-Z1 board, which is based on the Xilinx Zynq SoC. The paper’s authors, Senior Computer Scientist Andrew G. Schmidt, Computer Scientist Gabriel Weisz, and Research Director Matthew French of the USC Viterbi School of Engineering’s Information Sciences Institute, evaluated the performance implications of and the bottlenecks associated with using PYNQ for application development on Xilinx Zynq devices. They then compared their Python-based results against existing C-based and hand-coded implementations.



The authors do a really nice job of describing what PYNQ is:



“The PYNQ application development framework is an open source effort designed to allow application developers to achieve a “fast start” in FPGA application development through use of the Python language and standard “overlay” bitstreams that are used to interact with the chip’s I/O devices. The PYNQ environment comes with a standard overlay that supports HDMI and Audio inputs and outputs, as well as two 12-pin PMOD connectors and an Arduino-compatible connector that can interact with Arduino shields. The default overlay instantiates several MicroBlaze processor cores to drive the various I/O interfaces. Existing overlays also provide image filtering functionality and a soft-logic GPU for experimenting with SIMT [single instruction, multiple threads] -style programming. PYNQ also offers an API and extends common Python libraries and packages to include support for Bitstream programming, directly access the programmable fabric through Memory-Mapped I/O (MMIO) and Direct Memory Access (DMA) transactions without requiring the creation of device drivers and kernel modules.”
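To give a flavor of the MMIO access style the authors describe, here is a minimal sketch. The `pynq.Overlay`/`pynq.MMIO` calls shown in the comments reflect the real PYNQ idiom, but the `FakeMMIO` class is a hypothetical stand-in that backs the “registers” with a plain bytearray so that the sketch runs without a Zynq board.

```python
import struct

class FakeMMIO:
    """Stand-in for pynq.MMIO: 32-bit word reads/writes at byte offsets.
    On real hardware, pynq.MMIO maps a physical address range of the
    Zynq programmable fabric; here a bytearray plays that role."""
    def __init__(self, base_addr, length):
        self.base_addr = base_addr
        self.mem = bytearray(length)

    def write(self, offset, value):
        # Pack one little-endian 32-bit word at the given byte offset
        struct.pack_into("<I", self.mem, offset, value & 0xFFFFFFFF)

    def read(self, offset):
        # Unpack one little-endian 32-bit word from the given byte offset
        return struct.unpack_from("<I", self.mem, offset)[0]

# On a real PYNQ-Z1 the equivalent would be:
#   from pynq import Overlay, MMIO
#   overlay = Overlay("base.bit")       # program the overlay bitstream
#   mmio = MMIO(0x40000000, 0x1000)     # map a fabric peripheral's registers
mmio = FakeMMIO(0x40000000, 0x1000)
mmio.write(0x10, 0xCAFE)                # poke a "register"
value = mmio.read(0x10)                 # read it back: 0xCAFE
```

The point of the quote above is exactly this access pattern: peeking and poking fabric registers from Python, with no device driver or kernel module in between.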



They also do a nice job of explaining what PYNQ is not:



“PYNQ does not currently provide or perform any high-level synthesis or porting of Python applications directly into the FPGA fabric. As a result, a developer still must create a design using the FPGA fabric. While PYNQ does provide an Overlay framework to support interfacing with the board’s IO, any custom logic must be created and integrated by the developer. A developer can still use high-level synthesis tools or the aforementioned Python-to-HDL projects to accomplish this task, but ultimately the developer must create a bitstream based on the design they wish to integrate with the Python [code].”



Consequently, the authors did not simply rely on the existing PYNQ APIs and overlays. They also developed application-specific kernels for their research based on the Redsharc project (see “Redsharc: A Programming Model and On-Chip Network for Multi-Core Systems on a Programmable Chip”) and they describe these extensions in the FCCM 2017 paper as well.




Redsharc Project.jpg




So what’s the bottom line? The authors conclude:


“The combining of both Python software and FPGA’s performance potential is a significant step in reaching a broader community of developers, akin to Raspberry Pi and Arduino. This work studied the performance of common image processing pipelines in C/C++, Python, and custom hardware accelerators to better understand the performance and capabilities of a Python + FPGA development environment. The results are highly promising, with the ability to match and exceed performances from C implementations, up to 30x speedup. Moreover, the results show that while Python has highly efficient libraries available, such as OpenCV, FPGAs can still offer performance gains to software developers.”


In other words, the introduction of the PYNQ development environment opens a vast, largely unexplored territory—a new, more efficient development space—to a much broader system-development audience.


For more information about the PYNQ-Z1 board and PYNQ development environment, see:







Epiq Solutions has announced the Matchstiq S12 SDR transceiver, an expansion of the Matchstiq transceiver family, which also includes the Matchstiq S10 and S11. All three Matchstiq family members pair a Freescale i.MX6 quad-core CPU, used for housekeeping and interfacing (Ethernet, HDMI, and USB), with a Xilinx Spartan-6 LX45T FPGA installed on the company’s Sidekiq MiniPCIe card, which performs the RF signal processing for SDR. These two devices, located on separate boards, communicate over a single PCIe lane and form a reusable SDR platform for the Matchstiq transceiver family. The Matchstiq S12 employs a Dropkiq frequency-extension board to take the bottom of its tuning range below 1MHz. All three Matchstiq transceiver tuners top out at 6GHz and have 50MHz of channel bandwidth. The Matchstiq S10 and S11 SDR tuners go down to 70MHz.


Here are the block diagrams of all three Matchstiq transceivers, which illustrate the platform nature of the basic Matchstiq design:



Epiq Solutions Matchstiq RF Transceivers.jpg



Epiq Solutions Matchstiq SDR Transceiver Block Diagrams




And here’s a family photo:




Epiq Solutions Matchstiq RF Transceivers family.jpg 



Epiq Solutions Matchstiq SDR Transceiver Family








About the Author
Steve Leibson is the Director of Strategic Marketing and Business Planning at Xilinx. He started as a system design engineer at HP in the early days of desktop computing, then switched to EDA at Cadnetix, and subsequently became a technical editor for EDN Magazine. He’s served as Editor in Chief of EDN Magazine, Embedded Developers Journal, and Microprocessor Report. He has extensive experience in computing, microprocessors, microcontrollers, embedded systems design, design IP, EDA, and programmable logic.