
Earlier this year, Shahriar Shahramian, who regularly posts in-depth electronics videos on his YouTube channel “The Signal Path,” reviewed and then tore down a Rohde & Schwarz Spectrum Rider FPH. This hand-held, battery-powered, portable spectrum analyzer has been on the market for a couple of years now and covers 5kHz to 2, 3, or 4GHz, depending on options. (Shahriar Shahramian is department head for millimeter-wave ASIC research at Nokia Bell Labs.)

 

 

 

Rohde and Schwarz 4GHz Spectrum Rider FPH.jpg 

 

 

Rohde & Schwarz Spectrum Rider FPH portable spectrum analyzer

 

 

 

Shahriar’s extensive user review and teardown video of the Spectrum Rider FPH lasts an hour, so here are some salient points in the video, with timestamps:

 

 

1:10 – Model comparison of R&S portable spectrum analyzers

 

5:44 – Instrument hardware overview and GUI characteristics

 

22:24 – Using the FPH Spectrum Rider to analyze unknown wireless signals, including a multi-tone QPSK-modulated signal, AM/FM demodulation analysis, and frequency hopping

 

46:46 – FPH teardown and analysis

 

55:33 – Overview of the Instrument View remote connection software

 

57:42 – Concluding remarks

 

 

 

Most important for Xcell Daily: at 48 minutes into the video, Shahriar finally cracks the instrument open and finds a Zynq SoC handling essentially all of the instrument’s digital functions: the RF digital signal processing; the I/O control; the clean, responsive, elegant, and well-thought-out user interface based on hard buttons and a pinch-sensitive touch screen; and the Instrument View remote front-panel interface, which operates over the USB port or Ethernet.

 

The Zynq SoC is a perfect fit for such an application. The Zynq Processing System (PS) handles the user interface and general supervision. The Zynq Programmable Logic (PL) section with its high-speed programmable logic and DSP slices handles the instrument’s high-speed control and signal processing.

 

Here’s a photo clipped from the video. Shahriar is pointing to the Zynq SoC in this photo:

 

 

 

Rohde and Schwarz 4GHz Spectrum Rider FPH Zynq Detail.jpg

 

A Zynq SoC manages the overall operation and digital signal processing for the Rohde & Schwarz Spectrum Rider FPH

 

 

 

You should note that this portable, hand-held instrument has an 8-hour battery life despite the rather sophisticated RF and digital electronics. I strongly suspect the high level of integration made possible by the Zynq SoC has something to do with this.

 

Because of its All Programmable flexibility, the Zynq SoC makes a terrific foundation for entire product families. You can see this here because Rohde & Schwarz has also used the Zynq SoC as the digital heart of its Scope Rider, a multi-function, hand-held, 2/4-channel, 500MHz DSO (as reported on the Powered by Xilinx Web page). The family resemblance to the Spectrum Rider FPH is quite strong:

 

 

 

 

Rohde and Schwarz Scope Rider.jpg

 

Rohde & Schwarz Scope Rider

 

 

I’ll go out on a limb and guess that there’s a lot of shared code (not to mention case tooling) between the company’s Scope Rider and Spectrum Rider FPH. The Zynq SoC’s PS and PL along with its broadly programmable I/O pins combine to create a very flexible design platform for a product family or multiple product families. That kind of leverage allows you to create an assembly line for new-product development with competition-beating time to market.

 

 

 

Here’s The Signal Path’s YouTube video review of the Rohde & Schwarz 4.0GHz Spectrum Rider FPH:

 

 

 

 

 

For more information about the Spectrum Rider FPH and the Scope Rider, please contact Rohde & Schwarz directly.

 

 

 

There’s a new tutorial on Digilent’s Web site that tells you how to use its Digital Discovery high-speed logic analyzer and pattern generator to get a detailed look at the boot sequence of a Zynq Z-7000 SoC by monitoring the SPI interface between the Zynq SoC and its QSPI boot flash. You only need seven connections. The tutorial uses a custom interpreter script for the Digital Discovery analyzer to decode the SPI traffic. The script is installed in the analyzer’s PC-based control software, called WaveForms, and the tutorial page gives you the script.

 

The entire boot transfer sequence takes 700msec, and the full boot-sequence acquisition consumes a lot of memory: 268,435,456 samples in this case. The Digital Discovery doesn’t store that many samples—it doesn’t need to, because it streams the acquired data to the attached PC over the connecting USB cable.
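To give a feel for what such an interpreter script does, here is a hedged Python sketch (not Digilent’s actual script, which is on the tutorial page) that groups captured MOSI bytes into flash transactions. The capture format and names here are hypothetical; the opcodes are the standard SPI NOR flash command set.

```python
# Hedged sketch: decode captured QSPI traffic into flash transactions.
# Assumes a simplified capture format -- a list of (chip_select_active,
# mosi_byte) tuples -- and standard SPI NOR flash opcodes.

SPI_COMMANDS = {0x03: "READ", 0x0B: "FAST_READ", 0x6B: "QUAD_READ", 0x9F: "READ_ID"}

def decode_transactions(capture):
    """Group MOSI bytes into chip-select-delimited transactions and label
    each with its flash command and, where present, its 24-bit address."""
    transactions, current = [], []
    for cs_active, mosi in capture:
        if cs_active:
            current.append(mosi)
        elif current:
            cmd = SPI_COMMANDS.get(current[0], "UNKNOWN")
            addr = None
            if cmd in ("READ", "FAST_READ", "QUAD_READ") and len(current) >= 4:
                addr = (current[1] << 16) | (current[2] << 8) | current[3]
            transactions.append((cmd, addr))
            current = []
    return transactions

# A READ_ID followed by a READ from flash address 0x000000:
capture = [(1, 0x9F), (1, 0x00), (1, 0x00), (1, 0x00), (0, 0x00),
           (1, 0x03), (1, 0x00), (1, 0x00), (1, 0x00), (0, 0x00)]
print(decode_transactions(capture))
```

The real script runs inside WaveForms and decodes the live capture stream rather than a Python list, but the transaction-grouping idea is the same.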

 

 

 

Digilent Digital Discovery captures Zynq Boot Sequence over SPI.jpg 

 

 

Digilent’s Digital Discovery logic analyzer captures the entire boot sequence for a Zynq Z-7000 SoC over SPI

 

 

 

There’s nothing particularly unusual about a logic analyzer capturing a processor’s boot sequence. However, in this case, Digilent’s Digital Discovery is based on a Xilinx Spartan-6 LX25 FPGA (see “$199.99 Digital Discovery from Digilent implements 800Msample/sec logic analyzer, pattern generator. Powered by Spartan-6”) and it’s monitoring the boot sequence of a Xilinx Zynq SoC. Good tools all around.

 

 

 

If you need a ready-to-plug module for building big systems, take a look at VadaTech’s new AMC580 FPGA Carrier module, which features a Xilinx Zynq UltraScale+ ZU19EG MPSoC connected to 8Gbytes of 64-bit DDR4 SDRAM with ECC, 64Gbytes of Flash memory plus 128Mbytes of boot Flash memory, an SD card slot for even more Flash storage, and two FMC connector sites. The Zynq UltraScale+ MPSoC interfaces directly to the AMC FCLKA, TCLKA-D, FMC DP0-9, and all FMC LA/HA/HB pairs.

 

The Zynq UltraScale+ ZU19EG MPSoC incorporates a tremendous number of on-chip system resources for your system design including four 64-bit Arm Cortex-A53 application processors running as fast as 1.3GHz; two 32-bit Arm Cortex-R5 lockstep-capable, real-time processors running as fast as 533MHz; an Arm Mali-400 MP2 GPU running as fast as 667MHz; and 1143K system logic cells, 1968 DSP48E2 slices, 34.6Mbits of BRAM, 36Mbits of UltraRAM, four 100G Ethernet MACs, and four 150G Interlaken ports.

 

Here’s a photo of VadaTech’s new AMC580 FPGA Carrier module:

 

 

VadaTech AMC580.jpg 

 

The VadaTech AMC580 FPGA Carrier module incorporates a Zynq UltraScale+ ZU19EG MPSoC

 

 

 

 

And here’s a block diagram of the AMC580 module:

 

 

 

VadaTech AMC580 Block Diagram.jpg 

 

VadaTech AMC580 FPGA Carrier module block diagram

 

 

 

VadaTech provides reference VHDL developed using the Xilinx Vivado Design Suite for testing basic hardware functionality along with a Linux BSP, build scripts, and device drivers for the AMC580.

 

 

Please contact VadaTech directly for more information about the AMC580 FPGA Carrier module.

 

 

 

RedZone Robotics’ Solo—a camera-equipped, autonomous sewer-inspection robot—gives operators a detailed, illuminated view of the inside of a sewer pipe by crawling the length of the pipe and recording video of the conditions it finds inside. A crew can deploy a Solo robot in less than 15 minutes and then move to another site to launch yet another Solo robot, thus conducting several inspections simultaneously and cutting the cost per inspection. The treaded robot traverses the pipeline autonomously and then returns to the launch point for retrieval. If the robot encounters an obstruction or blockage, it attempts to negotiate the problem three times before aborting the inspection and returning to its entry point. The robot fits into pipes as small as eight inches in diameter and even operates in pipes that contain some residual waste water.

 

 

 

RedZone Robotics Solo Sewer Inspection Robot.jpg

 

RedZone Robotics Autonomous Sewer-Pipe Inspection Robot

 

 

 

Justin Starr, RedZone’s VP of Technology, says that the Solo inspection robot uses its on-board Spartan FPGA for image processing and for AI. Image-processing algorithms compensate for lens aberrations and also perform a level of sensor fusion for the robot’s multiple sensors. “Crucial” AI routines in the Spartan FPGA help the robot keep track of where it is in the pipeline and tell the robot what to do when it encounters an obstruction.

 

Starr also says that RedZone is already evaluating Xilinx Zynq devices to extend the robot’s capabilities. “It’s not enough for the Solo to just grab information about what it sees, but let’s actually look at those images. Let’s have the Solo go through that inspection data in real time and generate a preliminary report of what it saw. It used to be the stuff of science fiction but now it’s becoming reality.”

 

Want to see the Solo in action? Here’s a 3-minute video:

 

 

 

 

 

 

 

 

 

WP460, “Reducing System BOM Cost with Xilinx's Cost-Optimized Portfolio,” was recently updated. This White Paper contains a wealth of helpful information if you’re trying to cut manufacturing costs in your system. Updates to the White Paper include information about the new Spartan-7 FPGA family and the Zynq Z-7000S SoCs with single-core Arm Cortex-A9 processors that run at clock rates as fast as 766MHz. Both of these Xilinx device families help drive BOM costs down while still delivering plenty of system performance—just what you’d expect from Xilinx All Programmable devices.

 

The White Paper also contains additional information about new MicroBlaze soft microprocessor presets, which you can read about in the new MicroBlaze Quick Start Guide. You can drop a 32-bit RISC MicroBlaze soft processor into any current Xilinx device very efficiently. The microcontroller preset version of the MicroBlaze processor consumes fewer than 2000 logic cells, yet runs at a much higher clock rate than most low-cost microcontrollers. There’s plenty of performance there when you need it.

 

 

 

 

Nutaq has just posted information about an intense demo where four of the company’s PicoSDR 8x8 7MHz-6GHz software-defined radio systems—based on Xilinx Virtex-6 FPGAs—are ganged to create a 32-antenna, massive-MIMO basestation that can communicate wirelessly with six UEs (user equipment systems) simultaneously. The UEs are simulated using three Xilinx ZC702 eval kits based on Zynq Z-7020 SoCs.

 

 

 

Nutaq 32-antenna massive-MIMO array.jpg

 

Nutaq’s 32-antenna massive-MIMO array

 

 

 

Here’s a Nutaq video of the demo:

 

 

 

 

 

 

All of the signal processing in this demo is performed on a laptop PC using Mathworks’ MATLAB, which generates waveforms for transmission by the simulated UEs and decodes received signals from the PicoSDR 8x8 receiver. As explained in the video, the transmission waveforms are downloaded to the BRAMs in the Zynq SoCs on the ZC702 boards, wirelessly transmitted to the massive-MIMO receiving antenna, captured by the PicoSDR 8x8 systems, and then sent back to MATLAB for decoding into separate UE constellations.
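The demo’s actual decoding is done in MATLAB, but the final step—mapping received samples onto per-UE constellations—can be illustrated with a short, hedged numpy sketch (the constellation mapping and noise level here are arbitrary, not Nutaq’s):

```python
import numpy as np

# Illustrative sketch: hard-decision demapping of received complex samples
# onto the four QPSK constellation points, the last step before plotting
# a per-UE constellation. Not Nutaq's MATLAB code.

np.random.seed(0)
QPSK = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

def demap_qpsk(samples):
    """Return the index of the nearest QPSK constellation point per sample."""
    distances = np.abs(samples[:, None] - QPSK[None, :])
    return distances.argmin(axis=1)

# Transmit symbols 0, 2, 3, 1 with a little additive noise:
rx = QPSK[[0, 2, 3, 1]] + 0.05 * (np.random.randn(4) + 1j * np.random.randn(4))
print(demap_qpsk(rx))  # recovers the transmitted symbol indices
```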

 

For more information about this demo and the PicoSDR 8x8 systems, contact Nutaq directly.

 

For more information in Xcell Daily, see “Nutaq LTE demo shows FPGA-based PicoSDR 8x8 working with Matlab LTE System Toolbox running 16-element MIMO.”

 

 

 

Here’s a hot-off-the-camera, 3-minute video showing a demonstration of two ZCU106 dev boards based on the Xilinx Zynq UltraScale+ ZU7EV MPSoC with its integrated H.265 hardware encoder and decoder. The first ZCU106 board in this demo processes an input stream from a 4K MIPI video camera by encoding it, packetizing it, and then transmitting it over a GigE connection to the second board, which depacketizes, decodes, and displays the video stream on a 4K monitor. Simultaneously, the second board performs the same encoding, packetizing, and transmission of another video stream from a second 4K MIPI camera to the first ZCU106 board, which displays the second video stream on another 4K display.

 

Note that the integrated H.265 hardware codecs in the Zynq UltraScale+ ZU7EV MPSoC can handle as many as eight simultaneous video streams in both directions.

 

Here’s the short video demo of this system in action:

 

 

 

 

For more information about the ZCU106 dev board and the Zynq UltraScale+ EV MPSoCs, contact your friendly, neighborhood Xilinx or Avnet sales representative.

 

 

 

By Adam Taylor

 

One ongoing area we have been examining is image processing. We’ve looked at the algorithms and at how to capture images from different sources. A few weeks ago, we looked at the different methods we could use to receive HDMI data and followed up with an example using an external CODEC (P1 & P2). In this blog we are going to look at using internal IP cores to receive HDMI images in conjunction with the Analog Devices AD8195 HDMI buffer, which equalizes the line. Equalization is critical when using long HDMI cables.

 

 

Image1.jpg 

 

Nexys board, FMC HDMI and the Digilent PYNQ-Z1

 

 

 

To do this I will be using the Digilent FMC HDMI card, which provisions one of its channels with an AD8195. The AD8195 on the FMC HDMI card needs a 3.3V supply, which is not available on the ZedBoard unless I break out my soldering iron. Instead, I pulled out my Digilent Nexys Video trainer board, which comes fitted with an Artix-7 FPGA and an FMC connector. This board has built-in support for HDMI RX and TX, but the HDMI RX path on this board supports only 1m of HDMI cable while the AD8195 on the FMC HDMI card supports cable runs of up to 20m—far more useful in many distributed applications. So we’ll add the FMC HDMI card.

 

First, I instantiated a MicroBlaze soft microprocessor system in the Nexys Video card’s Artix-7 FPGA to control the simple image-processing chain needed for this example. Of course, you can implement the same approach to the logic design that I outline here using a Xilinx Zynq SoC or Zynq UltraScale+ MPSoC. The Zynq PS simply replaces the MicroBlaze.

 

The hardware design we need to build this system comprises:

 

  • MicroBlaze controller with local memory, AXI UART, MicroBlaze Interrupt controller, and DDR Memory Interface Generator.
  • DVI2RGB IP core to receive the HDMI signals and convert them to a parallel video format.
  • Video Timing Controller, configured for detection.
  • ILA connected between the VTC and the DVI2RGB cores, used for verification.
  • Clock Wizard used to generate a 200MHz clock, which supplies the DDR MIG and DVI2RGB cores. All other cores are clocked by the MIG UI clock output.
  • Two 3-bit GPIO modules. The first module will set VADJ to 3.3V on the HDMI FMC. The second module enables the AD8195 and provides the hot-plug detection.

 

 

Image2.jpg 

 

 

 

The final step in this hardware build is to map the interface pins from the AD8195 to the FPGA’s I/O pins through the FMC connector. We’ll use the TMDS_33 SelectIO standard for the HDMI clock and data lanes.

 

Once the hardware is built, we need to write some simple software to perform the following:

 

 

  • Disable the VADJ regulator using pin 2 on the first GPIO port.
  • Set the desired output voltage on VADJ using pins 0 & 1 on the first GPIO port.
  • Enable the VADJ regulator using pin 2 on the first GPIO port.
  • Enable the AD8195 using pin 0 on the second GPIO port.
  • Enable pre-equalization using pin 1 on the second GPIO port.
  • Assert the Hot-Plug Detection signal using pin 2 on the second GPIO port.
  • Read the registers within the VTC to report the modes and status of the video received.
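The steps above reduce to a short sequence of GPIO writes. Here’s a hedged Python sketch of that sequence, using the bit assignments listed above (pins 0–2 of each 3-bit port); the real software would issue these writes in C through the Xilinx XGpio driver, and the names and voltage code here are hypothetical:

```python
# Sketch of the FMC HDMI bring-up as (port, value) GPIO writes, assuming
# the pin assignments from the step list above. Hypothetical names.

VADJ_SEL0, VADJ_SEL1, VADJ_EN = 1 << 0, 1 << 1, 1 << 2   # first GPIO port
AD8195_EN, PRE_EQ_EN, HPD = 1 << 0, 1 << 1, 1 << 2       # second GPIO port

def bringup_sequence(vadj_select):
    """Yield (port, value) pairs mirroring steps 1-6 above; step 7 is a
    read of the VTC registers, not a GPIO write."""
    yield (1, 0)                                  # 1: VADJ regulator disabled
    yield (1, vadj_select & 0b11)                 # 2: select the output voltage
    yield (1, (vadj_select & 0b11) | VADJ_EN)     # 3: enable the regulator
    yield (2, AD8195_EN)                          # 4: enable the AD8195
    yield (2, AD8195_EN | PRE_EQ_EN)              # 5: add pre-equalization
    yield (2, AD8195_EN | PRE_EQ_EN | HPD)        # 6: assert hot-plug detect

print(list(bringup_sequence(0b11)))  # 0b11 assumed to select 3.3V
```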

 

 

To test this system, I used a Digilent PYNQ-Z1 board to generate different video modes. The first step in verifying that this interface is working is to use the ILA to check that the pixel clock is received and that its DLL is locked, along with generating horizontal and vertical sync signals and the correct pixel values.

 

Provided the sync signals and pixel clock are present, the VTC will be able to detect and classify the video mode. The application software will then report the detected mode via the terminal window.
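The reporting step amounts to matching the detected timing against a table of known modes. A hedged sketch (the real code reads the VTC detector registers; the table and function name here are illustrative):

```python
# Classify a detected (active pixels, active lines) pair against a small
# table of common video modes, as the application software's mode report
# might do. Illustrative only.

VIDEO_MODES = {(800, 600): "SVGA", (1024, 768): "XGA",
               (1280, 720): "720p", (1920, 1080): "1080p"}

def classify_mode(h_active, v_active):
    return VIDEO_MODES.get((h_active, v_active), "unknown")

print(classify_mode(800, 600))
```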

 

 

Image3.jpg

 

ILA Connected to the DVI to RGB core monitoring its output

 

 

 

Image4.jpg 

 

 

Software running on the Nexys Video detecting SVGA mode (800 pixels by 600 lines)

 

 

 

With the correct video mode being detected by the VTC, we can now configure a VDMA write channel to move the image from the logic into a DDR frame buffer.

 

 

You can find the project on GitHub.

 

 

If you are working with video applications you should also read these:

 

 

PL to PS using VDMA

What to do if you have VDMA issues  

Creating a MicroBlaze System Video

Writing MicroBlaze Software  

 

 

 

If you want ebook or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

First Year E Book here

First Year Hardback here.

 

 

 

MicroZed Chronicles hardcopy.jpg  

 

 

 

Second Year E Book here

Second Year Hardback here

 

 

MicroZed Chronicles Second Year.jpg 

 

 

 

Avnet’s MiniZed SpeedWay Design Workshops are designed to help you jump-start your embedded design capabilities using Xilinx Zynq Z-7000S All Programmable SoCs, which meld a processing system based on a single-core, 32-bit, 766MHz Arm Cortex-A9 processor with plenty of Xilinx FPGA fabric. Zynq SoCs are just the thing when you need to design high-performance embedded systems or need to use a processor along with some high-speed programmable logic. Even better, these Avnet workshops focus on using the Avnet MiniZed—a compact, $89 dev board packed with huge capabilities including built-in WiFi and Bluetooth wireless connectivity. (For more information about the Avnet MiniZed dev board, see “Avnet’s $89 MiniZed dev board based on Zynq Z-7000S SoC includes WiFi, Bluetooth, Arduino—and SDSoC! Ships in July.”)

 

These workshops start in November and run through March of next year and there are four full-day workshops in the series:

 

  • Developing Zynq Software
  • Developing Zynq Hardware
  • Integrating Sensors on MiniZed with PetaLinux
  • A Practical Guide to Getting Started with Xilinx SDSoC

 

You can mix and match the workshops to meet your educational requirements. Here’s how Avnet presents the workshop sequence:

 

 

Avnet MiniZed Workshops.jpg 

 

 

 

These workshops are taking place in cities all over North America including Austin, Dallas, Chicago, Montreal, Seattle, and San Jose, CA. All cities will host the first two workshops. Montreal and San Jose will host all four workshops.

 

A schedule for workshops in other countries has yet to be announced. The Web page says “Coming soon” so please contact Avnet directly for more information.

 

Finally, here’s a 1-minute YouTube video with more information about the workshops:

 

 

 

 

For more information on and to register for the Avnet MiniZed SpeedWay Design Workshops, click here.

 

 

The Vivado 2017.3 HLx Editions are now available and the Vivado 2017.3 Release Notes (UG973) tells you why you’ll want to download this latest version now. I’ve scanned UG973, watched the companion 20-minute Quick Take video, and cherry-picked twenty of the many enhancements that jumped out at me to help make your decision easier:

 

 

  • Reduced compilation time with a new incremental compilation capability

 

  • Improved heuristics to automatically choose between high-reuse and low-reuse modes for incremental compilation

 

  • Verification IP (VIP) now included as part of pre-compiled IP libraries including support for AXI-Stream VIP

 

  • Enhanced ability to integrate RTL designs into IP Integrator using drag-and-drop operations. No need to run packager any more.

 

  • Checks put in place to ensure that IP is available when invoking write_bd_tcl command

 

  • write_project_tcl command now includes Block designs if they are part of the project

 

  • Hard 100GE Subsystem awareness for the VCU118 UltraScale+ Board with added assistance support

 

  • Hard Interlaken Subsystem awareness for the VCU118 UltraScale+ Board with assistance support

 

  • Support added for ZCU106 and VCU118 production reference boards

 

  • FMC Support added to ZCU102 and ZCU106 reference boards

 

  • Bus skew reports (report_bus_skew) from static timing analysis now available in the Vivado IDE

 

  • Enhanced ease of use for debug over PCIe using XVC

 

  • Partial Reconfiguration (PR) flow support for all UltraScale+ devices in production

 

  • Support for optional flags (FIFO Almost Full, FIFO Almost Empty, Read Data Valid) in XPM FIFO

 

  • Support for different read and write widths while using Byte Write Enable in XPM Memory

 

  • New Avalon to AXI bridge IP

 

  • New USXGMII subsystem that switches among 10M/100M/1G/2.5G/5G/10G speeds on a 10G GT line rate (10GBASE-R) for the NBASE-T standard

 

  • New TSN (Time-Sensitive Network) subsystem

 

  • Simplified interface for DDR configuration in the Processing Systems Configuration Wizard

 

  • Fractional Clock support for DisplayPort Audio and Video to reduce BOM costs

 

 

 

 

Exactly a week ago, Xilinx introduced the Zynq UltraScale+ RFSoC family, which is a new series of Zynq UltraScale+ MPSoCs with RF ADCs and DACs and SD-FECs added. (See “Zynq UltraScale+ RFSoC: All the processing power of 64- and 32-bit ARM cores, programmable logic plus RF ADCs, DACs.”) This past Friday at the Xilinx Showcase held in Longmont, Colorado, Senior Marketing Engineer Lee Hansen demonstrated a Zynq UltraScale+ ZU28DR RFSoC with eight 12-bit, 4Gsamples/sec RF ADCs, eight 14-bit, 6.4Gsamples/sec RF DACs, and eight SD-FECs connected through an appropriate interface to National Instruments’ LabVIEW Systems Engineering Development Environment.

 

The demo system was generating signals using the RF DACs, receiving the signals using the RF ADCs, and then displaying the resulting signal spectrum using LabVIEW.
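In the demo, LabVIEW performs the spectrum computation and display; as a rough illustration of that step, here’s a hedged numpy sketch that windows a block of captured samples, takes an FFT, and locates the spectral peak (the tone frequency and block size are arbitrary choices, not taken from the demo):

```python
import numpy as np

# Illustrative sketch of the spectrum-display step: window the captured
# ADC samples, take an FFT, and find the peak. Not the LabVIEW code.

fs = 4.0e9                                  # 4Gsamples/sec, matching the RF ADCs
n = 4096
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 500e6 * t)           # a 500MHz test tone

spectrum = np.abs(np.fft.rfft(x * np.hanning(n)))
peak_freq = spectrum.argmax() * fs / n
print(f"spectral peak at {peak_freq / 1e6:.1f} MHz")
```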

 

Here’s a 3-minute video of the demo:

 

 

 

 

 

Over the past weekend, Xilinx held a Showcase and PYNQ Hackathon in its Summit Retreat Center in Longmont, Colorado. About 100 people from tech companies all over Colorado attended the Showcase and twelve teams—about 40 people including students from local universities and engineers from industry—competed in the Hackathon.

 

Here are a few images from the Xilinx Showcase and PYNQ Hackathon:

 

 

 

 Xilinx Longmont Summit Retreat Center.jpg

 

Xilinx Summit Retreat Center in Longmont, Colorado

 

 

 

Showcase 3.jpg

 

Xilinx CTO Ivo Bolsens welcomes everyone to the Xilinx Showcase

 

 

 

Showcase 1.jpg

 

Xilinx Showcase Attendees

 

 

 

Showcase 2.jpg 

 

More Xilinx Showcase Attendees

 

 

 

Dans Welcome to the Hackathon.jpg

 

Xilinx VP of Interactive Design Tools and Xilinx Longmont Site Manager Dan Gibbons Welcomes the Hackers to the PYNQ Hackathon

 

 

 

Pizza Dinner.jpg

 

Abo’s Pizza (Boulder’s Finest) for Friday Night Dinner

 

 

 

Handing out Pynqs.jpg

 

Handing out Digilent PYNQ-Z1 boards

 

 

 

Work Begins 1.jpg

 

The PYNQ Hackathon Work Begins

 

 

 

A little advice.jpg

 

Giving a little help to the participants

 

 

 

Work Begins 2.jpg

 

Focus, Focus, Focus

 

 

 

 Logictools Mini Lecture.jpg

 

A Mini Lecture about the PYNQ Logictools Overlay for Hackathon Attendees

 

 

 

 

 

 Catching some sleep.jpg

 

Getting a little shuteye

 

 

 

Longs Peak.jpg

 

A view of Longs Peak from the Xilinx Longmont Summit Retreat Center

 

 

 

 

The event was organized by an internal Xilinx team including:

 

  • Craig Abramson
  • Mindy Brooks
  • Derek Curd
  • Greg Daughtry
  • Brad Fross
  • Adrian Hernandez
  • Graham Schelle

 

 

In addition, a team of helpers was on hand over the 30-hour duration of the event to answer questions:

 

  • Kv Thanjavur Bhaaskar
  • Anurag Dubey
  • Vikhyat Goyal
  • David Gronlund
  • Dustin Richmond
  • Naveen Purushotham
  • Rock Qu

 

 

For more information about the PYNQ Hackathon, see “12 PYNQ Hackathon teams competed for 30 hours, inventing remote-controlled robots, image recognizers, and an air keyboard.”

 

 

For more information about the Python-based, open-source PYNQ development environment and the Zynq-based Digilent PYNQ-Z1 dev board, see “Python + Zynq = PYNQ, which runs on Digilent’s new $229 pink PYNQ-Z1 Python Productivity Package.”

 

 

 

 

 

By Adam Taylor

 

 

We really need an operating system to harness the capabilities of the two or four 64-bit Arm Cortex-A53 processor cores in the Zynq UltraScale+ MPSoC APU (Application Processing Unit). An operating system enables effective APU use by providing SMP (Symmetric Multi-Processing), multitasking, networking, security, memory management, and file system capabilities. Immediate availability of these resources saves us considerable coding and debugging time and allows us to focus on developing the actual application. Putting it succinctly: don’t reinvent the wheel when it comes to operating systems.

 

 

 

Image1.jpg 

 

Avnet UltraZed-EG SOM plugged into an IOCC (I/O Carrier Card)

 

 

 

PetaLinux is one of the many operating systems that run on the Zynq UltraScale+ MPSoC’s APU. I am going to focus this blog on what we need to do to get PetaLinux up and running on the Zynq UltraScale+ MPSoC using the Avnet UltraZed-EG SoM. However, the process is the same for any design.

 

To do this we will need:

 

 

  • A Zynq UltraScale+ MPSoC Design: For this example, I will use the design we created last week
  • A Linux machine or Linux Virtual Machine
  • PetaLinux and the Xilinx software command-line tool chain installed to configure the PetaLinux OS and perform the build

 

 

To ensure that it installs properly, do not use root to install PetaLinux. In other words, do not use the sudo command. If you want to install PetaLinux in the /opt/pkg directory as recommended in the installation guide, you must first change the directory permissions for your user account.  Alternatively, you can install PetaLinux in your home area, which is exactly what I did.

 

With PetaLinux installed, run the settings script in a terminal window (source settings.sh), which sets the environment variables that allow us to call the PetaLinux commands.

 

When we build PetaLinux, we get a complete solution. The build process creates the Linux image, the device tree blob, and a RAM disk, combined into a single FIT image. PetaLinux also generates the PMU firmware, the FSBL (First Stage Boot Loader), and the U-Boot executables needed to create the boot.bin.

 

We need to perform the following steps to create the FIT image and boot files:

 

 

  • Export the Vivado hardware definition. This will export the hardware definition file under the directory <project_name.sdk> within the Vivado project

 

  • Create a new PetaLinux project. We are creating a design for a custom board (i.e., there is no PetaLinux BSP), so we will use the petalinux-create command with the zynqMP template:

 

 

petalinux-create --type project --template zynqMP --name part219

 

 

  • Import the hardware definition from Vivado to configure the PetaLinux project. This provides not only the bit file and the HDF but also the information used to create the device trees:

 

petalinux-config --get-hw-description=<path-to-vivado-hardware-description-file>

 

 

 

  • This will open a PetaLinux configuration menu; you should review the Linux kernel and U-Boot settings. For the basic build in this example, we do not need to change anything.

 

 

 

Image2.jpg

 

PetaLinux configuration menu presented once the hardware definition is imported

 

 

 

 

  • If desired, you can also review the configuration of the constituent parts of the build (U-Boot, PMU firmware, device tree, or root file system) by using the command:

 

petalinux-config -c <u-boot, pmufw, device-tree, or rootfs>

 

 

  • Build the PetaLinux image using the command:

 

 

petalinux-build

 

 

This might take some time.

 

 

  • Create the boot.bin file.

 

 

petalinux-package --boot --fsbl zynqmp_fsbl.elf --u-boot u-boot.elf --pmufw pmufw.elf --fpga fpga.bit

 

 

 

Once we have created the image file and the bin file, we can copy them to a SD card and boot the UltraZed-EG SOM.

 

Just as we simulate our logic designs first, we can test the PetaLinux image within our Linux build environment using QEMU. This allows us to verify that the image we have created will load correctly.

 

We run QEMU by issuing the following command:

 

 

petalinux-boot --qemu --image <linux-image-file>

 

 

 

Image3.jpg

 

The newly created PetaLinux image running in QEMU

 

 

 

Once we can see the PetaLinux image booting correctly in QEMU, the next step is to try it on the UltraZed-EG SOM. Copy the image.ub and boot.bin files to an SD card, configure the UltraZed-EG SOM mounted on the IOCC (I/O Carrier Card) to boot from the SD card, insert the SD card, and apply power.

 

If everything has been done correctly, you should see the FSBL load the PMU firmware in the terminal window. Then, U-Boot should run and load the Linux kernel.

 

 

 

Image4.jpg 

 

Linux running on the UltraZed-EG SOM

 

 

Once the boot process has completed, we can log in using the user name and password of root, and then begin using our PetaLinux environment.

 

Now that we have the Zynq UltraScale+ MPSoC’s APU up and running with PetaLinux, we can use OpenAMP to download and execute programs in the Zynq UltraScale+ MPSoC’s dual-core Arm Cortex-R5 RPU (Real-time Processing Unit). We will explore how to do this in another blog.

 

Meanwhile, the following links were helpful in the generation of this image:

 

 

https://www.xilinx.com/support/answers/68390.html

 

https://www.xilinx.com/support/answers/67158.html

 

https://wiki.trenz-electronic.de/display/PD/Petalinux+Troubleshoot#PetalinuxTroubleshoot-Petalinux2016.4

 

https://www.xilinx.com/support/documentation/sw_manuals/xilinx2017_2/ug1156-petalinux-tools-workflow-tutorial.pdf

 

 

 

 

The project, as always, is on GitHub.

 

 

 


 

 

 

 

Twelve student and industry teams competed for 30 straight hours in the Xilinx Hackathon 2017 competition over the weekend at the Summit Retreat Center in the Xilinx corporate facility located in Longmont, Colorado. Each team member received a Digilent PYNQ-Z1 dev board, which is based on a Xilinx Zynq Z-7020 SoC, and then used their fertile imaginations to conceive of and develop working code for an application using the open-source, Python-based PYNQ development environment, which is based on self-documenting Jupyter Notebooks. The online electronics and maker retailer SparkFun, located just down the street from the Xilinx facility in Longmont, supplied boxes of compatible peripheral boards with sensors and motor controllers to spur the team members’ imaginations. Several of the teams came from local universities including the University of Colorado at Boulder and the Colorado School of Mines in Golden, Colorado. At the end of the competition, eleven of the teams presented their results using their Jupyter Notebooks. Then came the prizes.

 

For the most part, team members had never used the PYNQ-Z1 boards and were not familiar with using programmable logic. In part, that was the intent of the Hackathon—to connect teams of inexperienced developers with appropriate programming tools and see what develops. That’s also the reason that Xilinx developed PYNQ: so that software developers and students could take advantage of the improved embedded performance made possible by the Zynq SoC’s programmable hardware without having to use ASIC-style (HDL) design tools to design hardware (unless they want to do so, of course).

 

Here are the projects developed by the teams, in the order presented during the final hour of the Hackathon (links go straight to the teams’ Github repositories with their Jupyter notebooks that document the projects with explanations and “working” code):

 

 

  • Team “from timemachine import timetravel” developed a sine wave generator with a PYNQ-callable frequency modulator and an audio spectrum analyzer. The team had time to develop a couple of different versions of the spectrum analyzer but not enough to link the generator and analyzer together.

 

  • Team “John Cena” developed a voice-controlled mobile robot. An application on a PC captured the WAV file for a spoken command sequence and this file was then wirelessly transmitted to the mobile robot, which interpreted commands and executed them.

 

 

Team John Cena Voice-Controlled Mobile Robot.jpg

 

Team John Cena’s Voice-Controlled Mobile Robot

 

 

 

  • Inspired by the recent Nobel Prize in Physics awarded to the three-person team that architected the LIGO gravity-wave observatories, Team “Daedalus” developed a Hackathon entry called “Sonic LIGO”—a sound localizer that takes audio captured by multiple microphones, uses time correlation to filter audio noise from the sounds of interest, and then triangulates the sound’s location from the phase measured at each microphone. Examples of sound events the team wanted to locate included hand claps and gun shots. The team planned to use its members’ three PYNQ-Z1 boards for the triangulation.
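Team Daedalus’s time-correlation idea can be sketched in a few lines of Python (a toy illustration, not the team’s actual notebook code): cross-correlating two microphone captures and taking the peak lag recovers the inter-microphone delay from which arrival directions can be triangulated.

```python
import numpy as np

def estimate_delay(ref, sig):
    """Estimate the lag (in samples) of `sig` relative to `ref` via full
    cross-correlation; a positive result means `sig` lags `ref`."""
    corr = np.correlate(sig, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

# Simulated hand clap: an impulse in light noise, heard 25 samples
# later at a second microphone.
rng = np.random.default_rng(0)
clap = np.zeros(1000)
clap[100] = 1.0
mic1 = clap + 0.01 * rng.standard_normal(1000)
mic2 = np.roll(clap, 25) + 0.01 * rng.standard_normal(1000)

delay = estimate_delay(mic1, mic2)   # expect 25 samples
```

With three microphones, pairwise delays like this one constrain the source position to intersecting hyperbolas, which is the triangulation step the team planned across its three boards.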

 

  • Team “Questionable” from the Colorado School of Mines developed an automated parking lot assistant to aid students looking for a parking space near the university. The design uses two motion detectors to detect cars passing through each lot’s entrances and exits. Timing between the two sensors determines whether the car is entering or leaving the lot. The team calls their application PARQYNG and produced a YouTube video to explain the idea:
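Team Questionable’s direction-sensing idea, using the order in which a car trips the two sensors, can be sketched in a few lines (a hypothetical Python sketch; the sensor naming and the timing values are invented for illustration):

```python
def classify_event(t_outer, t_inner):
    """Classify a car passing two motion sensors at a lot entrance.
    A car trips the outer (street-side) sensor first when entering
    and the inner (lot-side) sensor first when leaving.
    Timestamps are in seconds."""
    return "entering" if t_outer < t_inner else "leaving"

def update_count(free_spaces, event):
    """Track free spaces, never counting below zero."""
    if event == "entering":
        return max(free_spaces - 1, 0)
    return free_spaces + 1

spaces = 120
spaces = update_count(spaces, classify_event(10.0, 10.4))  # car enters
spaces = update_count(spaces, classify_event(62.8, 62.3))  # car leaves
```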

 

 

 

 

 

  • Team “Snapback” developed a Webcam-equipped cap that captures happy moments: it recognizes smiling faces and uses that recognition to trigger the capture of a short video clip, which is then wirelessly uploaded to the cloud for later viewing. The application was inspired by the oncoming memory loss of one team member’s grandmother.

 

  • Team “Trimble” from Trimble, Inc. developed a sophisticated application that determines position using photogrammetry techniques. The design uses the Zynq SoC’s programmable logic to accelerate the calculations.

 

  • Team “Codeing Crazy” developed an “air keyboard” (it’s like a working air guitar, but it’s a keyboard) using OpenCV to recognize the image of a hand in space, locate the recognized object within a region predefined as a keyboard, and then play the appropriate note.

 

  • Team “Joy of Pink” from CU Boulder developed a real-time emoji generator that recognizes facial expressions in an image, interprets the emotion shown on the subject’s face by sending the image to Microsoft’s cloud-based Azure Emotion API, and then substitutes the appropriate emoji in the image.

 

 

Team Joy of Pink Emoji Generator.jpg

 

Team “Joy of Pink” developed an emoji generator based on facial interpretation using Microsoft’s cloud-based Azure Emotion API

 

 

 

  • Team “Harsh Constraints” plunged headlong into a Verilog-based project to develop a 144MHz LVDS Camera Link interface to a thermal camera. It was a very ambitious venture for a team that had never before used Verilog.

 

 

  • Team “Caffeine” developed a tone-controlled robot using audio filters instantiated in the Zynq SoC’s programmable logic to decode four audio tones which then control robot motion. Here’s a block diagram:

 

 

Team Caffeine Audio Fiend Machine.jpg

 

 

Team Caffeine’s Audio Fiend Tone-Based Robotic Controller
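Team Caffeine decoded its command tones with filters in the Zynq SoC’s programmable logic. In software, the same single-tone detection is commonly done with the Goertzel algorithm; the Python sketch below is purely illustrative, and the four tone frequencies are assumptions, not the team’s actual values.

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Single-bin DFT power at target_freq via the Goertzel recurrence,
    a cheap way to detect one tone without computing a full FFT."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)   # nearest DFT bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

fs = 8000
tones = [697.0, 941.0, 1209.0, 1477.0]   # four command tones (assumed)
signal = [math.sin(2 * math.pi * 941.0 * i / fs) for i in range(400)]

powers = {f: goertzel_power(signal, fs, f) for f in tones}
detected = max(powers, key=powers.get)   # the 941 Hz tone dominates
```

Running one Goertzel filter per command tone and comparing powers is essentially a software analogue of the four hardware filters in the team’s block diagram.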

 

 

 

  • Team “Lynx” developed a face-recognition system that stores faces in the cloud in a spreadsheet on a Google drive based on whether or not the system has seen that face before. The system uses Haar-Cascade detection written in OpenCV.

 

 

After the presentations, the judges deliberated for a few minutes using multiple predefined criteria and then awarded the following prizes:

 

 

  • The “Murphy’s Law” prize for dealing with insurmountable circumstances went to Team Harsh Constraints.

 

  • The “Best Use of Programmable Logic” prize went to Team Caffeine.

 

  • The “Runner Up” prize went to Team Snapback.

 

  • The “Grand Prize” went to Team Questionable.

 

 

Congratulations to the winners and to all of the teams who spent 30 hours with each other in a large room in Colorado to experience the joy of hacking code to tackle some tough problems. (A follow-up blog will include a photographic record of the event so that you can see what it was like.)

 

 

 

For more information about the PYNQ development environment and the Digilent PYNQ-Z1 board, see “Python + Zynq = PYNQ, which runs on Digilent’s new $229 pink PYNQ-Z1 Python Productivity Package.”

 

 

 

LightReading has just posted a 5-minute video interview with Kirk Saban (Xilinx’s Senior Director for FPGA and SoC Product Management and Marketing) discussing some of the aspects of the newly announced Zynq UltraScale+ RFSoCs with on-chip RF ADCs and DACs. These devices are going to revolutionize the design of all sorts of high-end equipment that must deal with high-speed analog signals in markets as diverse as 5G communications, broadband cable, test and measurement, and aerospace/defense.

 

Here’s the video:

 

 

 

 

 

 

For more information about the new Zynq UltraScale+ RFSoC family, see “Zynq UltraScale+ RFSoC: All the processing power of 64- and 32-bit ARM cores, programmable logic plus RF ADCs, DACs.”

 

 

 

Voice-controlled systems are suddenly a thing thanks to Amazon’s Alexa and Google Home. But how do you get reliable far-field voice recognition that remains robust in the presence of noise? That’s the question being answered by Aaware with its $199 Far-Field Development Platform. This system couples as many as 13 MEMS microphones (you can use fewer in a 1D linear or 2D array) with a Xilinx Zynq Z-7010 SoC to pre-filter incoming voice, delivering a clean voice data stream to local or cloud-based voice-recognition hardware. The system has a built-in wake word (like “Alexa” or “OK, Google”) that triggers the unit’s filtering algorithms.
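One classic microphone-array technique for this kind of clean-up is delay-and-sum beamforming. Aaware’s actual algorithms are its own; the Python sketch below only illustrates the general idea of aligning channels on the target source and averaging, which reinforces the voice while averaging down uncorrelated noise.

```python
import numpy as np

def delay_and_sum(channels, delays):
    """Align each microphone channel by its integer-sample arrival delay
    and average the aligned channels."""
    aligned = [np.roll(ch, -d) for ch, d in zip(channels, delays)]
    return np.mean(aligned, axis=0)

rng = np.random.default_rng(1)
t = np.arange(4000)
speech = np.sin(2 * np.pi * 440 * t / 16000)   # stand-in for a voice
delays = [0, 3, 7, 12]                         # per-mic arrival offsets
channels = [np.roll(speech, d) + 0.5 * rng.standard_normal(t.size)
            for d in delays]

clean = delay_and_sum(channels, delays)
# Residual noise drops roughly as 1/sqrt(number of microphones).
```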

 

Here’s a video showing you the Aaware Far-Field Development Platform in action:

 

 

 

 

 

  

Aaware’s technology makes significant use of the Zynq Z-7010 SoC’s programmable-logic and DSP processing capabilities to implement and accelerate the company’s sound-capture technologies including:

 

 

  • Noise and echo cancellation
  • Source detection and localization

 

 

You’ll find more technology details for the Aaware Far-Field Development Platform here.

 

 

Please contact Aaware directly for more information.

 

 

 

 

 

Earlier today, Xilinx formally announced delivery of the first Zynq UltraScale+ RFSoCs. (See “Zynq UltraScale+ RFSoC: All the processing power of 64- and 32-bit ARM cores, programmable logic plus RF ADCs, DACs.”) Now there’s a specification-packed video showing one of these devices in action. It’s only four minutes long, but you’ll probably need to view it at least twice to unpack everything you’ll see in terms of RF processing, ADC and DAC performance. Better strap yourself in for the ride.

 

 

 

 

For more information about the Zynq UltraScale+ RFSoC device family, please contact your friendly neighborhood Xilinx sales representative or Avnet sales representative.

 

 

You say you want a revolution

Well, you know

We all want to change the world

 

--“Revolution” written by John Lennon and Paul McCartney

 

 

Today, Xilinx announced five new Zynq UltraScale+ RFSoC devices with all of the things you expect in a Xilinx Zynq UltraScale+ SoC—a 4-core APU with 64-bit ARM Cortex-A53 processor cores, a 2-core RPU with 32-bit ARM Cortex-R5 processors, and ultra-fast UltraScale+ programmable logic—with revolutionary new additions: 12-bit, RF-class ADCs; 14-bit, RF-class DACs; and integrated SD-FEC (Soft-Decision Forward Error Correction) cores.

 

Just in case you missed the plurals in that last sentence, it’s not one but multiple RF ADCs and DACs.

 

That means you can bring RF analog signals directly into these chips, process those signals using high-speed programmable logic along with thousands of DSP48E2 slices, and then output processed RF analog signals—using the same device to do everything. In addition, if you’re decoding and/or encoding data, two of the announced Zynq UltraScale+ RFSoC family members incorporate SD-FEC IP cores that support LDPC coding/decoding and Turbo decoding for applications including 5G wireless communications, backhaul, DOCSIS, and LTE. If you’re dealing with RF signals and high-speed communications, you know just how revolutionary these parts are.
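The SD-FEC cores implement full soft-decision LDPC decoding in hardware. Purely as a toy illustration of the underlying parity-check idea (using a tiny (7,4) Hamming code rather than a real, sparse LDPC matrix), a word is a valid codeword exactly when its syndrome is zero:

```python
import numpy as np

# Toy parity-check matrix H. Real LDPC matrices are large and sparse,
# but the validity test is the same: c is a codeword iff H @ c = 0 (mod 2).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def syndrome(c):
    """Nonzero entries flag violated parity checks."""
    return H.dot(c) % 2

codeword = np.array([1, 0, 1, 1, 0, 1, 0])   # satisfies all three checks
corrupted = codeword.copy()
corrupted[2] ^= 1                            # a single bit error
```

An LDPC decoder iterates on exactly these check equations, passing soft reliability messages between bits and checks until the syndrome clears.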

 

For everyone else not accustomed to handling RF analog signals… well, you’ll just have to take my word for it. These devices are revolutionary.

 

Here’s a conceptual block diagram of a Zynq UltraScale+ RFSoC:

 

 

 

RFSoC Block Diagram.jpg 

 

 

Here’s a table that shows you some of the resources available on the new Zynq UltraScale+ RFSoC devices:

 

 

RFSoC Product Table.jpg 

 

 

As you can see from the table, you can get as many as eight 12-bit, 4Gsamples/sec ADCs or sixteen 12-bit, 2Gsamples/sec ADCs on one device. You can also get eight or sixteen 14-bit, 6.4Gsamples/sec DACs on the same device. Two of the Zynq UltraScale+ RFSoCs also incorporate eight SD-FECs. In addition, there are plenty of logic cells, DSP slices, and RAM on these devices to build just about anything you can imagine. (With my instrumentation background, I can imagine new classes of DSOs and VSTs (Vector Signal Transceivers), for example.)
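To put those converter sample rates in perspective, an ADC’s alias-free analog bandwidth in the first Nyquist zone is half its sample rate. A quick sanity check:

```python
def first_nyquist_bandwidth_hz(sample_rate_hz):
    """An ADC sampling at Fs captures at most Fs/2 of analog
    bandwidth without aliasing (the first Nyquist zone)."""
    return sample_rate_hz / 2

# The two ADC configurations from the table above
bw_4gsps = first_nyquist_bandwidth_hz(4e9)   # 2 GHz per ADC
bw_2gsps = first_nyquist_bandwidth_hz(2e9)   # 1 GHz per ADC
```

Band-pass sampling in higher Nyquist zones can reach higher carrier frequencies, but Fs/2 is the basic instantaneous-bandwidth budget per converter.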

 

You get numerous benefits by basing your design on a Zynq UltraScale+ RFSoC device. The first and most obvious is real estate. Putting the ADCs, DACs, processors, programmable logic, DSPs, memory, and programmable I/O on one device saves a tremendous amount of board space and means you won’t be running high-speed traces across the PCB to hook all these blocks together.

 

Next, you save the complexity of dealing with high-speed converter interfaces like JESD204B/C. The analog converters are already interfaced to the processors and logic inside of the device. Done. Debugged. Finished.

 

You also save the power associated with those high-speed interfaces. That alone can amount to several watts of power savings. These benefits are reviewed in a new White Paper titled “All Programmable RF-Sampling Solutions.”

 

There’s a device family overview here.

 

And just one more thing. Today’s announcement didn’t just announce the Zynq UltraScale+ RFSoC device family. The announcement headline included one more, very important word: “Delivers.”

 

 

As in shipped.

 

 

Because Xilinx doesn’t just announce parts. We deliver them too.

 

 

For more information about the Zynq UltraScale+ RFSoC device family, please contact your friendly neighborhood Xilinx sales representative or Avnet sales representative.

 

 

 

By Adam Taylor

 

The Xilinx Zynq UltraScale+ MPSoC is good for many applications including embedded vision. Its APU with two or four 64-bit ARM Cortex-A53 processors, Mali GPU, DisplayPort interface, and on-chip programmable logic (PL) give the Zynq UltraScale+ MPSoC plenty of processing power to address exciting applications such as ADAS and vision-guided robotics with relative ease. Further, we can use the device’s PL and its programmable I/O to interface with a range of vision and video standards including MIPI, LVDS, parallel, VoSPI, etc. When it comes to interfacing image sensors, the Zynq UltraScale+ MPSoC can handle just about anything you throw at it.

 

Once we’ve brought the image into the Zynq UltraScale+ MPSoC’s PL, we can implement an image-processing pipeline using existing IP cores from the Xilinx library or we can develop our own custom IP cores using Vivado HLS (high-level synthesis). However, for many applications we’ll need to move the images into the device’s PS (processing system) domain before we can apply exciting application-level algorithms such as decision making or use the Xilinx reVISION acceleration stack.

 

 

 

Image1.jpg 

 

The original MicroZed Evaluation kit and UltraZed board used for this demo

 

 

 

I thought I would kick off the fourth year of this blog with a look at how we can use VDMA instantiated in the Zynq MPSoC’s PL to transfer images from the PL to the PS-attached DDR Memory without processor intervention. You often need to make such high-speed background transfers in a variety of applications.

 

To do this we will use the following IP blocks:

 

  • Zynq MPSoC core – Configured to enable both a Full Power Domain (FPD) AXI HP Master and FPD HPC AXI Slave, along with providing at least one PL clock and reset to the PL fabric.
  • VDMA core – Configured for write-only operations, no FSync option, and a Genlock Mode of master
  • Test Pattern Generator (TPG) – Configurable over the AXI Lite interface
  • AXI Interconnects – Implement the Master and Slave AXI networks

 

 

Once configured over its AXI Lite interface, the Test Pattern Generator outputs test patterns which are then transferred into the PS-attached DDR memory. We can demonstrate that this has been successful by examining the memory locations using SDK.

 

 

Image2.jpg 

 

Enabling the FPD Master and Slave Interfaces

 

 

 

For this simple example, we’ll clock both the AXI networks at the same frequency, driven by PL_CLK_0 at 100MHz.

 

For a deployed system, an image sensor would replace the TPG as the image source and we would need to ensure that the VDMA input-channel clocks (Slave-to-Memory-Map and Memory-Map-to-Slave) were fast enough to support the required pixel and frame rate.  For example, a sensor with a resolution of 1280 pixels by 1024 lines running at 60 frames per second would require a clock rate of at least 108MHz. We would need to adjust the clock frequency accordingly.
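The 108MHz figure above accounts for blanking intervals, not just active pixels. A quick check in Python (the 1688 x 1066 total timing is an assumption taken from the common VESA SXGA standard; the original text states only the result):

```python
def required_pixel_clock_hz(h_total, v_total, fps):
    """Pixel clock = total pixels per frame (including blanking)
    multiplied by the frame rate."""
    return h_total * v_total * fps

# Active video alone: 1280 x 1024 at 60 frames per second
active_rate = required_pixel_clock_hz(1280, 1024, 60)   # 78.6 MHz

# With assumed standard SXGA blanking (1688 x 1066 total timing)
full_rate = required_pixel_clock_hz(1688, 1066, 60)     # ~108 MHz
```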

 

 

 

Image3.jpg

 

Block Diagram of the completed design

 

 

 

To aid visibility within this example, I have included three ILA modules, which are connected to the outputs of the Test Pattern Generator, AXI VDMA, and the Slave Memory Interconnect. Adding these modules enables the use of Vivado’s hardware manager to verify that the software has correctly configured the TPG and the VDMA to transfer the images.

 

With the Vivado design complete and built, creating the application software to configure the TPG and VDMA to generate images and move them from the PL to the PS is very straightforward. We use the AXIVDMA, V_TPG, and Video Common APIs, available under the BSP lib source directory, to aid in creating the application. The software itself performs the following:

 

  1. Initialize the TPG and the AXI VDMA for use in the software application
  2. Configure the TPG to generate a test pattern configured as below
    1. Set the Image Width to 1280, Image Height to 1080
    2. Set the color space to YCRCB, 4:2:2 format
    3. Set the TPG background pattern
    4. Enable the TPG and set it for auto reloading
  3. Configure the VDMA to write data into the PS memory
    1. Set up the VDMA parameters using a variable of the type XAxiVdma_DmaSetup – remember the horizontal size and stride are measured in bytes not pixels.
    2. Configure the VDMA with the setting defined above
    3. Set the VDMA frame store location address in the PS DDR
    4. Start VDMA transfer

The application will then start generating test frames, transferred from the TPG into the PS DDR memory. I disabled the caches for this example to ensure that the DDR memory is updated.
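Step 3.1’s bytes-versus-pixels distinction is a common stumbling block. Here is a small Python sketch of the arithmetic for the YCrCb 4:2:2 frames configured above (the real configuration happens in C through XAxiVdma_DmaSetup; this is only illustrative):

```python
BYTES_PER_PIXEL = 2   # YCrCb 4:2:2 packs two bytes per pixel

def vdma_geometry(width_px, height_lines, bytes_per_pixel=BYTES_PER_PIXEL):
    """The VDMA's horizontal size and stride are specified in bytes,
    not pixels. Stride may exceed hsize if frame-store lines are
    padded; here they are equal (no padding)."""
    hsize = width_px * bytes_per_pixel
    return {"hsize": hsize, "stride": hsize, "vsize": height_lines}

geom = vdma_geometry(1280, 1080)   # the TPG configuration above
# geom["hsize"] is 2560 bytes, twice the 1280-pixel width
```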

 

Examining the ILAs, you will see the TPG generating frames and the VDMA transferring the stream into memory-mapped format:

 

 

 

Image4.jpg

 

TPG output, TUSER indicates start of frame while TLAST indicates end of line

 

 

 

Image5.jpg

 

VDMA Memory Mapped Output to the PS

 

 

 

Examining the frame store memory location within the PS DDR memory using SDK demonstrates that the pixel values are present.

 

 

Image6.jpg 

 

Test Pattern Pixel Values within the PS DDR Memory

 

 

 

 

You can use the same approach in Vivado when creating software for a Zynq Z-7000 SoC instead of a Zynq UltraScale+ MPSoC by enabling the AXI GP master for the AXI Lite bus and the AXI HP slave for the VDMA channel.

 

Should you be experiencing trouble with your VDMA-based image-processing chain, you might want to read this blog.

 

 

The project, as always, is on GitHub.

 

 

 


 

 

 

I joined Xilinx five years ago and have looked for a good introductory book on FPGA-based design ever since because people have repeatedly asked me for my recommendation. Until now, I could mention but not recommend Max Maxfield’s book published in 2004 titled “The Design Warrior’s Guide to FPGAs”—not because it was bad (it’s excellent) but because it’s more than a decade out of date. Patrick Lysaght, a Senior Director in the Xilinx CTO's office, alerted me to a brand-new book that I can now recommend to anyone who wants to learn about using programmable logic to design digital systems.

 

It’s titled “Digital Systems Design with FPGA: Implementation Using Verilog and VHDL” and it was written by Prof. Dr. Cem Ünsalan in the Department of Electrical and Electronics Engineering at Yeditepe University in İstanbul and Dr. Bora Tar, now at the Power Management Research Lab at Ohio State University in Columbus, Ohio. Their book will take you from the basics of digital design and logic into FPGAs; FPGA architecture including programmable logic, block RAM, DSP slices, FPGA clock management, and programmable I/O; hardware description languages with an equal emphasis on Verilog and VHDL; the Xilinx Vivado Design Environment; and then on to IP cores including the Xilinx MicroBlaze and PicoBlaze soft processors. The book ends with 24 advanced embedded design projects. It’s quite obvious that the authors intend that this book be used as a textbook in a college-level digital design class (or two), but I think you could easily use this well-written book for self-directed study as well.

 

 

 

Digital System Design with FPGA Book cover.jpg

 

 

“Digital Systems Design with FPGA: Implementation Using Verilog and VHDL” uses the Xilinx Artix-7 FPGA as a model for describing the various aspects of a modern FPGA and goes on to describe two Digilent development boards based on the Artix-7 FPGA: the $149 Basys3 and the $99 Arty (now called the Arty A7 to differentiate it from the newer Arty S7, based on a Spartan-7 FPGA, and the Zynq-based Arty Z7 dev boards). These boards are great for use in introductory design classes and they make powerful, low-cost development boards even for experienced designers.

 

At 400 pages, “Digital Systems Design with FPGA: Implementation Using Verilog and VHDL” is quite comprehensive. The book is so new that the publisher has yet to put the table of contents online, so I decided to resolve that problem by publishing the contents here, letting you see for yourself just how much the book covers:

 

 

1 Introduction

1.1 Hardware Description Languages

1.2 FPGA Boards and Software Tools

1.3 Topics to Be Covered in the Book

 

2 Field-Programmable Gate Arrays

2.1 A Brief Introduction to Digital Electronics

2.1.1 Bit Values as Voltage Levels

2.1.2 Transistor as a Switch

2.1.3 Logic Gates from Switches

2.2 FPGA Building Blocks

2.2.1 Layout of the Xilinx Artix-7 XC7A35T FPGA

2.2.2 Input / Output Blocks

2.2.3 Configurable Logic Blocks

2.2.4 Interconnect Resources

2.2.5 Block RAM

2.2.6 DSP Slices

2.2.7 Clock Management

2.2.8 The XADC Block

2.2.9 High-Speed Serial I/O Transceivers

2.2.10 Peripheral Component Interconnect Express Interface

2.3 FPGA-Based Digital System Design Philosophy

2.3.1 How to Think While Using FPGAs

2.3.2 Advantages and Disadvantages of FPGAs

2.4 Usage Areas of FPGAs

2.5 Summary

2.6 Exercises

 

3 Basys3 and Arty FPGA Boards

3.1 The Basys3 Board

3.1.1 Powering the Board

3.1.2 Input / Output

3.1.3 Configuring the FPGA

3.1.4 Advanced Connectors

3.1.5 External Memory

3.1.6 Oscillator / Clock

3.2 The Arty Board

3.2.1 Powering the Board

3.2.2 Input/Output

3.2.3 Configuring the FPGA

3.2.4 Advanced Connectors

3.2.5 External Memory

3.2.6 Oscillator / Clock

3.3 Summary

3.4 Exercises

 

4 The Vivado Design Suite

4.1 Installation and the Welcome Screen

4.2 Creating a New Project

4.2.1 Adding a Verilog File

4.2.2 Adding a VHDL File

4.3 Synthesizing the Project

4.4 Simulating the Project

4.4.1 Adding a Verilog Testbench File

4.4.2 Adding a VHDL Testbench File

4.5 Implementing the Synthesized Project

4.6 Programming the FPGA

4.6.1 Adding the Basys3 Board Constraint File to the Project

4.6.2 Programming the FPGA on the Basys3 Board

4.6.3 Adding the Arty Board Constraint File to the Project

4.6.4 Programming the FPGA on the Arty Board

4.7 Vivado Design Suite IP Management

4.7.1 Existing IP Blocks in Vivado

4.7.2 Generating a Custom IP

4.8 Application on the Vivado Design Suite

4.9 Summary

4.10 Exercises

 

5 Introduction to Verilog and VHDL

5.1 Verilog Fundamentals

5.1.1 Module Representation

5.1.2 Timing and Delays in Modeling

5.1.3 Hierarchical Module Representation

5.2 Testbench Formation in Verilog

5.2.1 Structure of a Verilog Testbench File

5.2.2 Displaying Test Results

5.3 VHDL Fundamentals

5.3.1 Entity and Architecture Representations

5.3.2 Dataflow Modeling

5.3.3 Behavioral Modeling

5.3.4 Timing and Delays in Modeling

5.3.5 Hierarchical Structural Representation

5.4 Testbench Formation in VHDL

5.4.1 Structure of a VHDL Testbench File

5.4.2 Displaying Test Results

5.5 Adding an Existing IP to the Project

5.5.1 Adding an Existing IP in Verilog

5.5.2 Adding an Existing IP in VHDL

5.6 Summary

5.7 Exercises

 

6 Data Types and Operators

6.1 Number Representations

6.1.1 Binary Numbers

6.1.2 Octal Numbers

6.1.3 Hexadecimal Numbers

6.2 Negative Numbers

6.2.1 Signed Bit Representation

6.2.2 One’s Complement Representation

6.2.3 Two’s Complement Representation

6.3 Fixed- and Floating-Point Representations

6.3.1 Fixed-Point Representation

6.3.2 Floating-Point Representation

6.4 ASCII Code

6.5 Arithmetic Operations on Binary Numbers

6.5.1 Addition

6.5.2 Subtraction

6.5.3 Multiplication

6.5.4 Division

6.6 Data Types in Verilog

6.6.1 Net and Variable Data Types

6.6.2 Data Values

6.6.3 Naming a Net or Variable

6.6.4 Defining Constants and Parameters

6.6.5 Defining Vectors

6.7 Operators in Verilog

6.7.1 Arithmetic Operators

6.7.2 Concatenation and Replication Operators

6.8 Data Types in VHDL

6.8.1 Signal and Variable Data Types

6.8.2 Data Values

6.8.3 Naming a Signal or Variable

6.8.4 Defining Constants

6.8.5 Defining Arrays

6.9 Operators in VHDL

6.9.1 Arithmetic Operators

6.9.2 Concatenation Operator

6.10 Application on Data Types and Operators

6.11 FPGA Building Blocks Used in Data Types and Operators

6.11.1 Implementation Details of Vector Operations

6.11.2 Implementation Details of Arithmetic Operations

6.12 Summary

6.13 Exercises

 

7 Combinational Circuits

7.1 Basic Definitions

7.1.1 Binary Variable

7.1.2 Logic Function

7.1.3 Truth Table

7.2 Logic Gates

7.2.1 The NOT Gate

7.2.2 The OR Gate

7.2.3 The AND Gate

7.2.4 The XOR Gate

7.3 Combinational Circuit Analysis

7.3.1 Logic Function Formation between Input and Output

7.3.2 Boolean Algebra

7.3.3 Gate-Level Minimization

7.4 Combinational Circuit Implementation

7.4.1 Truth Table-Based Implementation

7.4.2 Implementing One-Input Combinational Circuits

7.4.3 Implementing Two-Input Combinational Circuits

7.4.4 Implementing Three-Input Combinational Circuits

7.5 Combinational Circuit Design

7.5.1 Analyzing the Problem to Be Solved

7.5.2 Selecting a Solution Method

7.5.3 Implementing the Solution

7.6 Sample Designs

7.6.1 Home Alarm System

7.6.2 Digital Safe System

7.6.3 Car Park Occupied Slot Counting System

7.7 Applications on Combinational Circuits

7.7.1 Implementing the Home Alarm System

7.7.2 Implementing the Digital Safe System

7.7.3 Implementing the Car Park Occupied Slot Counting System

7.8 FPGA Building Blocks Used in Combinational Circuits

7.9 Summary

7.10 Exercises

 

8 Combinational Circuit Blocks

8.1 Adders

8.1.1 Half Adder

8.1.2 Full Adder

8.1.3 Adders in Verilog

8.1.4 Adders in VHDL

8.2 Comparators

8.2.1 Comparators in Verilog

8.2.2 Comparators in VHDL

8.3 Decoders

8.3.1 Decoders in Verilog

8.3.2 Decoders in VHDL

8.4 Encoders

8.4.1 Encoders in Verilog

8.4.2 Encoders in VHDL

8.5 Multiplexers

8.5.1 Multiplexers in Verilog

8.5.2 Multiplexers in VHDL

8.6 Parity Generators and Checkers

8.6.1 Parity Generators

8.6.2 Parity Checkers

8.6.3 Parity Generators and Checkers in Verilog

8.6.4 Parity Generators and Checkers in VHDL

8.7 Applications on Combinational Circuit Blocks

8.7.1 Improving the Calculator

8.7.2 Improving the Home Alarm System

8.7.3 Improving the Car Park Occupied Slot Counting System

8.8 FPGA Building Blocks Used in Combinational Circuit Blocks

8.9 Summary

8.10 Exercises

 

9 Data Storage Elements

9.1 Latches

9.1.1 SR Latch

9.1.2 D Latch

9.1.3 Latches in Verilog

9.1.4 Latches in VHDL

9.2 Flip-Flops

9.2.1 D Flip-Flop

9.2.2 JK Flip-Flop

9.2.3 T Flip-Flop

9.2.4 Flip-Flops in Verilog

9.2.5 Flip-Flops in VHDL

9.3 Register

9.4 Memory

9.5 Read-Only Memory

9.5.1 ROM in Verilog

9.5.2 ROM in VHDL

9.5.3 ROM Formation Using IP Blocks

9.6 Random Access Memory

9.7 Application on Data Storage Elements

9.8 FPGA Building Blocks Used in Data Storage Elements

9.9 Summary

9.10 Exercises

 

10 Sequential Circuits

10.1 Sequential Circuit Analysis

10.1.1 Definition of State

10.1.2 State and Output Equations

10.1.3 State Table

10.1.4 State Diagram

10.1.5 State Representation in Verilog

10.1.6 State Representation in VHDL

10.2 Timing in Sequential Circuits

10.2.1 Synchronous Operation

10.2.2 Asynchronous Operation

10.3 Shift Register as a Sequential Circuit

10.3.1 Shift Registers in Verilog

10.3.2 Shift Registers in VHDL

10.3.3 Multiplication and Division Using Shift Registers

10.4 Counter as a Sequential Circuit

10.4.1 Synchronous Counter

10.4.2 Asynchronous Counter

10.4.3 Counters in Verilog

10.4.4 Counters in VHDL

10.4.5 Frequency Division Using Counters

10.5 Sequential Circuit Design

10.6 Applications on Sequential Circuits

10.6.1 Improving the Home Alarm System

10.6.2 Improving the Digital Safe System

10.6.3 Improving the Car Park Occupied Slot Counting System

10.6.4 Vending Machine

10.6.5 Digital Clock

10.7 FPGA Building Blocks Used in Sequential Circuits

10.8 Summary

10.9 Exercises

 

11 Embedding a Soft-Core Microcontroller

11.1 Building Blocks of a Generic Microcontroller

11.1.1 Central Processing Unit

11.1.2 Arithmetic Logic Unit

11.1.3 Memory

11.1.4 Oscillator / Clock

11.1.5 General Purpose Input/Output

11.1.6 Other Blocks

11.2 Xilinx PicoBlaze Microcontroller

11.2.1 Functional Blocks of PicoBlaze

11.2.2 PicoBlaze in Verilog

11.2.3 PicoBlaze in VHDL

11.2.4 PicoBlaze Application on the Basys3 Board

11.3 Xilinx MicroBlaze Microcontroller

11.3.1 MicroBlaze as an IP Block in Vivado

11.3.2 MicroBlaze MCS Application on the Basys3 Board

11.4 Soft-Core Microcontroller Applications

11.5 FPGA Building Blocks Used in Soft-Core Microcontrollers

11.6 Summary

11.7 Exercises

 

12 Digital Interfacing

12.1 Universal Asynchronous Receiver/Transmitter

12.1.1 Working Principles of UART

12.1.2 UART in Verilog

12.1.3 UART in VHDL

12.1.4 UART Applications

12.2 Serial Peripheral Interface

12.2.1 Working Principles of SPI

12.2.2 SPI in Verilog

12.2.3 SPI in VHDL

12.2.4 SPI Application

12.3 Inter-Integrated Circuit

12.3.1 Working Principles of I2C

12.3.2 I2C in Verilog

12.3.3 I2C in VHDL

12.3.4 I2C Application

12.4 Video Graphics Array

12.4.1 Working Principles of VGA

12.4.2 VGA in Verilog

12.4.3 VGA in VHDL

12.4.4 VGA Application

12.5 Universal Serial Bus

12.5.1 USB-Receiving Module in Verilog

12.5.2 USB-Receiving Module in VHDL

12.5.3 USB Keyboard Application

12.6 Ethernet

12.7 FPGA Building Blocks Used in Digital Interfacing

12.8 Summary

12.9 Exercises

 

13 Advanced Applications

13.1 Integrated Logic Analyzer IP Core Usage

13.2 The XADC Block Usage

13.3 Adding Two Floating-Point Numbers

13.4 Calculator

13.5 Home Alarm System

13.6 Digital Safe System

13.7 Car Park Occupied Slot Counting System

13.8 Vending Machine

13.9 Digital Clock

13.10 Moving Wave Via LEDs

13.11 Translator

13.12 Air Freshener Dispenser

13.13 Obstacle-Avoiding Tank

13.14 Intelligent Washing Machine

13.15 Non-Touch Paper Towel Dispenser

13.16 Traffic Lights

13.17 Car Parking Sensor System

13.18 Body Weight Scale

13.19 Intelligent Billboard

13.20 Elevator Cabin Control System

13.21 Digital Table Tennis Game

13.22 Customer Counter

13.23 Frequency Meter

13.24 Pedometer

 

14 What Is Next?

14.1 Vivado High-Level Synthesis Platform

14.2 Developing a Project in Vivado HLS to Generate IP

14.3 Using the Generated IP in Vivado

14.4 Summary

14.5 Exercises

 

References

Index

 

 

 

Note: In an acknowledgement in the book, the authors thank Xilinx's Cathal McCabe, an AE working within the Xilinx University Program, for his guidance and assistance. They also thank Digilent for allowing them to use the Basys3 and Arty board images and sample projects in the book.

 

MathWorks has just published a 4-part mini course that teaches you how to develop vision-processing applications using MATLAB, HDL Coder, and Simulink, then walks you through a practical example targeting a Xilinx Zynq SoC using a lane-detection algorithm in Part 4.

 

 

Click here for each of the classes:

 

 

 

 

 

 

 

How to power your Zynq or Spartan-7 devices: New 6-minute video contains many proven examples

by Xilinx Employee, 09-27-2017 10:49 AM (edited 09-28-2017 08:06 AM)

 

Different system performance, size, efficiency, and cost goals call for using different Xilinx devices and different solutions to power those devices. Xilinx has partnered with many leading power-semiconductor vendors to provide you with a wide selection of proven power designs, and a brand-new video discusses six of these designs for systems based on Xilinx Zynq UltraScale+ MPSoCs, Zynq Z-7000 SoCs, and Spartan-7 FPGAs in a shockingly brief six minutes. The video’s examples were provided by:

 

 

  • Infineon, as used in the Avnet UltraZed EG kit based on a Xilinx Zynq UltraScale+ MPSoC
  • TI, as used to power a Zynq UltraScale+ MPSoC in remote radio head applications
  • Maxim, as used to power a Zynq UltraScale+ MPSoC, also for remote radio head applications
  • Monolithic Power Systems (MPS) for Zynq Z-7000 SoCs
  • Exar for Zynq Z-7000 SoCs
  • Dialog Semiconductor for Spartan-7 FPGAs

 

 

One size can’t fit all when it comes to power.

 

Here’s the video:

 

 

 

 

 

 

For more information on these power solutions plus more, click here.

 

 

 

 

Eye Vision Technology (EVT) has announced a tiny new line in its EyeCheck series of smart industrial cameras. According to the EVT announcement, the EyeCheck ZQ smart camera is “hardly bigger than an index finger” (no precise specifications available). Even at that size, the EyeCheck ZQ camera manages to integrate four illuminating LEDs around the camera’s lens. An onboard Xilinx Zynq SoC supplies the camera’s intelligence and makes the camera software-programmable for applications including bar, DMC, and QR code reading; pattern matching; counting and measuring objects; and object error detection, for example detecting visible manufacturing defects.

 

 

EVT EyeCheck ZQ Smart Camera.jpg 

 

 

EVT’s Zynq-based EyeCheck ZQ smart camera is a little larger than an index finger

 

 

 

Programming involves drag-and-drop, graphical programming using the company’s EyeVision software. According to the EVT announcement, use of the Zynq SoC allows the camera to run applications much faster than any other camera in the company’s EyeCheck series. EyeVision commands take the form of icons, which can be lined up in sequence. As an example, a bar-code-reading application program requires only three icons. The EyeCheck ZQ camera is also available as a pre-programmed vision sensor, which EVT calls its EyeSens series.

 

Because of its high processing speed, small size, and light weight, the EyeCheck ZQ smart camera is well suited for smart-imaging applications including robot arms used for production lines in the automotive or electronics industry.

 

 

 

 

Linaro Logo.jpg Today, Linaro Ltd announced that Xilinx has joined the Linaro IoT and Embedded (LITE) Segment Group, which is a collaborative working group of companies working to reduce fragmentation in operating systems, middleware, and cloud connectivity by delivering open-source device reference platforms to enable faster time to market, improved security, and lower maintenance costs for connected products.

 

LITE’s Director Matt Locke said: “Discussions between Linaro and Xilinx have ranged from LITE gateway and security work through networking, 96Boards, and Xilinx All Programmable SoC and MPSoC platforms. I expect initial collaboration will focus on the gateway, but I look forward to building on this relationship to bring the benefits of collaborative, open-source engineering to other areas in Xilinx’s broad range of product offerings.”

 

Today’s announcement focuses on LITE’s ongoing work in the IoT and embedded space. “Becoming a member of the LITE group will enable Xilinx to optimize the Linaro open source stacks with our All Programmable SoCs”, said Tomas Evensen, CTO Embedded Software at Xilinx.

 

The Linaro tools and Linux kernel have been running on the Xilinx Zynq Z-7000 SoC’s on-chip ARM Cortex-A9 MPCore APU processors for several years. For example, here’s a 4-year-old article from 2013 by Professor Dr. Rüdiger Heintz of DHBW Mannheim (Baden-Wuerttemberg Cooperative State University Mannheim) titled “Development with Zynq – Part 4 – Boot Linaro from SD Card” that describes work with Avnet’s Zynq-based Zedboard and versions of this Analog Devices Wiki post titled “Linux with HDMI video output on the ZED, ZC702 and ZC706 boards” date all the way back to the middle of 2012.

 

 

Xylon has a new hardware/software development kit for quickly implementing embedded, multi-camera vision systems for ADAS and AD (autonomous driving), machine-vision, AR/VR, guided robotics, drones, and other applications. The new logiVID-ZU Vision Development Kit is based on the Xilinx Zynq UltraScale+ MPSoC and includes four Xylon 1Mpixel video cameras based on the TI FPD (flat-panel display) Link-III interface. The kit supports HDMI video input and display output and comes complete with extensive software deliverables including pre-verified camera-to-display SoC designs built with licensed Xylon logicBRICKS IP cores, reference designs and design examples prepared for the Xilinx SDSoC Development Environment, and complete demo Linux applications.

 

 

 

 

Xylon logiVID-ZU Vision Development Kit .jpg 

 

Xylon’s new logiVID-ZU Vision Development Kit

 

 

 

Please contact Xylon for more information about the new logiVID-ZU Vision Development Kit.

 

 

 

 

 

 

By Adam Taylor

 

Recently, I received two questions from engineers on how to use SPI with the Zynq SoC and Zynq UltraScale+ MPSoC. Having answered these, I thought a detailed blog post on the different uses of SPI would be of interest.

 

 

Image1.jpg 

 

 

 

When we use a Zynq SoC or Zynq UltraScale+ MPSoC in our design we have two options for implementing SPI interfaces:

 

 

  • Use one of the two SPI controllers within the Processing System (PS)
  • Use an AXI Quad SPI (QSPI) IP module, configured for standard SPI communication within the programmable logic (PL)

 

 

Selecting which controller to use comes down to understanding the application’s requirements. Both SPI implementations can support all four SPI modes and both can function as either a SPI master or a SPI slave. However, there are some notable differences, as the table below demonstrates:

 

 

Image2.jpg 

 

 

Initially, we will examine using the SPI controller integrated into the PS. We include this peripheral within our design by selecting the SPI controller within the Zynq MIO configuration tab. For this example, I will route the SPI signals to the Arty Z7 SPI connector, which requires use of the EMIO via the PL I/O.

 

 

Image3.jpg

 

 

Enabling the SPI and mapping to the EMIO

 

 

 

With this selected, all that remains within Vivado is to connect the I/O from the SPI ports. How we do this depends upon whether we want a master or slave SPI implementation. Looking at the available ports on the SPI controller, you will notice there are matching input (marked xxx_i) and output (marked xxx_o) ports for each SPI port. It is very important that we connect these ports correctly for the chosen master or slave implementation; getting the connections wrong will lead to hours of head scratching later when we develop the software, because nothing will work as expected. In addition, there is one slave-select input when the controller is used as a SPI slave and three slave-select pins for use in SPI master mode.

 

Once the I/O is correctly configured and the project built, we configure the SPI controller as either a master or a slave using the SPI configuration options within the application software. To both configure and transfer data using the PS SPI controller, we use the API provided with the BSP, which is defined in xspips.h. In this first example, we will configure the PS SPI controller as the SPI master.
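As a sketch of that initialization flow (hedged: the device ID macro XPAR_XSPIPS_0_DEVICE_ID and the prescaler value are assumptions that depend on your BSP and clocking; check the xparameters.h your project generates), master configuration might look like this:

```c
#include "xparameters.h"   /* device IDs generated by the BSP */
#include "xspips.h"        /* PS SPI controller driver API */

static XSpiPs SpiInstance;

int spi_master_init(void)
{
    /* Look up the driver configuration for the PS SPI controller */
    XSpiPs_Config *Config = XSpiPs_LookupConfig(XPAR_XSPIPS_0_DEVICE_ID);
    if (Config == NULL)
        return XST_FAILURE;

    if (XSpiPs_CfgInitialize(&SpiInstance, Config, Config->BaseAddress) !=
        XST_SUCCESS)
        return XST_FAILURE;

    /* Master mode, with the controller asserting the slave select */
    XSpiPs_SetOptions(&SpiInstance,
                      XSPIPS_MASTER_OPTION | XSPIPS_FORCE_SSELECT_OPTION);

    /* Divide the SPI reference clock down to set the SCLK rate */
    XSpiPs_SetClkPrescaler(&SpiInstance, XSPIPS_CLK_PRESCALE_64);

    return XST_SUCCESS;
}
```

With the controller initialized, the transfer functions discussed below move the actual data.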

 

By default, SPI transfers are 8-bit transfers. However, we can transmit larger 16- or 32-bit words as well. To transmit 8-bit words, we use the type u8 within our C application. For 16- or 32-bit transfers, we use types u16 or u32 respectively for the read and write buffers.

 

At first, this may appear to cause a problem or at least generate a compiler warning because both API functions that perform data transfers require a u8 input for the transmit and receive buffers as shown below:

 

 

s32 XSpiPs_Transfer(XSpiPs *InstancePtr, u8 *SendBufPtr, u8 *RecvBufPtr, u32 ByteCount);

 

s32 XSpiPs_PolledTransfer(XSpiPs *InstancePtr, u8 *SendBufPtr, u8 *RecvBufPtr, u32 ByteCount);

 

 

To address this issue when using u16 or u32 types, we need to cast the buffers to a u8 pointer as demonstrated:

 

 

XSpiPs_PolledTransfer(&SpiInstance, (u8*)&TxBuffer, (u8*)&RxBuffer, 8);

 

 

This allows us to work with transfer sizes of 8, 16, or 32 bits. To demonstrate this, I connected the SPI master example to a Digilent Digital Discovery to capture the transmitted data, with the data width changed on the fly from 8 to 16 bits using the above method in the application software.

 

 

Image4.jpg 

 

Zynq SoC PS SPI Master transmitting four 8-bit words

 

 

Image5.jpg

 

PS SPI Master transmitting four 16-bit words

 

 

The alternative to implementing a SPI interface using the Zynq PS is to implement an AXI Quad SPI (QSPI) IP core within the Zynq PL. Doing this requires setting more options in the Vivado design, which will limit run-time flexibility. Within the AXI QSPI configuration dialog, we can configure the transaction width, frequency, and number of slaves. One of the most important things we also configure here is whether the AXI QSPI IP core will act as a SPI slave or master. To enable a SPI master, you must check the “enable master mode” option. If this module is to operate as a slave, this option must be unchecked to ensure the SPISel input pin is present. When the SPI IP core acts as a slave, this pin must be connected to the master’s slave select.

 

 

 

Image6.jpg

 

Configuring the AXI Quad SPI

 

 

 

As with the PS SPI controller, the BSP provides an API for the AXI QSPI IP core, defined in xspi.h, which we use to develop the application software. I used this API to configure the AXI QSPI as a SPI slave for the second part of the example.
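A sketch of the slave-side setup with that API (hedged: the device ID macro XPAR_SPI_0_DEVICE_ID, the buffer contents, and polled operation are assumptions; the exact macro names come from your project’s xparameters.h) might look like this:

```c
#include "xparameters.h"   /* device IDs generated by the BSP */
#include "xspi.h"          /* AXI Quad SPI driver API */

static XSpi Spi;
static u8 TxBuf[4] = {0xA0, 0xA1, 0xA2, 0xA3};
static u8 RxBuf[4];

int spi_slave_init_and_transfer(void)
{
    XSpi_Config *Config = XSpi_LookupConfig(XPAR_SPI_0_DEVICE_ID);
    if (Config == NULL)
        return XST_FAILURE;

    if (XSpi_CfgInitialize(&Spi, Config, Config->BaseAddress) != XST_SUCCESS)
        return XST_FAILURE;

    /* No XSP_MASTER_OPTION: the core acts as a slave and is selected
     * by the external master via the SPISel pin */
    XSpi_SetOptions(&Spi, 0);

    XSpi_Start(&Spi);
    XSpi_IntrGlobalDisable(&Spi);   /* run the driver in polled mode */

    /* Queue TxBuf; the transfer completes when the master clocks it out */
    return XSpi_Transfer(&Spi, TxBuf, RxBuf, sizeof(TxBuf));
}
```

Note that, unlike a master, the slave cannot initiate anything: the call simply queues the data and the external master’s clock and select drive the exchange.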

 

To demonstrate the AXI QSPI core working properly as a SPI slave once I had created the software, I used Digilent’s Digital Discovery to act as the SPI master, allowing data to be easily transferred between the two.

 

 

 

Image7.jpg

 

 

Transmitting and Receiving Data with the SPI slave. (Blue data is output by the Zynq SPI Slave)

 

 

 

The final design created in Vivado to support both these examples has been uploaded to GitHub.

 

 

 

Image8.jpg 

 

 

Final example block diagram

 

 

 

 

Of course, if you are using a Xilinx FPGA in place of a Zynq SoC or Zynq UltraScale+ MPSoC, it is possible to use a MicroBlaze soft processor with the same AXI QSPI configuration to implement a SPI interface. Just remember to correctly define it as a master or slave.

 

I hope this helps outline how we can create both master and slave SPI systems using the two different approaches.

 

 

Code is available on Github as always.

 

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

First Year E Book here

First Year Hardback here.

 

 

 

MicroZed Chronicles hardcopy.jpg  

 

 

 

Second Year E Book here

Second Year Hardback here

 

 

MicroZed Chronicles Second Year.jpg 

 

 

For a limited time, Digilent is offering a $100 discount (that’s half off) on its Digital Discovery, a USB 24-channel logic analyzer (800Msamples/sec max acquisition rate @ 8 bits) and 16-channel digital pattern generator. (See “$199.99 Digital Discovery from Digilent implements 800Msample/sec logic analyzer, pattern generator. Powered by Spartan-6” for more information on this interesting little instrument.) To get the discount, you need to order at least $500 worth of items in the FPGA category on Digilent’s Web site.

 

 

Digilent Digital Discovery Fall Promo.jpg 

 

 

 

Digilent offers a truly wide selection of development and trainer boards in this category. Officially listed are boards based on Xilinx Spartan-6, Spartan-7, Artix-7, Kintex-7, and Virtex-7 FPGAs. You’ll also find several boards based on the Xilinx Zynq Z-7000 SoC including the Zybo, Arty Z7, ZedBoard, and the Python-based PYNQ-Z1. If you’re into vintage design, you’ll even find some older Xilinx devices on boards in this list including the CoolRunner-II CPLD and the Spartan-3E FPGA.

 

Why the deal on a Digilent Digital Discovery board and how does this deal work? Details in this new Digilent post by Kaitlyn Franz titled “Debugging Done Right.”

 

 

 

 

Free Webinar: Developing Linux Systems on Zynq UltraScale+ MPSoC Embedded Systems Using Yocto. October 6

by Xilinx Employee ‎09-20-2017 10:40 AM - edited ‎09-20-2017 10:42 AM (2,705 Views)

 

Earlier this year, I wrote a blog about a free Doulos Webinar titled “Developing Linux Systems on the Zynq SoC Using Yocto” and more than 13,000 people read that blog, so I’m happy to say that Doulos has updated that Webinar, which is now titled “Developing Linux Systems on Zynq UltraScale+ Using Yocto.” Despite the title, the Webinar appears to cover Linux for both the Zynq SoC and the Zynq UltraScale+ MPSoC. Doulos Senior MTS Simon Goda will be the presenter once again.

 

The webinar will cover:

 

  • A detailed introduction to creating a bootable Linux system
  • How to customize Linux to fit your project’s specific requirements
  • How Xilinx supports Yocto using the meta-xilinx layer
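As a taste of what working with the meta-xilinx layer involves (hedged: the repository layout, branch, and machine name below are illustrative and change between Yocto releases; confirm them against the meta-xilinx README for your release), a build setup looks roughly like this:

```shell
# Fetch the Yocto build system and the Xilinx BSP layer
git clone https://git.yoctoproject.org/poky
git clone https://github.com/Xilinx/meta-xilinx

# Initialize the build environment
source poky/oe-init-build-env build

# Register the BSP layer and pick a Zynq UltraScale+ machine
bitbake-layers add-layer ../meta-xilinx/meta-xilinx-bsp
echo 'MACHINE = "zcu102-zynqmp"' >> conf/local.conf

# Build a minimal bootable image
bitbake core-image-minimal
```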

 

 

Once again, the Webinar is free and will be presented at two different times on October 6 to accommodate the widest number of time zones worldwide.

 

For more information and to register, click here.

 

By the way, it’s still free.

 

Guitar pedal aficionados will love this winning entry in the Xilinx University Program’s Open Hardware 2017 competition, Student Category. It’s a senior-year project by Vladi Litmanovich and Adi Mikler of Tel Aviv University, based on an Avnet ZedBoard Dev Kit. The on-board Zynq Z-7020 SoC accepts audio from an electric guitar, passes it through four effects processors, sums the resulting audio, and ships the audio to a guitar amplifier. The result is a multi-effects processor similar to the stacked pedal processors favored by musicians for the last 50 years to achieve just the right sound for each song.

 

The on-board Zynq SoC implements the multi-effects processor’s user interface, based on the ZedBoard’s switches and LEDs, and implements four real-time audio effects using the Zynq SoC’s programmable logic:

 

  • Distortion and Overdrive
  • Octavelo (an Octaver plus Tremolo)
  • Tremolo
  • Delay

 

Here’s a block diagram of the audio-processing chain:

 

 

Guitar Multi-Effects Processor Block Diagram.jpg

 

 

 

There’s a complete description of the project with implementation details on GitHub.

 

 

Because this is music, a video demo is much better for illustrating these audio effects:

 

 

 

 

 

 

 

 

 

Farhad Fallahlalehzari, an Aldec Application Engineer, has just published a blog on the Aldec Web site titled “Demystifying AXI Interconnection for Zynq SoC FPGA.” So if it’s a mystery, please click over to that blog post so that the topic will no longer mystify.

 

 

About the Author
Steve Leibson is the Director of Strategic Marketing and Business Planning at Xilinx. He started as a system design engineer at HP in the early days of desktop computing, then switched to EDA at Cadnetix, and subsequently became a technical editor for EDN Magazine. He's served as Editor in Chief of EDN Magazine, Embedded Developers Journal, and Microprocessor Report. He has extensive experience in computing, microprocessors, microcontrollers, embedded systems design, design IP, EDA, and programmable logic.