
Hot (and Cold) Stuff: New Spartan-7 1Q Commercial-Grade FPGAs go from -40 to +125°C!

by Xilinx Employee ‎03-22-2017 05:12 PM - edited ‎03-22-2017 05:15 PM (355 Views)

 

There’s a new line in the table for Spartan-7 FPGAs in the FPGA selection guide on page 2 showing an expanded-temperature range option of -40 to +125°C for all six family members. These are “Expanded Temp” -1Q devices. So if you need extreme high-temperature (or low-temperature) operation, you might want to check into these devices. Ask your friendly neighborhood Xilinx or Avnet sales rep.

 

 

Spartan-7 Family Table with 1Q devices.jpg
 

 

 

For more information about the Spartan-7 FPGA product family, see:

 

 

 

 

 

 

 

Scary good 3-minute troubleshooting demo of National Instruments’ VirtualBench All-in-One Instrument

by Xilinx Employee ‎03-22-2017 01:40 PM - edited ‎03-23-2017 07:14 AM (303 Views)

 

National Instruments’ (NI’s) VirtualBench All-in-One Instrument, based on the Xilinx Zynq Z-7020 SoC, combines a mixed-signal oscilloscope with protocol analysis, an arbitrary waveform generator, a digital multimeter, a programmable DC power supply, and digital I/O. The PC- or tablet-based user-interface software allows you to make all of those instruments play together as a troubleshooting symphony. That point is made very apparent in this new 3-minute video demonstrating the speed at which you can troubleshoot circuits using all of the VirtualBench’s capabilities in concert:

 

 

 

 

 

For more Xcell Daily blog posts about the NI VirtualBench All-in-One instrument, see:

 

 

 

 

 

For more information about the VirtualBench, please contact NI directly.

 

Arty Z7: Digilent’s new Zynq SoC trainer and dev board—available in two flavors for $149 or $209

by Xilinx Employee ‎03-22-2017 11:18 AM - edited ‎03-22-2017 04:09 PM (2,120 Views)

 

I’ve known this was coming for more than a week, but last night I got double what I expected. Digilent’s Web site has been teasing the new Arty Z7 Zynq SoC dev board for makers and hobbyists for a week—but with no listed price. Last night, prices appeared. That’s right, there are two versions of the board available:

 

  • The $149 Arty Z7-10 based on a Zynq Z-7010 SoC
  • The $209 Arty Z7-20 based on a Zynq Z-7020 SoC

 

 

Digilent Arty Z7.jpg 

 

Digilent Arty Z7 dev board for makers and hobbyists

 

 

 

Other than that, the board specs appear identical.

 

The first thing you’ll note from the photo is that there’s a Zynq SoC in the middle of the board. You’ll also see the board’s USB, Ethernet, Pmod, and HDMI ports. On the left, you can see double rows of tenth-inch headers in an Arduino/chipKIT shield configuration. There are a lot of ways to connect to this board, which should make it a student’s or experimenter’s dream board considering what you can do with a Zynq SoC. (In case you don’t know, there’s a dual-core ARM Cortex-A9 MPCore processor on the chip along with a hearty serving of FPGA fabric.)

 

Oh yeah. The Xilinx Vivado HL Design Suite WebPACK tools? Those are available at no cost. (So is Digilent’s attractive cardboard packaging, according to the Arty Z7 Web page.)

 

Although the Arty Z7 board has now appeared on Digilent’s Web site, the product’s Web page says the expected release date is March 27. That’s five whole days away!

 

As they say, operators are standing by.

 

 

Please contact Digilent directly for more Arty Z7 details.

 

 

 

 

You want to learn how to design with and use RF, right? Students from all levels and backgrounds looking to improve their RF knowledge will want to take a look at the new ADALM-PLUTO SDR USB Learning Module from Analog Devices. The $149 USB module has an RF range of 325MHz to 3.8GHz with separate transmit and receive channels and 20MHz of instantaneous bandwidth. It pairs two devices that seem made for each other: an Analog Devices AD9363 Agile RF Transceiver and a Xilinx Zynq Z-7010 SoC.

 

 

 

Analog Devices ADALM-PLUTO SDR USB Learning Module.jpg 

 

 

Analog Devices’ $149 ADALM-PLUTO SDR USB Learning Module

 

 

Here’s an extremely simplified block diagram of the module:

 

 

Analog Devices ADALM-PLUTO SDR USB Learning Module Block Diagram.jpg

 

 

Analog Devices’ ADALM-PLUTO SDR USB Learning Module Block Diagram

 

 

However, the learning module’s hardware is of little use without training material, and Analog Devices has already created dozens of online tutorials and teaching materials for this device, covering topics such as ADS-B aircraft position reporting, receiving NOAA and Meteor-M2 weather-satellite imagery, GSM analysis, listening to TETRA signals, and pager decoding.

 

 

Digilent says that its new $199.99 Digital Discovery—a low-cost USB instrument that combines a 24-channel, 800Msamples/sec logic analyzer; a 16-bit, 100Msamples/sec digital pattern generator; and a 100mA power supply—“was created to be the ultimate embedded development companion.” Further, “Its features and specifications were deliberately chosen to maintain a small and portable form factor, withstand use in a variety of environments, and keep costs down, while balancing the requirements of operating on USB Power.”

 

(Note: Skip to the bottom of this blog for a limited-time offer. Then come back.)

 

Here’s a photo of the Digital Discovery:

 

 

Digilent Digital Discovery Module.jpg

 

Digilent’s Digital Discovery—a combined 24-channel, 800Msamples/sec logic analyzer and 16-bit, 100Msamples/sec digital pattern generator

 

 

If that form factor looks familiar, you’re probably reminded of the company’s Analog Discovery and Analog Discovery 2 100Msamples/sec USB DSO, logic analyzer, and power supply. (See “$279 Analog Discovery 2 DSO, logic analyzer, power supply, etc. relies on Spartan-6 for programmability, flexibility.”) And just like the Analog Discovery modules, the Digilent Digital Discovery is based on a Xilinx Spartan-6 FPGA (an LX25), which becomes pretty clear when you look at the product’s board photo. The Spartan-6 FPGA is right there in the center of the board:

 

 

Digilent Digital Discovery Module Board.jpg

 

Digilent’s Digital Discovery is based on a Xilinx Spartan-6 FPGA

 

 

When I write “based on,” what I mean to say is that Digilent’s IP in the Spartan-6 FPGA pretty much implements the entire low-cost instrument—as clearly shown in the block diagram:

 

 

Digilent Digital Discovery Module Block Diagram.jpg

 

 

Digilent’s Digital Discovery Module Block Diagram

 

 

And what does this $199.99 instrument do, considering that it’s implemented using a low-cost FPGA? Here are the specs (and pretty impressive specs they are):

 

 

  • 24-channel, 800Msamples/sec* digital logic analyzer (1.2…3.3V CMOS)
  • 16-channel, 100Msamples/sec pattern generator (1.2…3.3V)
  • 16-channel virtual digital I/O including buttons, switches, and LEDs – perfect for logic training applications
  • Two input/output digital trigger signals for linking multiple instruments (1.2…3.3V CMOS)
  • A programmable 1.2…3.3V/100mA power supply. The same voltage supplies the Logic Analyzer input buffers and the Pattern Generator input/output buffers to maintain logic-level compatibility with the circuit under test.
  • Digital Bus Analyzers (SPI, I²C, UART, Parallel)

 

*Note: to obtain speeds of 200MS/s and higher, the High Speed Adapter must be used.

 

 

So, you may have noted that asterisk on the logic analyzer's maximum sample rate. You need a Digital Discovery High Speed Adapter to attain the full 800Msamples/sec acquisition rate on the logic analyzer. Normally, that’s another $49.99. However, for the first 100 Digital Discovery buyers, Digilent is throwing in the high-speed adapter for free.

 

Operators are standing by.

 

 

 

 

 

Image3.jpg

AEye is the latest iteration of the eye-tracking technology developed by EyeTech Digital Systems. The AEye chip is based on the Zynq Z-7020 SoC and sits immediately adjacent to the imaging sensor, enabling compact, stand-alone systems. This technology is finding its way into diverse vision-guided systems in the automotive, AR/VR, and medical diagnostic arenas. According to EyeTech, the Zynq SoC’s unique abilities allow the company to create products it could not build any other way.

 

With the advent of the reVISION stack, EyeTech is looking to expand its product offerings into machine learning, as discussed in this short, 3-minute video:

 

 

 

 

 

 

For more information about EyeTech, see:

 

 

 

 

Zynq + PYNQ + Python + BNNs: Machine inference does not get any easier… or faster

by Xilinx Employee ‎03-14-2017 03:10 PM - edited ‎03-15-2017 10:25 AM (3,435 Views)

 

Machine learning and machine inference based on CNNs (convolutional neural networks) are the latest way to classify images and, as I wrote in Monday’s blog post about the new Xilinx reVISION announcement, “The last two years have generated more machine-learning technology than all of the advancements over the previous 45 years and that pace isn't slowing down.” (See “Xilinx reVISION stack pushes machine learning for vision-guided applications all the way to the edge.”) The challenge now is to make the CNNs run faster while consuming less power. It would be nice to make them easier to use as well.

 

OK, that’s a setup. A paper published last month at the 25th International Symposium on Field Programmable Gate Arrays titled “FINN: A Framework for Fast, Scalable Binarized Neural Network Inference” describes a method to speed up CNN-based inference while cutting power consumption by reducing CNN precision in the inference machines. As the paper states:

 

…a growing body of research demonstrates this approach [CNN] incorporates significant redundancy. Recently, it has been shown that neural networks can classify accurately using one- or two-bit quantization for weights and activations.  Such a combination of low-precision arithmetic and small memory footprint presents a unique opportunity for fast and energy-efficient image classification using Field Programmable Gate Arrays (FPGAs). FPGAs have much higher theoretical peak performance for binary operations compared to floating point, while the small memory footprint removes the off-chip memory bottleneck by keeping parameters on-chip, even for large networks. Binarized Neural Networks (BNNs), proposed by Courbariaux et al., are particularly appealing since they can be implemented almost entirely with binary operations, with the potential to attain performance in the teraoperations per second (TOPS) range on FPGAs.

 

The paper then describes the techniques developed by the authors to generate BNNs and instantiate them into FPGAs. The results, based on experiments using a Xilinx ZC706 eval kit built around a Zynq Z-7045 SoC, are impressive:

 

When it comes to pure image throughput, our designs outperform all others. For the MNIST dataset, we achieve an FPS which is over 48/6x over the nearest highest throughput design [1] for our SFC-max/LFC-max designs respectively. While our SFC-max design has lower accuracy than the networks implemented by Alemdar et al., our LFC-max design outperforms their nearest accuracy design by over 6/1.9x for throughput and FPS/W respectively. For other datasets, our CNV-max design outperforms TrueNorth for FPS by over 17/8x for CIFAR-10 / SVHN datasets respectively, while achieving 9.44x higher throughput than the design by Ovtcharov et al., and 2.2x over the fastest results reported by Hegde et al. Our prototypes have classification accuracy within 3% of the other low-precision works, and could have been improved by using larger BNNs.

 

There’s something even more impressive, however. This design approach to creating BNNs is so scalable that it’s now on a low-end platform—the $229 Digilent PYNQ-Z1. (Digilent’s academic price for the PYNQ-Z1 is only $65!) Xilinx Research Labs in Ireland, NTNU (Norwegian U. of Science and Technology), and the U. of Sydney have released an open-source Binarized Neural Network (BNN) Overlay for the PYNQ-Z1 based on the work described in the above paper.

 

According to Giulio Gambardella of Xilinx Research Labs, “…running on the PYNQ-Z1 (a smaller Zynq 7020), [the PYNQ-Z1] can achieve 168,000 image classifications per second with 102µsec latency on the MNIST dataset with 98.40% accuracy, and 1,700 images per second with 2.2msec latency on the CIFAR-10, SVHN, and GTSRB datasets, with 80.1%, 96.69%, and 97.66% accuracy respectively, running at under 2.5W.”

 

 

PYNQ-Z1.jpg

 

Digilent PYNQ-Z1 board, based on a Xilinx Zynq Z-7020 SoC

 

 

 

Because the PYNQ-Z1 programming environment centers on Python and the Jupyter development environment, the package includes a number of Jupyter notebooks that demonstrate what the overlay can do through live code running on the PYNQ-Z1 board, along with equations, visualizations, explanatory text, and program results including images.

 

There are also examples of this BNN in practical application:

 

 

 

 

For more information about the Digilent PYNQ-Z1 board, see “Python + Zynq = PYNQ, which runs on Digilent’s new $229 pink PYNQ-Z1 Python Productivity Package.”

 

 

 

 

The amazing “snickerdoodle one”—a low-cost, single-board computer with wireless capability based on the Xilinx Zynq Z-7010 SoC—is once more available for purchase on the Crowd Supply crowdsourcing Web site. Shipments are already going out to existing backers and, if you missed out on the original crowdsourcing campaign, you can order one for the post-campaign price of $95. That’s still a huuuuge bargain in my book. (Note: There is a limited number of these boards available, so if you want one, now’s the time to order it.)

 

In addition, you can still get the “snickerdoodle black” with a faster Zynq Z-7020 SoC and more SDRAM that also includes an SDSoC software license, all for $195. Finally, snickerdoodle’s creator krtkl has added two mid-priced options: the snickerdoodle prime and snickerdoodle prime LE—also based on Zynq Z-7020 SoCs—for $145.

 

 

Snickerdoodle.jpg

The krtkl snickerdoodle low-cost, single-board computer based on a Xilinx Zynq SoC

 

 

 

Ryan Cousins at krtkl sent me this table that helps explain the differences among the four snickerdoodle versions:

 

 

Snickerdoodle table.jpg

 

 

 

For more information about krtkl’s snickerdoodle SBC, see:

 

 

 

 

 

 

 

 

 

 

Today, Keysight Technologies announced its low-cost InfiniiVision 1000 X-Series oscilloscopes with 50MHz and 100MHz models that start at $449. (Apparently, this is Scope Month and Keysight is giving away 125 DSOs in March—click here for more info.) Even at that low price, these 2-channel 1000 X-Series DSOs are based on Keysight’s high-performance MegaZoom IV custom ASIC technology, which enables a high 50,000 waveforms/sec update rate.

 

 

Keysight 1000 X-Series DSO.jpg

 

 

Keysight 1000 X-Series DSO

 

 

 

These new scopes are intended for students and new users. In fact, the two 50MHz models have “EDU” as a model number prefix. Here’s a table listing the salient features of the four models in the 1000 X-Series family:

 

 

Keysight 1000 X-Series DSO Table.jpg

 

 

 

All four models also operate as a serial protocol analyzer, digital voltmeter, and frequency counter. The EDUX1002G and DSOX1102G models include a frequency response analyzer and function generator.

 

So why are you reading about this very nice, new Keysight instrument in the Xilinx Xcell Daily blog? Well, Keysight supplied one of these new DSOs to my good friend Dave Jones at eevblog.com and of course, he did a teardown; and of course, he found a Xilinx FPGA inside. That's why.

 

Here’s Dave Jones’ 30-minute teardown video of the new Keysight 1000 X-series DSO:

 

 

 

 

 

This DSO features a 2-board construction. The main board is mostly analog and a small daughter board contains the high-speed ADC, the Keysight MegaZoom IV ASIC, an STMicroelectronics SPEAR600 application processor, and a low-end Xilinx Spartan-3E XC3S500E FPGA. That’s a lot of processing power in a very inexpensive DSO family that starts at $449.

 

Here’s Dave’s closeup shot of that digital daughtercard showing the Xilinx FPGA positioned between the SPEAR600 processor and the Keysight MegaZoom IV ASIC (under the finned heat sink):

 

 

 

Keysight 1000 X-Series DSO Digital Daughtercard.jpg 

 

 

Now the Spartan-3E FPGA is not a particularly new device; it was announced more than a decade ago. The Spartan-3E FPGA family was designed from the start to be cost-effective, which is no doubt why Keysight used it in this DSO design. Its appearance in a just-introduced product is a testament to Xilinx’s ongoing commitment to device availability over the long term.

 

 

Please contact Keysight directly for more information about the InfiniiVision 1000 X-Series DSOs.

 

 

 

With a month left in the Indiegogo funding period, the MATRIX Voice open-source voice platform campaign stands at 289% of its modest $5000 funding goal. MATRIX Voice is the third crowdfunding project by MATRIX Labs, based in Miami, Florida. The MATRIX Voice platform is a 3.14-inch circular circuit board capable of continuous voice recognition and compatible with the latest cloud-based cognitive voice services including Microsoft Cognitive Services, Amazon Alexa Voice Service, Google Speech API, Wit.ai, and Houndify. The MATRIX Voice board, based on a Xilinx Spartan-6 LX4 FPGA, is designed to plug directly onto a low-cost Raspberry Pi single-board computer, or it can operate as a standalone board. You can get one of these boards, due to ship in May, for as little as $45—if you’re quick. (Already, 61 of the 230 early-bird special-price boards are pledged.)

 

Here’s a photo of the MATRIX Voice board:

 

 

MATRIX Voice board.jpg

 

 

This image of the top of the MATRIX Voice board shows the locations for the seven rear-mounted MEMS microphones, seven RGB LEDs, and the Spartan-6 FPGA. The bottom of the board includes a 64Mbit SDRAM and a connector for the Raspberry Pi board.

 

Because this is the latest in a series of developer boards from MATRIX Labs (see last year’s project: “$99 FPGA-based Vision and Sensor Hub Dev Board for Raspberry Pi on Indiegogo—but only for the next two days!”), there’s already a sophisticated, layered software stack for the MATRIX Voice platform that includes a HAL (Hardware Abstraction Layer) with the FPGA code and C++ library, an intermediate layer with a streaming interface for the sensors and vision libraries (for the Raspberry Pi camera), and a top layer with the MATRIX OS and high-level APIs. Here’s a diagram of the software stack:

 

 

MATRIX Voice Software Stack.jpg 

 

And now, who better to describe this project than the originators:

 

 

 

 

 

 

 

 

Innovative Integration’s XA-160M PCIe XMC module suits applications that require high-speed data acquisition and real-time signal processing. The module provides two 16-bit TI ADC16DV160 160Msamples/sec ADCs for high-speed analog input signals and an Analog Devices AD9122 16-bit dual DAC capable of operating at 1200Msamples/sec for driving high-speed analog outputs. The ADCs and DACs are coupled to a Xilinx Artix-7 XC7A200T FPGA (that’s the largest member of the Artix-7 device family with 215,360 logic cells and 740 DSP48 slices). The FPGA also manages 1Gbyte of on-board DDR3 SDRAM and implements PCIe Gen2 x4 and Aurora interfaces for the XMC module’s P15 and P16 connectors respectively. You can use the uncommitted programmable logic on the Artix-7 FPGA for high-speed, real-time signal processing.

 

 

 

Innovative Integration XA-160M XMC Module.jpg

 

Innovative Integration’s XA-160M PCIe XMC Module

 

 

 

Here’s a block diagram that makes everything clear:

 

 

Innovative Integration XA-160M XMC Module Block Diagram.jpg

 

 

Innovative Integration’s XA-160M PCIe XMC Module Block Diagram

 

 

 

Innovative Integration suggests that you might want to consider using the XA-160M PCIe XMC Module for:

 

 

  • Stimulus/response measurements
  • High-speed servo controls
  • Arbitrary Waveform Generation
  • RADAR
  • LIDAR
  • Optical Servo
  • Medical Scanning

 

 

You can customize the logic implemented in the Artix-7 FPGA using VHDL and MathWorks’ MATLAB with the company’s FrameWork Logic toolset. The MATLAB BSP supports real-time, hardware-in-the-loop development using the graphical Simulink environment along with the Xilinx System Generator for DSP, which is part of the Xilinx Vivado Design Suite.

 

Please contact Innovative Integration directly for more information about the XA-160M PCIe XMC Module.

 

 

 

Red Pitaya has been offering its namesake Zynq-SoC-based open instrumentation platform as a packaged STEMlab with analog and digital probes, power supply, and enclosure. STEMlab prices range from €249.00 to €499.00 depending on options. To support this hardware, Red Pitaya distributes apps, and the latest one turns the STEMlab into a combined 40MHz digital oscilloscope and 50MHz signal generator.

 

Here are the scope specs:

 

 

Red Pitaya STEMlab Scope Specs.jpg

 

 

Red Pitaya STEMlab DSO Specs

 

 

 

And here are the signal generator specs:

 

 

 

Red Pitaya STEMlab Signal Generator Specs.jpg

 

 

Red Pitaya STEMlab Signal Generator Specs

 

 

 

For more articles about the Zynq-based Red Pitaya, see:

 

 

 

The Koheron SDK and Linux distribution, based on Ubuntu 16.04, allows you to prototype working instruments for the Red Pitaya Open Instrumentation Platform, which is based on a Xilinx Zynq All Programmable SoC. The Koheron SDK outputs a configuration bitstream for the Zynq SoC along with the requisite Linux drivers, ready to run under the Koheron Linux Distribution. You build the FPGA part of the Zynq SoC design by writing the code in Verilog using the Xilinx Vivado Design Suite and assembling modules using TCL.

 

The Koheron Web site already includes several instrumentation examples based on the Red Pitaya including an ADC/DAC exerciser, a pulse generator, an oscilloscope, and a spectrum analyzer. The Koheron blog page documents several of these designs along with many experiments designed to be conducted using the Red Pitaya board. If you’re into Python as a development environment, there’s a Koheron Python library as well.

 

There’s also a quick-start page on the Koheron site if you’re in a hurry.

 

 

 

Red Pitaya Open Instrumentation Platform small.jpg 

 The Red Pitaya Open Instrumentation Platform

 

 

 

For more articles about the Zynq-based Red Pitaya, see:

 

 

 

 

Last September, Xilinx announced the six members of the 28nm Spartan-7 FPGA family for “cost-sensitive” designs (that’s marketing-speak for “low-cost”) and for designs that require small-footprint devices. (The two smallest members of the Spartan-7 family will be offered in 8x8mm CPGA196 packages with 100 user I/O pins.)

 

 

Spartan-7 FPGA Family Table v2.jpg

 

 

 

There’s a new 15-minute video with a quick technical overview of the Spartan-7 family:

 

 

 

 

And you can download the 50-page Advance Product Specification here.

 

A Very Short Conversation with Ximea about Subminiature Video Cameras and Very Small FPGAs

by Xilinx Employee ‎02-02-2017 01:00 PM - edited ‎02-02-2017 09:54 PM (1,803 Views)

 

Yesterday at Photonics West, my colleague Aaron Behman and I stopped by the Ximea booth and had a very brief conversation with Max Larin, Ximea’s CEO. Ximea makes a very broad line of industrial and scientific cameras and a lot of them are based on several generations of Xilinx FPGAs. During our conversation, Max removed a small pcb from a plastic bag and showed it to us. “This is the world’s smallest industrial camera,” he said while palming a 13x13mm board. It was one of Ximea’s MU9 subminiature USB cameras based on a 5Mpixel ON Semiconductor (formerly Aptina) MT9P031 image sensor. Ximea’s MU9 subminiature camera is available as a color or monochrome device.

 

Here are front and back photos of the camera pcb:

 

 

Ximea MU9 Subminiature Camera.jpg

 

Ximea 5Mpixel MU9 subminiature USB camera

  

 

As you can see, the size of the board is fairly well determined by the 10x10mm image sensor, its bypass capacitors, and a few other electronic components mounted on the front of the board. Nearly all of the active electronics and the camera’s I/O connector are mounted on the rear. A Cypress CY7C68013 EZ-USB Microcontroller operates the camera’s USB interface and the device controlling the sensor is a Xilinx Spartan-3 XC3S50 FPGA in an 8x8mm package. FPGAs with their logic and I/O programmability are great for interfacing to image sensors and for processing the video images generated by these sensors.

 

Our conversation with Max Larin at Photonics West got me to thinking. I wondered, “What would I use to design this board today?” My first thought was to replace both the Spartan-3 FPGA and the USB microcontroller with a single- or dual-core Xilinx Zynq SoC, which can easily handle all of the camera’s functions including the USB interface, reducing the parts count by one “big” chip. But the Zynq SoC family’s smallest package size is 13x13mm—the same size as the camera pcb—and that’s physically just a bit too large.

 

The XC3S50 FPGA used in this Ximea subminiature camera is the smallest device in the Spartan-3 family. It has 1728 logic cells and 72Kbits of BRAM. That’s a lot of programmable capability in an 8x8mm package even though the Spartan-3 FPGA family first appeared way back in 2003. (See “New Spartan-3 FPGAs Are Cost-Optimized for Design and Production.”)

 

There are two newer Spartan FPGA families to consider when creating a design today, Spartan-6 and Spartan-7, and both device families include multiple devices in 8x8mm packages. So I decided to see how much I might pack into a more modern FPGA with the same pcb real-estate footprint.

 

The simple numbers from the data sheets tell part of the story. A Spartan-3 XC3S50 provides you with 1728 logic cells, 72Kbits of BRAM, and 89 I/O pins. The Spartan-6 XC6SLX4, XC6SLX9, and XC6SLX16 provide you with 3840 to 14,579 logic cells, 216 to 576Kbits of BRAM, and 106 I/O pins. The Spartan-7 XC7S6 and XC7S15 provide 6000 to 12,800 logic cells, 180 to 360Kbits of BRAM, and 100 I/O pins. So both the Spartan-6 and Spartan-7 FPGA families provide nice upward-migration paths for new designs.

 

However, the simple data-sheet numbers don’t tell the whole story. For that, I needed to talk to Jayson Bethurem, the Xilinx Cost Optimized Portfolio Product Line Manager, and get more of the story. Jayson pointed out a few more things.

 

First and foremost, the Spartan-7 FPGA family offers a 2.5x performance/watt improvement over the Spartan-6 family. That’s a significant advantage right there. The Spartan-7 FPGAs are significantly faster than the Spartan-6 FPGAs as well. Spartan-6 devices in the -1L speed grade have a 250MHz Fmax versus 464MHz for Spartan-7 -1 or -1L parts. The fastest Spartan-6 devices in the -3 speed grade have an Fmax of 400MHz (still not as fast as the slowest Spartan-7 speed grade) and the fastest Spartan-7 FPGAs, the -2 parts, have an Fmax of 628MHz. So if you feel the need for speed, the Spartan-7 FPGAs are the way to go.

 

I’d be remiss not to mention tools. As Jayson reminded me, the Spartan-7 family gives you entrée into the world of Vivado Design Suite tools. That means you get access to the Vivado IP catalog and Vivado’s IP Integrator (IPI) with its automated integration features. These are two major benefits.

 

Finally, some rather sophisticated improvements to the Spartan-7 FPGA family’s internal routing architecture means that the improved placement and routing tools in the Vivado Design Suite can pack more of your logic into Spartan-7 devices and get more performance from that logic due to reduced routing congestion. So directly comparing logic cell numbers between the Spartan-6 and Spartan-7 FPGA families from the data sheets is not as exact a science as you might assume.

 

The nice thing is: you have plenty of options.

 

 

For previous Xcell Daily blog posts about Ximea industrial and scientific cameras, see:

 

 

 

 

 

Late last year at the SPS Drives show in Germany, BE.services demonstrated a vertically integrated solution stack for industrial controls running on a Zynq-based Xilinx ZC702 Eval Kit. The BE.services industrial automation stack for Industry 4.0 and IIoT applications includes:

 

 

  • Linux plus OSADL real-time extensions
  • POWERLINK real-time, industrial Ethernet
  • CODESYS SoftPLC
  • Matrikon OPC UA machine-to-machine protocol for industrial automation
  • Ethernet TSN (time-sensitive networking) with hardware IP support from Xilinx
  • Kaspersky security

 

 

This stack delivers the four critical elements you need when developing smart manufacturing controllers:

 

  • Smart control
  • Real-time operation
  • Connectivity
  • Security

 

 

Industry 4.0 elements.jpg 

 

 

Here’s a 3-minute demo of that system with explanations:

 

 

 

 

The fundamental advantage to using a Xilinx Zynq SoC with its on-chip FPGA array for this sort of application is deterministic response, explains Dimitri Philippe in the video. Philippe is the founder and CEO of BE.services. Programmable logic delivers this response with hardware-level latencies instead of software latencies that are orders of magnitude slower.

 

 

Note: You can watch a recent Xilinx Webinar on the use of OPC UA and TSN for IIoT applications by clicking here.

 

 

Adam Taylor’s MicroZed Chronicles, Part 166: Interfacing to Servos – Generating PWM signals

by Xilinx Employee ‎01-23-2017 09:51 AM - edited ‎01-30-2017 08:56 AM (2,255 Views)

 

By Adam Taylor

 

Many embedded systems are required to generate PWM (pulse-width modulated) signals, so I thought it would be a good idea to explain how you can do this using the Zynq SoC. I am going to show how to interface with a servomotor as a real-world example. Servomotors are used in applications such as robotics and advanced manufacturing. The way we interface with different servomotors varies depending on use, type, and cost. A servomotor interface can be either digital—in the form of a CAN or Modbus connection—or analog—like a PWM signal as mentioned above.

 

For this example, we are going to interface to a simple RC servo, which uses an analog PWM signal to drive the motor’s position. We can generate PWM signals directly from the Zynq SoC’s PS (processor system) using the Zynq SoC’s TTC (Triple Timer Counter).

 

Reviewing the TTC’s architecture, you will see that each of the three timers can generate a Wave-Out. For the first TTC, Wave-Out can be output either via the MIO or EMIO pins on the Zynq SoC, but the remaining two counters can only output this signal via EMIO.

 

 

Image1.jpg

 

 

For this example, we will be using the MicroZed board mounted on an I/O Carrier Card with the servos connected to one of the carrier card’s PMOD ports. Therefore, I will use the EMIO pins for all six TTC wave outputs, which means that we can drive up to six servos. In this example, I will use only two servos, which is enough to demonstrate that I can use the software to ensure that the servos do not conflict with each other. Scaling up from two servos to six, or any number in between, is then very straightforward.

 

As always, the first thing we need to do is create a hardware design within Vivado. After three years of the MicroZed Chronicles, you should be very familiar with this process. Create a new project in Vivado and add a block diagram with the Zynq Processing System contained within. With this added, we can quickly and easily set up the two available TTCs and enable the wave outs to be routed via the EMIO. It is then a simple case of directing these wave outputs to the Zynq PL’s I/O using a constraints file.

 

 

 

Image2.jpg

 

 

Enabling the TTCs and directing them to EMIO

 

 

Image3.jpg 

A very simple Zynq system for driving the Servo

 

 

 

With the bit file generated and exported for use in SDK, all we need to do now is understand how to drive the servo. We know it will use the PWM output for control, but we don’t know the specifics. The interface with the servo is very simple. Most RC servos have three wires: two of these are power and ground, while the third is the control wire used to command the servo’s position. We use this wire to drive the motor to a position between -90 and +90 degrees, with 0 degrees being the neutral position. We command the servo to change its position by generating a PWM signal with a 20msec period between pulses. The pulse’s duty cycle is very low, varying from 2.5% to 12.5%. If we wish the servo to remain in the neutral position of 0 degrees, we must generate a 1.5msec pulse every 20msec. Moving to either extreme of +/-90 degrees is achieved by adding 1msec to, or subtracting 1msec from, the neutral-position pulse width.
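The sketch below is only the timing arithmetic above expressed in Python for a quick sanity check—it is not the Zynq application code—and it assumes exactly the numbers quoted: a 20msec period, a 1.5msec neutral pulse, and ±1msec of swing for ±90 degrees.

```python
# Sanity check of the servo timing described above: 20 msec PWM period,
# 1.5 msec neutral pulse, +/-1 msec swing for +/-90 degrees.
PERIOD_MS = 20.0    # time between pulses
NEUTRAL_MS = 1.5    # pulse width for 0 degrees (neutral)
SWING_MS = 1.0      # width added/subtracted at +/-90 degrees

def pulse_width_ms(angle_deg):
    """Pulse width in msec for a commanded angle between -90 and +90 degrees."""
    if not -90.0 <= angle_deg <= 90.0:
        raise ValueError("angle out of range")
    return NEUTRAL_MS + SWING_MS * (angle_deg / 90.0)

def duty_cycle_pct(angle_deg):
    """PWM duty cycle (percent) for a commanded angle."""
    return 100.0 * pulse_width_ms(angle_deg) / PERIOD_MS

for angle in (-90, 0, +90):
    print(angle, pulse_width_ms(angle), duty_cycle_pct(angle))
# Prints 0.5 msec (2.5%), 1.5 msec (7.5%), and 2.5 msec (12.5%),
# matching the duty-cycle range quoted above.
```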

 

In the next blog we will look at the software we need to generate this interface.

 

 

Code is available on Github as always.

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E Book here
  • First Year Hardback here.

 

 

MicroZed Chronicles hardcopy.jpg 

 

 

 

  • Second Year E Book here
  • Second Year Hardback here

 

 

 

MicroZed Chronicles Second Year.jpg 

 

 

 

All of Adam Taylor’s MicroZed Chronicles are cataloged here.

 

 

 

 

 

 

 

 

 

 

 

Three years ago, Opal Kelly introduced an enhanced version of its XEM6310 USB 3.0 FPGA module based on the Xilinx Spartan-6 FPGA family. (See “Need a fast way to develop complex, real-time USB 3.0 peripheral devices? This 60x75mm FPGA-based module can help.”) Now, the company has revamped the product using Xilinx Artix-7 FPGAs to create the XEM7310 USB 3.0 integration module.

 

 

Opal Kelly XEM7001 based on Xilinx Artix-7 FPGA.jpg

 

 

Opal Kelly’s new XEM7310 USB 3.0 Integration Module is based on the Xilinx Artix-7 FPGA family

 

 

 

It has the same, diminutive 50x75mm footprint and the same set of dual 80-pin high-density connectors on the bottom, but the silicon resources on the board have been seriously upgraded.

 

First, the two available Artix-7 FPGAs on the XEM7310 board (Artix-7 A75 or A200 devices) provide significantly more programmable-logic resources than the Spartan-6 devices available on the older XEM6310. Here’s a chart showing what the XEM7310 provides:

 

 

Opal Kelly XEM7310 FPGA Chart.jpg

 

 

Next, the XEM7310 board incorporates eight times as much SDRAM (1Gbyte of DDR3 SDRAM versus 128Mbytes of DDR2 SDRAM).

 

The XEM7310 board’s system clock is 200MHz rather than 100MHz for the older product.

 

One thing’s similar: the new board has complete APIs in C, C++, C#, Ruby, Python, and Java like the earlier XEM6310.

 

The XEM7310-A75 sells for $549.95 and the XEM7310-A200 sells for $799.95, in unit quantities.

 

 

Avnet has just announced the 1x1 version of its PicoZed SDR 2x2 SOM that you can use for rapid development of software-defined radio applications. The 62x100mm form factor for the PicoZed SDR 1x1 SOM is the same as that used for the 2x2 version but the PicoZed SDR 1x1 SOM uses the Analog Devices AD9364 RF Agile Transceiver instead of the AD9361 used in the PicoZed SDR 2x2 SOM. Another difference is that the 2x2 version of the PicoZed SDR SOM employs a Xilinx Zynq Z-7035 SoC and the 1x1 SOM uses a Zynq Z-7020 SoC.

 

 

 

Avnet PicoZed SDR 1x1.jpg

 

 

Avnet’s Zynq-based PicoZed SDR 1x1 SOM

 

 

 

One final difference: The Avnet PicoZed SDR 1x1 sells for $549 and the PicoZed SDR 2x2 sells for $1095. So if you liked the idea of the original PicoZed SDR SOM but wished for a lower-cost entry point, your wish is granted, with immediate availability.

 

 

 

By Adam Taylor

 

 

As I discussed last week, one method we can use to reduce the power is to put the Zynq SoC in low-power mode when we detect that the system is idle. The steps required to enter and leave the low-power mode appear in the technical reference manual (section 24.4 of UG585). However, it’s always good to see an actual example to understand the power savings we get by entering this mode.

 

We’ll start with a running system. The current draw (344.9mA) appears on the DMM’s display in the upper left part of this image:

 

 

Image1.jpg 

 

MicroZed with the DMM measuring current

 

 

 

We follow these steps from the TRM to place the Zynq SoC’s ARM Cortex-A9 processor into sleep mode:

 

  1. Configure the wake-up source. In this case, it’s the GPIO pushbutton.
  2. Enable standby mode and dynamic clock gating.
  3. Place the DDR SDRAM into self-refresh mode.
  4. Place the PLLs into bypass mode before powering them down and setting the clock divisor.
  5. Execute the WFI Instruction to wait for the wake-up signal.

 

 

Implementing most of these steps requires that we use the standard Xil_Out32() approach to modify the desired register contents, as we have done in many examples throughout this blog series. There are, however, some registers we need to interact with using inline assembly language. We will now look at this in more detail because it is a little different from using the Xil_OutXX() functions.

 

We need to use assembler for two reasons: the first is to interact with the CP15 co-processor registers and the second is to execute the wait-for-interrupt (WFI) instruction. You will notice that the CP15 registers are not defined within the TRM, so no address-space details are provided. However, we can still access them from within our SDK application.

 

We’ll use a bare-metal approach to demonstrate how we enter sleep mode. The generated BSP will provide the functions and macros to interact with the Zynq SoC’s CP15 registers. There are three files that we need to use:

 

  • xpseudo_asm.h – Selects the correct header file for the tool chain being used
  • xpseudo_asm_gcc.h – Contains macros for using inline assembler within the GNU tool chain
  • xreg_cortexa9.h – Contains definitions of all registers within the Zynq SoC

 

We can use two macros contained within xpseudo_asm_gcc.h to perform the accesses we need to make to the CP15 power-control register. These are the macros MFCP, which allows us to read a register, and MTCP, which allows us to write to a register. We can find the address of the register we want to target within the file xreg_cortexa9.h, as shown in the image below:

 

 

Image2.jpg 

 

Actual code within the power-down application

 

 

 

The last element we need is the final WFI instruction to wait for the wake-up source interrupt. Again, we use inline assembler, just as we did previously when we issued the SEV instruction to wake up the second processor as part of the AMP example.

 

 

 

Image3.jpg

 

 

Defining the WFI instruction



When I put all of this together and ran the code on the MicroZed dev board, I noted a 100mA drop in the overall current draw, which equates to a 29% drop in power—from 1.72W to 1.22W. You can see the overall effect in this image. Note the new reading on the DMM.

 

 

Image4.jpg

 

 

Resultant current consumption after entering sleep mode

 

 

 

This is a considerable reduction in power. However, you may be surprised it is not more. Remember that we still have elements of the Zynq SoC powered up. We can power these elements down as well to achieve an even lower power dissipation. For example, we can power down the Zynq SoC’s PL. While powering down the PL results in a longer wake-up time, because the PL must be reconfigured after waking up, the resulting power saving is greater. This does require that we design the board’s power architecture to provide the ability to power down specific voltage rails.

 

Next week we will look at how we can develop our PL application for lower power dissipation in operation.

 

 

 

 

Code is available on Github as always.

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E Book here
  • First Year Hardback here.

 

 

MicroZed Chronicles hardcopy.jpg 

 

 

 

  • Second Year E Book here
  • Second Year Hardback here

 

 

MicroZed Chronicles Second Year.jpg 

 

 

 

 

All of Adam Taylor’s MicroZed Chronicles are cataloged here.

 

 

 

 

 

 

As a winter-break project, Edward at MicroCore Labs ported his FPGA-based MCL86 8088 processor core to a vintage IBM PCjr. The MCL86 core combined with the minimum-mode 8088 BIU consumes 1.5% of the smallest Kintex-7 FPGA, the K70T. Four of the Kintex-7 FPGA’s 135 block RAMs hold the processor’s microcode. Edward disabled the cycle-accuracy governor in the core and added 128Kbytes of internal RAM using the same Kintex-7 FPGA that implements the processor. The result: the world’s fastest IBM PCjr, running Microsoft DOS 2.1.

 

 

 

IBM PCjr sped up by Kintex-7.jpg

 

 

IBM PCjr sped up by a MicroCore Labs MCL86 processor core implemented in a Kintex-7 FPGA

 

 

 

Will the world beat a path to Edward’s door for fast antiquated personal computers? Probably not. The PCjr, code name “Peanut,” was IBM’s ill-fated attempt to enter the home market with a cost-down version of the original IBM PC. When it was announced on November 1, 1983, the entire market quickly developed a “Peanut” allergy. Adjectives such as “toylike,” “pathetic,” and “crippled” were used to describe the PCjr in the press.

 

The machine’s worst feature, the one to come under the most criticism, was its “Chiclet” keyboard (named after a chewing gum with a shape similar to the keyboard’s keys). IBM had gone from making the world’s best keyboard on the IBM PC to the world’s worst on the PCjr. After a year and a half of sales that dropped off a cliff as soon as the discounts ended, IBM killed the machine.

 

So what’s the point of MicroCore’s Franken-Peanut then?

 

It nicely demonstrates the vast implementation power of even the smallest Xilinx FPGAs. MicroCore Labs’ MCL86 processor core easily fits in low-cost FPGAs from the Spartan-6 and Spartan-7 product lines.

 

Finally, here’s a very short video of MicroCore’s jazzed-up IBM PCjr playing a little Bach:

 

 

 

 

By Adam Taylor

 

I thought I would kick off the new year with a few blogs that look at the Zynq SoC’s power-management options. These options are important for many Zynq-based systems that are designed to run from battery power or other constrained power sources.

 

There are several elements of the design we can look at, from the system and board level down to the PS and PL levels inside of the Zynq SoC. At the system level, we can look at component selection. We can use low-voltage devices wherever possible because they will have a lower static power consumption. We can also use lower-power DRAM by selecting components like LPDDR in place of DDR2. One of the simpler choices would be selecting a single-core Zynq SoC as opposed to a dual-core device.

 

Within the Zynq SoC itself, there are several things we can do both within the PS and PL to reduce power. There are two categories we can consider when it comes to reducing power consumption in Zynq-based systems:

 

  1. Entering a low-power standby mode in which application execution is stopped. This is achieved by placing the Zynq PS in sleep mode and powering down the PL.
  2. Optimizing the PS and the PL to reduce power during operation.

 

The first option allows us to reduce the power consumption after we have detected that the system has been inactive for a period and should therefore enter a low-power mode to prolong operating life on a battery charge. The second option allows us to make the best use of the battery capacity while operating. I will demonstrate the savings to be had with the Zynq SoC’s sleep mode and how to enter it in a follow-up blog. For the moment, I want to look at what we can do within the Zynq SoC’s PS to reduce power consumption. Most of these techniques relate to how we configure the clocking architecture within the PS.

 

As you can see in the diagram below, the Zynq SoC’s clocking architecture is very flexible. We can use this flexibility to reduce the power consumption of the Zynq PS.

 

 

Image1.jpg

 

 

Zynq SoC Clocking Architecture

 

 

The first approach we can take is to trade off performance against power consumption. We can reduce the power consumption within the Zynq SoC’s PS simply by selecting a lower APU frequency. Of course, this also reduces APU performance; as engineers, one of our roles is to understand the overall system requirements and balance them. Because CMOS power dissipation is frequency dependent, lowering the APU frequency has the potential to significantly reduce PS power dissipation. We can also use the same trade-off with the DDR SDRAM, trading memory bandwidth for reduced power.

 

 

Image2.jpg

 

 

Clock Configuration in the Zynq SoC – Reducing the APU frequency

 

 

 

Along with reducing the frequency of the APU, we can also implement a clocking scheme that reduces the number of PLLs used within the PS. The Zynq PS has three available PLLs named the ARM, IO, and DDR PLL. The clocking architecture allows downstream connections to use any one of the PLL sources, so a clocking scheme that uses fewer than all three PLLs results in lower power dissipation as unused PLLs can be disabled and their power consumption eliminated.

 

In addition, the application being developed may not require the use of all peripherals within the PS. We can therefore use the Zynq SoC’s clock-gating facilities to reduce power consumption by not clocking unused peripherals, further reducing the power consumption of the PS within the Zynq.

 

I performed a very simple experiment with a MicroZed board by inserting an ammeter into the USB power port supplying power to the MicroZed. This is a simple way to monitor the board’s overall power consumption. Running the Zynq PS alone with no design in the programmable logic, the MicroZed drew a current of 364mA @ 5V (1.825W) with the default MicroZed configuration.

I ran a few simple experiments to see the effect on the overall power consumption of reducing the APU clock frequency from 666MHz to 250MHz and of clocking the design from only one PLL—the DDR PLL. Running just from the DDR PLL reduced the current consumption to 308mA, a 16% reduction. However, I did have to de-activate the unused PLLs myself in my application. Reducing the frequency of the APU alone only reduced the overall current consumption to 345mA, a 6% reduction. So we see that turning off unused PLLs can have a big effect on power consumption.

 

If we want to gate the clocks to unused peripherals within the PS, we can use the Zynq SoC’s APER register to disable the clocks to that peripheral.

 

 

 

Image3.jpg

 

APER Control Register Bits

 

 

For a final experiment, I relocated the program to execute from the Zynq SoC’s on-chip RAM and disabled the DDR memory. For many applications, this may not be feasible but for some it may, so I thought it worthy of a test. Relocating the code further reduced the current consumption to 270mA (a 26% reduction) when combined with peripheral gating, APU frequency reduction, and running from one PLL alone.

 

Next time we will look at how we can place the processor into sleep mode.

 

Baumer’s new line of intelligent LX VisualApplets industrial cameras delivers image and video processing with image resolutions up to 20Mpixels at high frame rates. The cameras contain sufficient FPGA-accelerated, image-processing power to perform real-time image pre-processing according to application-specific programming created using Silicon Software’s VisualApplets graphical programming environment. This pre-processing improves an imaging system’s throughput and real-time response while reducing the amount of data uploaded to a host.

 

 

Baumer LX VisualApplets camera.jpg 

 

Baumer intelligent LX VisualApplets industrial camera

 

 

 

The Baumer LX VisualApplets cameras perform this pre-processing in camera using an internal Xilinx Spartan-6 LX150 FPGA and 256Mbytes of DDR3 SDRAM. The cameras support a GigE Vision interface over 100m of cable. Coincidentally, these new industrial cameras recently won a Platinum-level award in the Vision Systems Design 2016 Innovators Awards Program.

 

The LX VisualApplets industrial camera product family includes seven models with sensor resolutions ranging from 2Mpixels to 20Mpixels, all based on CMOSIS image sensors. Here’s a table listing details for the seven 2D and 3D models in the product family:

 

 

Baumer LX VisualApplets camera table.jpg 

 

 

Here’s a lighthearted, 3.5-minute whiteboard video that concisely describes the advantages of in-camera, FPGA-based, image-stream pre-processing:

 

 

 

 

 

By Adam Taylor

 

To wrap up this blog for the year, we are going to complete the SDSoC integration using the shared library.

 

To recap, we have generated a bit file using the Xilinx SDSoC development environment that implements the matrix multiply example using the PL (programmable logic) on the base PYNQ platform, which we previously defined using SDSoC. The final task is to get it all integrated, and the first step is to upload the following files to the PYNQ board:

 

  • .bit – The bit file for the programmable logic
  • .tcl – The TCL description of the block diagram with address ranges
  • .so – The generated shared library

 

The names are slightly different as I generated them as part of the previous blog.

 

Using a program like WinSCP, I uploaded these three files to the PYNQ bitstream directory—the same place to which we uploaded our previous design.

 

 

Image1.jpg

 

 

The next step is to develop the Jupyter notebook so that we can drive the new overlay that we have created. To get this up and running we need to do the following:

 

  • Download and verify the overlay
  • Create an MMIO class to interface with the existing block RAM which remains in the overlay
  • Create a CFFI class to interface with the shared library
  • Write a simple example to interface the overlay using the MMIO and CFFI classes

 

This is very similar to what we have done previously, with the exception of creating the CFFI class, so that is where the rest of this blog will focus.
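For reference, the first two steps in the list above look much like the overlay work from earlier blogs. The sketch below assumes the pynq package’s Overlay and MMIO classes behave as in the current release and uses placeholder values for the bit-file name and for the block RAM controller’s base address and range—take the real values from your own project and its address map.

```python
# Download the SDSoC-built overlay, then talk to the block RAM that remains
# in the design through the MMIO class. The bit-file name, base address, and
# address range below are placeholders for illustration only.
from pynq import Overlay, MMIO

BITFILE = "/home/xilinx/pynq/bitstream/mmult.bit"  # assumed name of the uploaded bit file
BRAM_BASE = 0x40000000                             # assumed AXI BRAM controller base address
BRAM_RANGE = 0x2000                                # assumed address range in bytes

overlay = Overlay(BITFILE)
overlay.download()            # program the Zynq SoC's PL with the overlay
print(overlay.is_loaded())    # confirm the overlay is resident

bram = MMIO(BRAM_BASE, BRAM_RANGE)
bram.write(0x0, 42)           # simple write/read-back check of the first word
print(hex(bram.read(0x0)))
```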

 

The first thing we need to know is the name of the function within the shared library, because SDSoC creates a name that differs from that of the original accelerated function. We can find the renamed files under <project>/<build config>/_sds/swstubs while the hardware files are under <project>/<build config>/_sds/p0/ipi.

 

If you already have the shared library on your PYNQ board, then you can use the command nm -D <path & shared library name> to examine its contents if you access the PYNQ via an SSH session.

 

With the name of the function known, we can create the CFFI class within our Jupyter notebook. In the class for this example, we need to create two functions: one for initialization and another to interact with the library. The more complicated of the two is the initialization, in which we must define the location of the shared library within the file system. (As mentioned earlier, I uploaded the shared library to the same location as the bit and TCL files.) We also need to declare the functions contained within the shared library and then, finally, open the shared library.

 

 

Image2.jpg 

 

The second function within the class is what we call when we wish to make use of the shared library. We can then make use of this class as we do any other within the rest of our program. In fact, this approach is used often in Python development to bind together C and Python.
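As a rough illustration of that two-function class, here is a sketch built on the standard cffi package. The shared-library path, the function name, and its signature are placeholders—pull the real ones from _sds/swstubs or from the nm -D output described above.

```python
# Sketch of a CFFI wrapper class with an initialization function and a call
# function, as described above. The library path, function name, and
# signature are placeholders; use the SDSoC-generated ones from your build.
from cffi import FFI

class MmultLibrary:
    def __init__(self, lib_path="/home/xilinx/pynq/bitstream/mmult.so"):
        self.ffi = FFI()
        # Declare the accelerated function exactly as it appears in the library.
        self.ffi.cdef("void _p0_mmult_accel_1_noasync(float *A, float *B, float *C);")
        self.lib = self.ffi.dlopen(lib_path)   # open the shared library

    def mmult(self, a, b):
        # Copy the Python lists into C float arrays, call the accelerator
        # stub, and return the result as a Python list.
        a_c = self.ffi.new("float[]", a)
        b_c = self.ffi.new("float[]", b)
        c_c = self.ffi.new("float[]", len(a))
        self.lib._p0_mmult_accel_1_noasync(a_c, b_c, c_c)
        return list(c_c)
```

We can then instantiate this class once in the notebook and call its second function wherever we need the accelerated routine, just like any other Python class.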

 

This example shows just how easily we can create overlays using SDSoC and interface with them using Python and the PYNQ development system. If you want to try this and you currently do not have a license for SDSoC, you can obtain a free 60-day evaluation of the new release here.

 

As I mentioned up top, this is the last blog of 2016; I will resume writing in the New Year. To give you a taste of what we are going to be looking at in 2017, amongst other things I will be featuring:

 

  • UltraZed-EG
  • OpenAMP
  • Image Processing using the PYNQ
  • Advanced sensor interfacing techniques using the Avnet EVK
  • Interfacing to Servos and robotics

 

Until then, have a great Christmas and New Year and thanks for reading the series.

 

 

 

Code is available on Github as always.

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E Book here
  • First Year Hardback here.

 

 

 

 MicroZed Chronicles hardcopy.jpg

 

 

  • Second Year E Book here
  • Second Year Hardback here

 

 

 MicroZed Chronicles Second Year.jpg

 

 

 

 

All of Adam Taylor’s MicroZed Chronicles are cataloged here.

 

 

 

 

This unlikely new project on the Instructables Web site uses a $189 Digilent ZYBO trainer board (based on a Xilinx Zynq Z-7010 SoC) to track balloons with an attached Webcam and then pop them with a high-powered semiconductor laser. The tracking system is programmed with OpenCV.

 

Here’s a view down the bore of the laser:

 

Laser Balloon Popper.jpg 

 

And there’s a 1-second video of the system in action on the Instructables Web page.

 

Fun aside, this system demonstrates that even the smallest Zynq SoC can be used for advanced embedded-vision systems. You can get more information about embedded-vision systems based on Xilinx silicon and tools at the new Embedded Vision Developer Zone.

 

Note: For more information about Digilent’s ZYBO trainer board, see “ZYBO has landed. Digilent’s sub-$200 Zynq-based Dev Board makes an appearance (with pix!)”

 

 

Upgrade your Amiga PC to HDMI with this FPGA-based video time machine

by Xilinx Employee ‎12-05-2016 04:11 PM - edited ‎12-31-2016 05:46 AM (8,140 Views)

 

Lukas F Hartmann is a Time Lord with one special power: he can bestow 1280x720-pixel HDMI compatibility on the Commodore Amiga PC family, which is now 30 years old and predates the HDMI standard by about 20 years. His original Amiga, known in its time for truly groundbreaking video graphics, was limited to 640x256 pixels and 64 colors.

 

Hartmann accomplished this feat of time travel by designing and prototyping a new video board based on a Xilinx Spartan-6 FPGA that plugged into the Amiga PC’s video slot. (See “A High-Performance Graphics Card for the Amiga 2000! In 2016??? A very strange Spartan-6 FPGA story” for additional details about the development of the prototype.)

 

Now, Hartmann is in production with his VA2000 video board and you can get a copy of his open-source design for €189.00 through his company, MNT. (Or you could, if it wasn’t already sold out. Wait! Wait! Batch 2 on pre-order sale here.) Here’s a photo of the VA2000 board:

 

 

MNT VA2000 Production Card.jpg 

 

 

And here’s a photo showing an Amiga 2000 driving an HDMI display at 1280x720:

 

 

 

MNT VA2000 Image Display.jpg

 

 

Seeing a vintage PC from the 1980s driving a high-res HDMI display is sort of thrilling for a student of old PCs like me. More to the point: Hartmann’s use of an FPGA to bring a perfectly good product forward in time with enhanced capabilities is one of the things that FPGAs do really well because they can easily implement older interfaces like the Amiga’s 68K bus, they can implement new interfaces like HDMI, and they can implement complex logic like a graphics accelerator—and all on one chip. And yes, you can take that FPGA-based design right into production, especially if you’re using a cost-effective device like the Spartan-6 FPGA.

 

 

 

 

 

 

 

I wrote about MicroCore Labs’ MCL51, a microsequencer-based 8051 processor core, back in June (see “How to stuff a wild FPGA: Quad-core 8051 in a $99 Artix-7 Arty dev board multitasks by driving display, printer, music, and sound”) and now, MicroCore has harnessed four instances of these cores running on an Artix-7 FPGA in lockstep to create an n-modular redundant system. This system can detect a variety of soft errors and rebuilds itself when an error is detected. Each of the four processor modules contains independent voting logic that detects errors, so that the failed module can be removed and possibly rebuilt. Successful rebuilding and rejoining the 4-processor gestalt takes 700μsec. (You can get more details in this Microcore app note.)

 

Here’s a video of the system in operation with some heavy-handed error injection to demonstrate the error-recovery capability of the design:

 

 

 

 

 

MicroCore’s MCL51 processor core consumes only about 300 LUTs, so four instances easily fit in the Artix-7 A35T FPGA on the $99 Digilent ARTY dev board used in the video. This board is a real bargain because it includes a voucher for a device-locked copy of the Xilinx Vivado HL Design Edition, which lists for $2995.

 

Adam Taylor's MicroZed Chronicles Part 155: Introducing the PYNQ (Python + Zynq) Dev Board

by Xilinx Employee ‎11-06-2016 03:59 PM - edited ‎11-11-2016 11:41 AM (10,345 Views)

 

By Adam Taylor

 

Having recently received a Xilinx/Digilent PYNQ Dev Board, I want to spend some time looking at this rather exciting Zynq-based board. For those not familiar with the PYNQ, it combines the capability of the Zynq SoC with the productivity of the Python programming language, and it comes in a rather eye-catching pink color.

 

 

Image1.jpg

 

PYNQ up and running on my desk

 

 

Hardware-wise, the PYNQ incorporates an on-board Xilinx Zynq Z-7020 SoC, 512Mbytes of DDR SDRAM, HDMI In and Out, Audio In and Out, two Pmod ports, and support for the popular Arduino shield interface headers. We can configure the board from either the SD card or QSPI. On its own, the PYNQ would be a powerful development board. However, there are even more exciting aspects to this board that enable us to develop applications that use the Zynq SoC’s Programmable Logic.

 

The Zynq SoC runs a Linux kernel with a specific package that supports all of the PYNQ’s capabilities. Using this package, it is possible to place hardware overlays (in reality, bit files developed in Vivado) into the programmable logic of the Zynq SoC.
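
To make the overlay idea concrete, loading a bit file from Python looks something like the sketch below. The Overlay class comes from the Pynq package; "base.bit" is simply the base overlay shipped with the board image, and the exact behavior (for example, automatic download on construction) may differ between PYNQ releases, so treat this as a sketch rather than a definitive recipe:

```python
# Minimal sketch: loading a hardware overlay (a Vivado-generated bit file)
# into the Zynq SoC's programmable logic from Python on the PYNQ board.
from pynq import Overlay

# "base.bit" is the overlay supplied with the PYNQ image; substitute your own
# bit file to load a custom design.
base = Overlay("base.bit")
base.download()          # programs the PL with the bitstream
print(base.is_loaded())  # True once the PL has been configured
```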

 

The base overlay supports all of the PYNQ’s interfaces, as shown below:

 

 

Image2.jpg

 

PYNQ PL hardware overlay

 

 

Within the supplied software environment, the PYNQ hardware and interfaces are supported by the Pynq package. This package allows you to use the Python language to drive the PYNQ’s GPIO, video, and audio interfaces along with a wide range of Pmod boards. We use this package within the code we develop and document using Jupyter notebooks, which are the next part of the PYNQ framework.
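
As a flavor of what the package provides, the sketch below blinks one of the board's user LEDs until a push button is pressed. The import paths shown reflect the early PYNQ releases and have moved around in later versions, so treat the module locations as an assumption:

```python
# Illustrative use of the Pynq package to drive board I/O from Python.
# Module locations reflect the early PYNQ releases and may differ today.
from time import sleep
from pynq.board import LED, Button

led = LED(0)        # user LED 0
button = Button(0)  # push button 0

# Blink the LED until the button is pressed.
while button.read() == 0:
    led.toggle()
    sleep(0.5)
led.off()
```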

 

As engineers, we ought to be familiar with the Python language and Linux, even if we are not experts in either. However, we may be unfamiliar with Jupyter notebooks. These are Web-based, interactive environments that allow us to run code and to include widgets, documentation, plots, and even video within the notebook’s Web pages.

 

A Jupyter notebook server runs under the Linux distribution running on the PYNQ’s Zynq SoC. We use this interface to develop our PYNQ applications. Jupyter notebooks and overlays are the core of the PYNQ development methodology, and over the next series of blogs we are going to explore how we can use these notebooks and overlays and even develop our own as required.

 

Let’s look at how we can power up the board and get our first “hello world” program running. We’ll develop a simple program that allows us to understand the process flow.

 

The first thing to do is to configure an SD card with the latest kernel image, which we can download from here. With this downloaded, the next step is to write the image file to the SD card using an application like Win32 Disk Imager (if we are using Microsoft Windows).

 

Insert the SD card into the PYNQ board (check that the jumper is set for SD boot) and connect a network cable to the Ethernet port. Power the board up and, once it boots, we can connect to the PYNQ board using a browser.

 

In a new browser window, enter the address http://pynq:9090, which will take us to a log-on page where we enter the username Xilinx. From there we will see the Jupyter notebook’s welcome page:

 

 

Image3.jpg

The PYNQ welcome page

 

 

Clicking on “Welcome to Pynq.ipynb” will open a welcome page that tells us how to navigate around the notebook and where to find supporting material.

 

For this example, we are going to create our own very simple example to demonstrate the flow, as I mentioned earlier. Again, we run Python programs from within the Jupyter notebook. We can see which notebooks are currently running on the PYNQ by clicking on the “Running” tab, which is present on most notebook pages. Initially we have no notebooks running, so clicking on it right now will only show us that there are no running notebooks.

 

 

Image4.jpg

Notebooks running on the PYNQ

 

 

To create your own example, click on the examples page and then click on “New.” Select “Python 3” under Notebooks from the drop-down menu on the right:

 

 

Image5.jpg 

Creating a new notebook

 

 

This will create a new notebook called untitled. We can change the name to whatever we desire by clicking on “untitled,” which will open a dialog box to allow us to change the name. I am going to name my example after the number of this MicroZed Chronicles blog post (Part 155).

 

 

Image6.jpg

 

Changing the name of the Notebook

 

 

The next thing to do is enter the code we wish to run on the PYNQ. Within the notebook, we can mark each cell as Code, Markdown, Heading, or Raw NBConvert.

 

 

Image7.jpg

 

We can mark text as either Code, Markdown, Heading, or Raw NBConvert

 

 

For now, select “Code” (if it is not already selected) and enter the code: print("hello world")

 

 

 

Image8.jpg

 

The code to run in the notebook

 

 

We click the play button to run this very short program. With the cell selected and all being well, you will see the result appear as below:

 

 

Image9.jpg

 

Running the code

 

 

Image10.jpg

 

 Result of Running the Code

 

 

If we look under the running tab again, we will see that this time there is a running application:

 

 

Image11.jpg


Running Notebooks

 

 

 

If we wish to stop the notebook from running, we click on the shutdown button.

 

Next time, we will look at how we can use the PYNQ in more complex scenarios.

 

We can also use the PYNQ as a traditional Zynq-based development board if desired. This flexibility makes the PYNQ one of the best dev-board choices available now.

 

Note: you can also log on to the PYNQ board using a terminal program like PuTTY with the username and password Xilinx.

 

 

 

 

Code is available on Github as always.

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E Book here
  • First Year Hardback here.

 

 

 

 MicroZed Chronicles hardcopy.jpg

 

 

  • Second Year E Book here
  • Second Year Hardback here

 

 

 

 MicroZed Chronicles Second Year.jpg

 

 

 

All of Adam Taylor’s MicroZed Chronicles are cataloged here.

 

 

 

 

 

 

Xilinx now has four families of cost-optimized All Programmable devices to help you build systems:

 

 

 

  • Artix-7: The cost-optimized Xilinx FPGA family with 6.6Gbps serial transceivers for designs that need compatibility with high-speed serial I/O protocols such as PCIe Gen2 and USB 3.0. The Artix-7 family now has two smaller members: the Artix-7 A12T and A25T with 12,300 and 23,360 logic cells respectively.

 

 

 

Here’s a 12-minute video that further clarifies the options you now have:

 

 

 

 

Adam Taylor’s MicroZed Chronicles, Part 154: SDSoC Tracing

by Xilinx Employee on ‎10-31-2016 09:48 AM (5,740 Views)

 

By Adam Taylor

 

Using the Xilinx SDSoC Development Environment allows us to create a complex system with functions in both the Zynq SoC’s PS and PL (the Processing System and the Programmable Logic). We need to see the interaction between the PS and the PL to achieve the best system performance and optimization. This is where tracing comes in. Unlike AXI profiling, which enables us to ensure we are using the AXI links between the PS and PL optimally, tracing shows you the detailed interaction taking place between the software and the hardware acceleration. To do this, SDSoC instruments the design, adding several blocks that trace events within the PL and enable tracing of the complete solution.

 

We can enable tracing using the SDSoC Project Overview: select the Enable Event Tracing check box and set the active build to SDDebug. We are then able to build our application with the trace functionality built in. To ensure that there are no issues, I recommend that you clean the build first (Project -> Clean).

 

 

Image1.jpg 

 

Using SDSoC Project Overview to configure a build with built-in trace

 

 

For this blog post, I am going to target the Avnet ZedBoard with standalone OS and use a matrix multiplication example.

 

Once SDSoC has generated the build files, we need to execute and run the trace design from within SDSoC itself. We need to connect both the JTAG and UART ports to our development PC to run things from within the SDSoC environment. If the target is running a Linux operating system, we also need to connect its Ethernet port to the same network as the development PC.

 

Power up the target board and then, within the SDSoC Project Explorer, expand your working project. Beneath the SDDebug folder, right-click on the resulting ELF file, then select Debug and Trace Application.

 

 

Image2.jpg

 

 

Executing the trace application

 

 

This will then program the bit file into the target board, execute the instrumented design, capture the trace results, and upload them to SDSoC. If we have a terminal window connected to the target board to capture the UART’s output, we will see the results of the program being executed:

 

 

Image3.jpg

 

 

UART results of the program

 

 

 

The trace application executes within SDSoC and uploads a trace log and a graphical view of that log. The log shows the starting and stopping of the software, the data transfers, and the hardware acceleration:

 

 

Image4.jpg

 

SDSoC Trace Log

 

 

Image5.jpg

 

 

Results of tracing an example application

(Orange = Software, Green = Accelerated function and Blue = Transfer)

 

 

 

Running the trace creates a new project containing the trace details, which can be seen within the project explorer.

 

 

Image6.jpg 

 

New Project containing trace data

 

 

Tracing instruments the PL side of the design, so I ran the same build with and without tracing enabled to compare the differences in resource utilization. The results appear below:

 

 

Image7.jpg

 

 

Resource Utilization with Tracing enabled (left) and Normal Build (right)

 

 

There is a small difference between the two builds, but implementing trace within the design does not significantly increase the resources required.

 

I also made a short video for this blog post. Here it is:

 

 

 

 

 

 

Code is available on Github as always.

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E Book here
  • First Year Hardback here.

 

 

 MicroZed Chronicles hardcopy.jpg

 

 

 

  • Second Year E Book here
  • Second Year Hardback here

 

 

MicroZed Chronicles Second Year.jpg

 

 

 

All of Adam Taylor’s MicroZed Chronicles are cataloged here.

 

 

 

 

About the Author
Be sure to join the Xilinx LinkedIn group to get an update for every new Xcell Daily post!
Steve Leibson is the Director of Strategic Marketing and Business Planning at Xilinx. He started as a system design engineer at HP in the early days of desktop computing, then switched to EDA at Cadnetix, and subsequently became a technical editor for EDN Magazine. He's served as Editor in Chief of EDN Magazine, Embedded Developers Journal, and Microprocessor Report. He has extensive experience in computing, microprocessors, microcontrollers, embedded systems design, design IP, EDA, and programmable logic.