
This week at the Embedded Vision Summit in Santa Clara, CA, Mario Bergeron demonstrated a design he’d created that combines real-time visible and IR thermal video streams from two different sensors. (Bergeron is a Senior FPGA/DSP Designer with Avnet.) The demo runs on an Avnet PicoZed SOM (System on Module) based on a Xilinx Zynq Z-7030 SoC. The PicoZed SOM is the processing portion of the Avnet PicoZed Embedded Vision Kit. An FMC-mounted Python-1300-C image sensor supplies the visible video stream in this demo and a FLIR Systems Lepton image sensor supplies the 80x60-pixel IR video stream. The Lepton IR sensor connects to the PicoZed SOM over a Pmod connector on the PicoZed.

 

Here’s a block diagram of this demo:

 

 

Avnet reVISION demo with PicoZed Embedded Vision Kit.jpg 

 

 

Bergeron integrated these two video sources and developed the code for this demo using the new Xilinx reVISION stack, which includes a broad range of development resources for vision-centric platform, algorithm, and application development. The Xilinx SDSoC Development Environment and the Vivado Design Suite including the Vivado HLS high-level synthesis tool are all part of the reVISION stack, which also incorporates OpenCV libraries and machine-learning frameworks such as Caffe.

 

In this demo, Bergeron’s design takes the visible image stream and performs a Sobel edge extraction on the video. Simultaneously, the design also warps and resizes the IR thermal image stream so that the Sobel edges can be combined with the thermal image. The Sobel and resizing algorithms come from the Xilinx reVISION stack library and Bergeron wrote the video-combining code in C. He then synthesized these three tasks in hardware to accelerate them because they were the most compute-intensive tasks in the demo. Vivado HLS created the hardware accelerators for these tasks directly from the C code and SDSoC connected the accelerator cores to the ARM processor with DMA hardware and generated the software drivers.
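To give a flavor of what such accelerator source looks like, here is a minimal Sobel gradient-magnitude kernel in plain C. This is an illustrative sketch, not Bergeron’s actual code (the reVISION libraries supply optimized versions of this function); loop nests of this style, with static bounds and no dynamic allocation, are exactly what Vivado HLS can turn into pipelined hardware:

```c
#include <stdlib.h>

/* 3x3 Sobel gradient magnitude (|Gx| + |Gy|) over an 8-bit grayscale
 * image. Border pixels are left untouched; results are clamped to 255.
 * Plain C like this is the style of code Vivado HLS can synthesize
 * into a streaming hardware accelerator. */
void sobel(const unsigned char *in, unsigned char *out, int w, int h)
{
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            int gx = -in[(y-1)*w + (x-1)] + in[(y-1)*w + (x+1)]
                   - 2*in[y*w + (x-1)]    + 2*in[y*w + (x+1)]
                   - in[(y+1)*w + (x-1)]  + in[(y+1)*w + (x+1)];
            int gy = -in[(y-1)*w + (x-1)] - 2*in[(y-1)*w + x] - in[(y-1)*w + (x+1)]
                   +  in[(y+1)*w + (x-1)] + 2*in[(y+1)*w + x] + in[(y+1)*w + (x+1)];
            int mag = abs(gx) + abs(gy);
            out[y*w + x] = (unsigned char)(mag > 255 ? 255 : mag);
        }
    }
}
```

In the accelerated design, SDSoC moves the frame buffers between the ARM processor and a synthesized version of a function like this over DMA.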

 

Here’s a diagram showing the development process for this demo and the resulting system:

 

 

Avnet reVISION demo Project Diagram.jpg 

 

 

In the video below, Bergeron shows that the unaccelerated Sobel algorithm running in software consumes 100% of an ARM Cortex-A9 processor in the Zynq Z-7030 SoC and still only achieves about one frame/sec—far too slow. By accelerating this algorithm in the Zynq SoC’s programmable logic using SDSoC and Vivado HLS, Bergeron cut the processor load by more than 80% and achieved real-time performance. (By my back-of-the-envelope calculation, that’s about a 150x speedup: a 30x gain from going from 1 to 30 frames/sec, multiplied by the roughly 5x gain from cutting the processor load by more than 80%.)

 

Here’s the 5-minute video of this fascinating demo:

 

 

 

 

 

 

For more information about the Avnet PicoZed Embedded Vision Kit, see “Avnet’s $1500, Zynq-based PicoZed Embedded Vision Kit includes Python-1300-C camera and SDSoC license.”

 

 

For more information about the Xilinx reVISION stack, see “Xilinx reVISION stack pushes machine learning for vision-guided applications all the way to the edge,” and “Yesterday, Xilinx announced the reVISION stack for software-defined embedded-vision apps. Today, there’s two demo videos.”

 

How to tackle KVM (Keyboard/Video/Mouse) challenges at 4K and beyond: Any Media Over Any Network

by Xilinx Employee ‎05-04-2017 11:05 AM - edited ‎05-04-2017 11:08 AM (1,284 Views)

 

We’ve had KVM (keyboard, video, mouse) switches for controlling multiple computers from one set of user-interface devices for a long, long time. Go back far enough, and you were switching RS-232 ports to control multiple computers or other devices with one serial terminal. Here’s what they looked like back in the day:

 

 

Old KVM Switch.jpg 

 

 

In those days, these KVM switches could be entirely mechanical. Now, they can’t. There are different video resolutions, different coding and compression standards, there’s video over IP (Ethernet), etc. Today’s KVM switch is also a many-to-many converter. Your vintage rotary switch isn’t going to cut it for today’s Pro AV and Broadcast applications.

 

If you need to meet this kind of design challenge—today—you need low-latency video codecs like H.265/HEVC and compression standards such as TICO; you need 4K and 8K video resolution with conversion to and from HD; and you need compatibility and interoperability with all sorts of connectivity standards including 3G/12G SDI and high-speed Ethernet. In short, you need “Any Media Over Any Network” capability and you need all of that without exploding your BOM cost.

 

Where are you going to get it?

 

Well, considering that this is the Xilinx Xcell Daily blog, it’s a good bet that you’re going to hear about the capabilities of at least one Xilinx All Programmable device.

 

Actually, this blog is about a couple of upcoming Webinars being held on May 23 titled “Any Media Over Any Network: KVM Extenders, Switches and KVM-over-IP.” The Webinars are identical but are being held at two different times to accommodate worldwide time zones. In this webinar, Xilinx will show you how you can use the Zynq UltraScale+ MPSoC in KVM applications. The webinar will highlight how Xilinx and its partners’ video-processing and -connectivity IP cores along with the integrated H.265/HEVC codec in the three Zynq UltraScale+ MPSoC EV family members can quickly and easily address new opportunities in the KVM market.

 

 

  • Register here for the free webinar being held at 7am Pacific Daylight Time (UTC-07:00).

 

  • Register here for the free webinar being held at 10am Pacific Daylight Time (UTC-07:00).

 

 

 

 

 

 

Can we talk? About security? You know that it’s a dangerous world out there. For a variety of reasons, bad actors want to steal your data, or steal your customers’ data, or disrupt operations. Your job is not only to design something that works; these days, you also need to design equipment that resists hacking and tampering. PFP Cybersecurity provides IP that helps you create systems that have robust defenses against such exploits.

 

“PFP” stands for “power fingerprinting,” which combines AI and analog power analysis to create high-speed, next-generation cyber protection that can detect tampering in milliseconds instead of days, weeks, or months. It does this by observing the tiny changes to a system’s power consumption during normal operation, learning what’s normal, and then monitoring power consumption to detect an abnormal situation that might signal tampering.

 

The 3-minute video below discusses these aspects of PFP Cybersecurity’s IP and also discusses why the Xilinx Zynq SoC and Zynq UltraScale+ MPSoC are a perfect fit for this security IP. The Zynq device families can all perform high-speed signal processing, have built-in analog conversion circuitry for measuring voltage and current, and can implement high-performance machine-learning algorithms for analyzing power usage.

 

Originally, PFP Cybersecurity designed a monitoring system based on the Zynq SoC for monitoring other systems but, as the video discusses, if the system is already based on a Zynq device, it can monitor itself and return itself to a known good state if tampering is suspected.

 

Here’s the video:

 

 

 

 

 

Note: For more information about PFP Cybersecurity, see “Zynq-based PFP eMonitor brings power-based security monitoring to embedded systems.”

 

Free May 31 Webinar on the $99 ARTY FPGA Dev Board: Make something Awesome!

by Xilinx Employee ‎05-03-2017 02:55 PM - edited ‎05-25-2017 04:38 PM (2,416 Views)

 

ARTY v3 Small.jpg

Looking for a low-cost way to get into the most advanced FPGA tools and device families? The $99 ARTY Eval Kit available from Avnet and Digilent is an excellent choice. The ARTY board features a Xilinx Artix-7 A35T FPGA with 256Mbytes of DDR3 SDRAM, Ethernet, four Digilent Pmod ports, and a set of Arduino Shield headers. The Artix-7 A35T FPGA is big enough to implement the 32-bit MicroBlaze soft RISC processor core.

 

Perhaps best of all, the kit includes a voucher for a downloadable copy of the Xilinx Vivado HL Design Edition, device-locked to the Artix-7 A35T FPGA. The full-featured Vivado HL Design Edition includes a lot of great tools including the IP Integrator (IPI), which lets you quickly build FPGA-based designs using graphical representations of IP blocks with intelligent, automated stitching.

 

This edition of Vivado also includes Vivado HLS, so you can experiment with logic synthesis based on design descriptions written in C, C++, or SystemC. The $99 ARTY FPGA Dev Board is really a great way to get started with the industry’s most advanced design tools for FPGA-based design.

 

If you want a closer look at the ARTY Dev Kit, there’s a free, 1-hour Webinar on May 31 you might want to attend. Register here.

 

For more information about the ARTY Eval Kit, see “ARTY—the $99 Artix-7 FPGA Dev Board/Eval Kit with Arduino I/O and $3K worth of Vivado software. Wait, What????”

 

 

NI PXIe-5172.jpg

 

National Instruments’ (NI’s) PXIe-5172, the newest member of the company’s PXIe-517x family, features four or eight 14-bit, 250Msamples/sec channels. Like the existing members in the family, an on-board Xilinx Kintex-7 FPGA manages the PXIe-5172’s measurement and control features. What’s new is that a portion of the FPGA is now available for user-defined, real-time functions. You can use NI’s LabVIEW FPGA, which integrates with the Xilinx Vivado Design Suite, to define new DSP functions and advanced triggering for the DSO. (You cannot realize such real-time functions at these speeds using software-based microprocessor implementations. You need the speed of programmable hardware.)

 

 

 

Here’s a feature comparison chart of the PXIe-517x DSO family:

 

 

 

 

NI PXIe-517x DSO Family.jpg

 

 

 

 

Here’s a block diagram of a PXIe-517x DSO:

 

 

NI PXIe-517x DSO Family.jpg

 

National Instruments PXIe-517x DSO Block Diagram

 

 

 

Please contact NI directly for more information about the PXIe-517x DSO family.

 

 

Note: Xcell Daily previously discussed PXIe-517x DSO instruments. See “FPGA-based PXIe Digital Oscilloscope is part of National Instruments’ new wave of Software Designed Instruments.”

 

 

By Adam Taylor

 

We need to be able to create more advanced, event-driven applications for Xilinx Zynq UltraScale+ MPSoCs but before that can happen, we need to look at some of the more complex aspects of these devices. In particular, we need to understand how interrupts function within the Zynq MPSoC’s PS (processing system). As would be expected, the Zynq MPSoC’s interrupt structure is slightly more complicated than the Zynq SoC’s PS because the Zynq MPSoC has more processor cores.

 

 

 

Image1.jpg 

 

 

Architectural view of the Interrupt System

 

 

 

The Zynq UltraScale+ MPSoC’s interrupt architecture has four main elements:

 

  1. RPU Generic Interrupt Controller V1 (GIC) – Manages interrupts within the RPU (real-time processing unit)
  2. APU Generic Interrupt Controller V2 (GIC) – Manages interrupts within the APU (application processing unit), with support for virtualization
  3. Inter-Processor Interrupt (IPI) – Enables interrupts between processing units
  4. GIC Proxy – Collates interrupts, acting as a GIC for the PMU (platform management unit)

 

At the highest level, we can break these interrupts down into several groupings, which are supplied to each element of the architecture:

 

  • Shared Peripheral Interrupts – 160 interrupt sources. Can be generated by the peripherals within the PS (e.g. IOU peripherals, PCIe etc.) and the PL (programmable logic) within the design
  • Private Peripheral Interrupts – These interrupts are private to a specific processor core
  • Software Generated Interrupts – Interrupts generated by software

 

Shared Peripheral Interrupts can also be sourced by the PL, which is where it gets interesting. We can enable interrupts in either direction between the PS and PL, from within the PS-PL configuration tab of the Zynq MPSoC customization GUI.

 

For the RPU, we are provided an IRQ and an FIQ for each processor core. For fast, low-latency responses, we should use the FIQ. For typical interrupt sources, we should use the IRQ.

 

 

 

Image2.jpg

 

 

RPU IRQ and FIQ Interrupts Enabled for each Core on the MPSoC

 

 

When it comes to the APU, we have two options for connecting the interrupts. The first option is to use the legacy IRQ and FIQ interrupts. There’s one of each for each processor core within the APU. When enabled at the top level of the Zynq MPSoC’s IP Block, we get two 4-bit ports—one for the IRQ and one for the FIQ. Again, the FIQ input provides lowest-latency interrupt response.

 

 

 

Image3.jpg 

 

 

APU IRQ and FIQ Interrupts Enabled for each Core on the MPSoC

 

 

 

The second approach to interrupts is to use interrupt groups. The APU’s GICv2 supports two interrupt groups: group 0 and group 1. Interrupts within group 0 can be assigned to either the IRQ or the FIQ, while those within group 1 can only be assigned to the IRQ. This assignment occurs internally within the GICv2. We can also use these interrupt groups when we implement secure environments, with group 0 used for secure interrupts and group 1 for non-secure interrupts.
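That routing rule can be captured in a few lines of C. This is a hypothetical helper for illustration only, not a Xilinx driver API; the real assignment happens inside the GICv2’s registers:

```c
/* Illustration of the GICv2 group-routing rule described above:
 * group 0 (secure) interrupts may be routed to IRQ or FIQ, while
 * group 1 (non-secure) interrupts only ever reach IRQ. */
typedef enum { TO_IRQ, TO_FIQ } irq_dest_t;

/* Returns the line the interrupt is actually signalled on. A group-1
 * interrupt requesting FIQ is forced back to IRQ. */
irq_dest_t gic_route(int group, irq_dest_t requested)
{
    if (group == 1)
        return TO_IRQ;   /* group 1 cannot use FIQ */
    return requested;    /* group 0 may use either line */
}
```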

 

 

 

Image4.jpg 

 

 

APU IRQ Groups Enabled for each Core on the MPSoC

 

 

 

We can use the Inter-Processor Interrupt (IPI) to let the processors within the APU, RPU, and PMU interrupt one another. The IPI can also interrupt one or more soft-core processors implemented within the Zynq MPSoC’s PL.

 

 

 

Image5.jpg

 

 

IPI Interrupt Numbers in the SPI

 

 

 

In addition to the interrupt itself, the IPI provides a 32-byte payload buffer for each direction, which can be used for limited communication. The IPI supports eight masters: the APU, RPU0, RPU1, and PMU, along with the LPD and FPD slave AXI interfaces. These masters can be changed from the default allocation by selecting the advanced options in the MPSoC re-customization GUI.
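As a sketch of how software might treat such a channel, consider the following C model. The struct and function names are illustrative only, not the actual IPI register interface or the Xilinx IPI driver; the point is that messages larger than the 32-byte payload buffer need a different mechanism (typically shared memory):

```c
#include <string.h>

/* Toy model of one IPI channel between two masters: a fixed 32-byte
 * payload buffer in each direction plus a pending flag that models
 * the interrupt bit. */
#define IPI_PAYLOAD_BYTES 32

typedef struct {
    unsigned char request[IPI_PAYLOAD_BYTES];   /* e.g. APU -> RPU0 */
    unsigned char response[IPI_PAYLOAD_BYTES];  /* e.g. RPU0 -> APU */
    int pending;                                /* interrupt raised? */
} ipi_channel_t;

/* Copies up to 32 bytes into the payload buffer and raises the
 * interrupt. Returns -1 if the message does not fit -- larger
 * transfers must go through shared memory instead. */
int ipi_send(ipi_channel_t *ch, const void *msg, size_t len)
{
    if (len > IPI_PAYLOAD_BYTES)
        return -1;
    memcpy(ch->request, msg, len);
    ch->pending = 1;
    return 0;
}
```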

 

The final element of the interrupt structure is the GIC Proxy, which collates the shared peripheral interrupts connected to the RPU GIC and provides a series of interrupt status registers used by the PMU.

 

Now that we understand Zynq UltraScale+ MPSoC interrupts a little more, we will look at how we can use these within our designs going forward.

 

Code is available on Github as always.

 

If you want ebook or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E Book here
  • First Year Hardback here.

 

 

MicroZed Chronicles hardcopy.jpg 

  

 

  • Second Year E Book here
  • Second Year Hardback here

 

 

MicroZed Chronicles Second Year.jpg 

 

 

The short video below from National Instruments (NI) demonstrates the use of four of NI’s PXIe-5840 VSTs (Vector Signal Transceivers) coupled over high-speed serial links to an NI ATCA-3671 FPGA Module to analyze and process multi-GHz RF signals in real time, all controlled by NI’s LabVIEW software. The result is real-time control and display of the RF analysis. That’s a lot to pack into a 3.5-minute video.

 

 

 

 

 

 

 

NI’s 2nd-generation PXIe-5840 VST is based on a Xilinx Virtex-7 690T FPGA (see “NI launches 2nd-Gen 6.5GHz Vector Signal Transceiver with 5x the instantaneous bandwidth, FPGA programmability”) and the ATCA-3671 incorporates four more Xilinx Virtex-7 690T FPGAs, bringing a total of 14,400 DSP slices to bear on signal-processing tasks. (For information about another interesting use of NI’s ATCA-3671 FPGA Modules, see “DARPA wants you to win $2M in its Grand Spectrum Collaboration Challenge. Yes, lots of FPGAs are involved.”)

 

 

 

Last week, Xcell Daily noted the ZX Spectrum Next project on Kickstarter. (See “Jurassic Computer, Part 2: Recreating the Sinclair ZX Spectrum PC using a Spartan-6 FPGA—on Kickstarter.”) This project replicates an enhanced version of the Sinclair ZX Spectrum microcomputer (the "Speccy"), circa 1982, using a Xilinx Spartan-6 LX9 FPGA to recreate all of the microcomputer’s logic (including the Z80 microprocessor and the video controller), plus the I/O. The project now has pledges worth $521,776, which exceeds the goal by close to $200K and enables the first stretch goal—a bigger FPGA!

 

 

ZX Spectrum Microcomputer.jpg

 

 

The ZX Spectrum Next Microcomputer on Kickstarter

 

 

 

The design will now use a Spartan-6 LX16 FPGA with 60% more programmable logic inside, so that the design team can cram even more features into the design. There are still three weeks in this Kickstarter campaign and four more stretch goals, so if you want one of these, now is probably a good time to pledge.

 

 

Adam Taylor’s MicroZed Chronicles, Part 190: TFTP Boot

by Xilinx Employee on ‎05-01-2017 02:16 PM (1,903 Views)

 

By Adam Taylor

 

 

There are times when we want to configure our system over a network. This may be because we are commissioning equipment in the lab and want to be able to pick up the latest development builds. Alternatively, we may have multiple systems all connected to a central hub (for example, an in-flight entertainment system), where configuring over the network simplifies software updates.

 

In our Zynq SoC and Zynq UltraScale+ MPSoC designs, we can configure systems over the network using TFTP (the Trivial File Transfer Protocol) if we correctly configure U-Boot, the second-stage boot loader. To configure and implement this, we need the following:

 

  • A Linux machine (or a Virtual Machine running Linux) to modify and rebuild U-Boot
  • The latest PetaLinux Release for the Zynq SoC or MPSoC with uImage, Device Tree, and RAM Disk
  • The Xilinx SDK to create the Boot.bin file, which contains the FSBL and SSBL (first- and second-stage boot loaders)
  • The TFTP Server running on a network to serve the uImage, Device Tree, and RAM Disk

 

If this is a fresh Linux installation on a virtual machine, we need to ensure that we have the correct packages loaded to support the build. If not already present, we’ll need to install git, libssl, and ARM’s cross-compilation tools. We can do this by executing the following commands using a terminal (I am using Ubuntu 16.04.2 on my Virtual Machine):

 

sudo apt-get install git

sudo apt-get install libssl-dev

sudo apt-get install gcc-arm-linux-gnueabi

sudo apt-get install device-tree-compiler

sudo apt-get install device-tree-overlay

 

 

Once we have done this, we can clone the Xilinx U-Boot repository from GitHub onto the Linux installation and make the necessary changes to enable boot over TFTP. To clone the repository, use the following command:

 

 

git clone git://github.com/Xilinx/u-boot-xlnx.git

 

 

Now that we have the source code for U-Boot, we can make the necessary modifications. But first, we should understand a little about how it works. U-Boot is a second-stage boot loader (SSBL), which is loaded into memory by the FSBL. The SSBL then loads the Linux image, RAM disk, and device tree into memory, allowing the kernel to start. To do this, U-Boot can read the image, device tree, and RAM disk from several different media including an SD card, QSPI flash memory, or NOR or NAND flash memory, or it can download these files using TFTP over Ethernet.
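Conceptually, the network boot flow corresponds to U-Boot console commands along these lines. This is a sketch: the load addresses and file names are illustrative examples, not values from this article, so adjust them to your own memory map and PetaLinux build:

```
tftpboot 0x3000000 uImage              # fetch kernel image from the TFTP server
tftpboot 0x2A00000 devicetree.dtb      # fetch the device tree
tftpboot 0x2000000 uramdisk.image.gz   # fetch the RAM disk
bootm 0x3000000 0x2000000 0x2A00000    # boot: kernel, ramdisk, device tree
```

The configuration change described next simply bakes an equivalent sequence into U-Boot’s default boot environment.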

 

To boot from TFTP, we need to change the boot method currently selected in U-Boot so that it configures the system over the network. Using this boot method, the system first loads the FSBL and U-Boot from an SD card or QSPI and then looks for the remaining images on a TFTP server rather than on the boot media. This change is simple and requires updating only one U-Boot file: zynq-common.h. You can find this file in:

 

 

<Cloned Location>/u-boot-xlnx/include/configs

 

 

For this example, I am going to use a ZedBoard configured to boot from an SD card. That means I need to change zynq-common.h so that U-Boot, once loaded from the SD card, looks for the remaining boot files over the network instead of loading them from the SD card. We need to change the original script from:

 

 

Image1.jpg 

 

 

To the following:

 

 

Image2.jpg

 

 

We also need to define the ZedBoard’s IP addresses and the server’s address where the images are located. We do this by adding in the following commands within the CONFIG_EXTRA_ENV_SETTINGS definition:

 

 

 

Image3.jpg

 

Setting the server and ZedBoard IP Address

 

 

 

ipaddr is the IP address selected for the ZedBoard, while serverip is the IP address of the TFTP server. This definition also sets the names of the image, device tree, and RAM disk, along with the memory addresses they will be loaded to. Unless you have a specific need to change them, I advise leaving these unaltered.
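As a rough sketch of what such entries look like inside the CONFIG_EXTRA_ENV_SETTINGS block, consider the fragment below. Every address and file name here is an illustrative example, not a value taken from this article; substitute your own network addresses and PetaLinux image names:

```c
/* Illustrative entries within CONFIG_EXTRA_ENV_SETTINGS in
 * zynq-common.h -- example values only, adjust to your network: */
"ipaddr=192.168.1.10\0"              /* IP address for the ZedBoard  */
"serverip=192.168.1.1\0"             /* IP address of the TFTP server */
"kernel_image=uImage\0"              /* files served by the TFTP server */
"devicetree_image=devicetree.dtb\0"
"ramdisk_image=uramdisk.image.gz\0"
```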

 

Once these modifications have been completed, the next step is to re-build U-Boot and create a bin file to place on the SD Card. To build U-Boot, ensure that you are within the u-boot-xlnx directory and enter the following commands using a terminal:

 

 

export ARCH=arm

export CROSS_COMPILE=arm-linux-gnueabi-

make zynq_zed_config

make

 

 

These commands produce an ELF file called u-boot, with no file extension. We can use this file with the ZedBoard FSBL to create the boot.bin file needed on our SD card. For the purposes of this demo, I am using the latest release from here. I used the provided FSBL ELF together with the u-boot ELF just created to generate a boot.bin with SDK’s Create Boot Image tool.

 

 

Image4.jpg 

 

 

Creating the BIN Image using SDK

 

 

 

Once the file has been created, we can copy it to the SD Card and insert the memory card into the ZedBoard. Then when we turn the ZedBoard on, it will attempt to locate the server. Therefore, we need to set up the TFTP server prior to this. I used my laptop and a program called TFTPD to create a server.

 

 

With the server set up and the ZedBoard connected to the network and powered on, the boot images were downloaded over the network and Linux booted successfully, as shown in the images below.

 

 

 

Image5.jpg

 

Downloading the uImage over TFTP

 

 

 

 

Image6.jpg 

 

 PetaLinux running on the ZedBoard following the TFTP boot

 

 

 

Image7.jpg 

 

 Log File from the TFTP Server showing transfer of the images

 

 

 

 

I will upload the boot.bin file to GitHub for those who are using the ZedBoard. We can, of course, rebuild U-Boot for a range of development boards. There is more info on building U-Boot here.

 

 

 

Code is available on Github as always.

 

If you want ebook or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E Book here
  • First Year Hardback here.

 

 

 MicroZed Chronicles hardcopy.jpg

  

 

  • Second Year E Book here
  • Second Year Hardback here

 

MicroZed Chronicles Second Year.jpg

 

 

 

 

In this 40-minute webinar, Xilinx will present a new approach that allows you to unleash the power of the FPGA fabric in Zynq SoCs and Zynq UltraScale+ MPSoCs using hardware-tuned OpenCV libraries, with a familiar C/C++ development environment and readily available hardware development platforms. OpenCV libraries are widely used for algorithm prototyping by many leading technology companies and computer vision researchers. FPGAs can achieve unparalleled compute efficiency on complex algorithms like dense optical flow and stereo vision in only a few watts of power.

 

This Webinar is being held on July 12. Register here.

 

Here’s a fairly new, 4-minute video showing a 1080p60 Dense Optical Flow demo, developed with the Xilinx SDSoC Development Environment in C/C++ using OpenCV libraries:

 

 

 

 

For related information, see Application Note XAPP1167, “Accelerating OpenCV Applications with Zynq-7000 All Programmable SoC using Vivado HLS Video Libraries.”

 

The PCI-SIG Compliance Workshop #101, held earlier this month in Milpitas, CA and dedicated to testing PCIe compliance, hosted the first interoperability testing for the preliminary PCIe 4.0 spec. The preliminary 4.0 testing included CEM electrical testing and Link/Transaction testing at 16Gtransfers/sec. PLDA went to this workshop with its Gen4SWITCH PCIe 4.0 Board, which is based on the company’s PCIe-compliant XpressSWITCH IP and XpressRICH4 controller IP for PCIe 4.0 technology. This configuration supports PCIe 4.0 V0.7. PLDA took a board based on the Xilinx Virtex UltraScale+ VU3P FPGA to the workshop.

 

 

 

PLDA XpressRICH4-AXI PCIe 4 IP.jpg 

 

PLDA XpressRICH4 IP for AXI Block Diagram

 

 

 

With the PLDA Gen4SWITCH configured in x4 and x1 configurations, the board successfully interoperated in the following systems at the PCI-SIG workshop:

 

  • PCIe 4.0 x16
  • PCIe 4.0 x8
  • PCIe 4.0 x1
  • PCIe 4.0 x4 (NVMe SSD configuration)

 

 

When Xcell Daily last looked at PLDA’s Gen4SWITCH PCIe 4.0 Platform Development Kit nearly a year ago (see “PLDA shows working PCIe 4.0 Platform Development Kit operating @ 16G transfers/sec at today’s PCI-SIG Developer’s Conference”), it was based on a Xilinx Virtex UltraScale VU065 FPGA. It now appears that PLDA may have been able to upgrade this board by taking advantage of the footprint compatibility between the Virtex UltraScale VU065 FPGA and the Xilinx Virtex UltraScale+ VU3P FPGA. Here’s a table from the Xilinx UltraScale FPGA Product Selection Guide showing how the various members of the UltraScale and UltraScale+ FPGA families line up with respect to footprint compatibility:

 

 

 

UltraScale Architecture Migration Table.jpg 

 

 

At the relatively low image resolution permitted by the Xcell Daily layout, you can just make out from the table that the Virtex UltraScale VU065 FPGA and the Xilinx Virtex UltraScale+ VU3P FPGA in the C1517 package have compatible footprints. It actually takes a fair amount of careful engineering to achieve this level of physical compatibility across four different FPGA families (Kintex UltraScale, Virtex UltraScale, Kintex UltraScale+, and Virtex UltraScale+) and multiple devices within these families.

 

 

 

 

 

Jurassic Computer, Part 2: Recreating the Sinclair ZX Spectrum PC using a Spartan-6 FPGA—on Kickstarter

by Xilinx Employee ‎04-26-2017 12:33 PM - edited ‎04-26-2017 12:46 PM (3,647 Views)

 

The ZX Spectrum microcomputer developed by Sinclair Research first appeared in 1982. That’s precisely 35 years ago. According to Wikipedia, Sinclair sold more than five million ZX Spectrum computers over the decade that the “Speccy” was offered for sale. (“Speccy” appears to be the pet name that the ZX Spectrum’s many fans gave to this small machine, which spawned a huge ecosystem of software and hardware add-on vendors.) The Speccy was one of the UK’s first mainstream PCs and was based on a 3.5MHz Zilog Z80 microprocessor with either 16 or 48Kbytes of RAM. A 16Kbyte ROM consumed the rest of the Speccy’s 64K address space. Fast forward 35 years to 2017. There’s a new Kickstarter project to recreate the Speccy called the “ZX Spectrum Next.”

 

Here’s a board photo of a ZX Spectrum motherboard, issue 3B, circa 1983, courtesy of Wikipedia:

 

 

Sinclair ZX Spectrum Motherboard circa 1983.jpg 

 

 

Sinclair 48K ZX Spectrum motherboard, Issue 3B. 1983, Manufactured 1984.

Photo Credit: Bill Bertram

 

 

 

The 40-pin NEC D780C chip on the right is an NEC copy of the Zilog Z80 processor and the NEC D23128C chip to the right of the processor is a 128Kbit masked ROM. The 40-pin Ferranti ULA (uncommitted logic array) on the left side of the board implements the ZX Spectrum’s video, the keyboard interface, and the analog I/O for audiotape mass storage I/O and sound. Sinclair designed many custom ULAs into its products during the 1980s—years before FPGAs became mainstream devices.

 

With 26 days left in the funding campaign, the ZX Spectrum Next project already has $402,936 in pledges which is 125% of goal. So this project is going to be funded (but there are stretch goals yet to be achieved!). The $127 pledge price for the board or $224 for the full machine look like a steal for this piece of recreated computing history.

 

There’s a nice video explaining the project on the Kickstarter page, replicated here:

 

 

 

 

Here’s what’s under the hood of the machine:

 

  • Processor: Z80 with 3.5MHz and 7MHz modes
  • Memory: 512Kbytes of SRAM (expandable to 1.5Mbytes internally and 2.5Mbytes externally)
  • Video: Hardware sprites, 256-color mode, Timex 8x1 mode, etc.
  • Video Output: RGB, VGA, HDMI
  • Storage: SD Card slot, with DivMMC-compatible protocol
  • Audio: 3x AY-3-8912 audio chips with stereo output + FM sound
  • Joystick: DB9 compatible with Cursor, Kempston and Interface 2 protocols (selectable)
  • PS/2 port: Mouse with Kempston mode emulation and an external keyboard
  • Special: Multiface functionality for memory access, savegames, cheats etc.
  • Tape support: Mic and Ear ports for tape loading and saving
  • Expansion: Original external bus expansion port and accelerator expansion port (for a Raspberry Pi Zero)
  • Accelerator board (optional): GPU / 1GHz CPU / 512Mbytes of RAM
  • Network (optional): WiFi module
  • Extras: Real Time Clock (optional), internal speaker (optional)

 

 

And here’s a closeup of the ZX Spectrum Next’s motherboard to make the point for this blog post:

 

 

ZX Spectrum Next Motherboard with Spartan-6 LX9 Closeup.jpg

 

 

That’s right, the ZX Spectrum Next project team is using a Xilinx Spartan-6 LX9 FPGA to recreate essentially all of the above logic including the 8-bit Z80 microprocessor, three GI AY-3-8912 audio chips, the video (RGB, VGA, and HDMI), and all of the ZX Spectrum’s original and the new “Next” I/O ports. The Spartan-6 FPGA isn’t glue; it’s the entire system, except for the SRAM.

 

 

Finally, here’s an additional video demonstrating the ZX Spectrum Next PC’s capabilities and performance:

 

 

 

 

 

 

 

If you’re a Speccy fan, how can you resist?

 

 

 

Adam Taylor’s MicroZed Chronicles, Part 189: Zynq SoC XADC Sampling Modes

by Xilinx Employee ‎04-26-2017 11:04 AM - edited ‎04-26-2017 11:05 AM (1,847 Views)

 

By Adam Taylor

 

In some applications, we wish to maintain the phase relationship between sampled signals. The Zynq SoC’s XADC contains two ADCs, which we can operate simultaneously in lock step to maintain the phase relationship between two sampled signals. To do this, we use the sixteen auxiliary inputs, with channels 0-7 assigned to ADC A and channels 8-15 assigned to ADC B. In simultaneous mode, we can therefore perform conversions on channels 0 to 7 while, at the same time, performing conversions on channels 8 to 15.

 

 

Image1.jpg 

 

 

In simultaneous mode, we can also continue to sample the on-chip parameters; however, they are not sampled simultaneously. We are unable to perform automatic calibration in simultaneous mode, but we can switch to another mode to perform calibration when needed. This should be sufficient because, for most applications, calibration is generally performed only at device power-up.

 

To use simultaneous mode, we first need a hardware design in Vivado that breaks out the AUX0 and AUX8 channels. On the ZedBoard and MicroZed I/O Carrier Cards, these signals are broken out to the AMS connector, which allows me to connect signal sources to the AMS header to stimulate the I/O pins. For this example, I am using a Digilent Analog Discovery module as the signal source.

  

The hardware design within the Zynq for this example appears below:

 

 

Image2.jpg

 

 

Writing the software in SDK for simultaneous mode is very similar to the other modes of operation we have used in the past. The only major difference is that we need to make sure we have configured the simultaneous channels in the sequencer. Once this is done and we have configured the input format we want—bipolar or unipolar, averaging, etc.—we can start the sequencer using the XSM_SEQ_MODE_SIMUL mode definition.

 

When I ran this on the MicroZed set up as shown above and stored 64 samples from both the AUX0 and AUX8 inputs, using input signals offset by 180 degrees, I recovered the following waveform, which shows that the phase relationship is maintained:

 

 

Image3.jpg 
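To turn the stored samples into voltages, recall that each XADC result is 12 bits, left-justified in a 16-bit data register, and that the auxiliary inputs default to a unipolar 0-1V range. A minimal conversion helper (my own sketch, not part of the XADC driver) looks like this:

```c
#include <stdint.h>

/* Convert a raw 16-bit XADC data-register value to volts, assuming
 * the default unipolar 0-1V range of the auxiliary inputs. The 12-bit
 * conversion result sits in the upper bits of the register. */
static float xadc_aux_volts(uint16_t reg)
{
    uint16_t code = reg >> 4;           /* extract the 12-bit result */
    return code * (1.0f / 4096.0f);     /* scale: 1V full scale      */
}
```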

 

 

If we want, we can also use simultaneous-conversion mode with an external analog multiplexer. All we need to do is configure the design to use the external mux as we did previously. The difference this time is that we need two external analog multiplexers, because we must be able to select the two channels to convert simultaneously. Also, we need only three address bits to cover the 0-7 address range, as opposed to the four address bits we needed to address all sixteen analog inputs when we previously used sequencer mode. We use the lower three of the four available address bits.
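The addressing scheme is simple enough to capture in a line of C (a sketch of the concept, not driver code): the same 3-bit address drives both external muxes, selecting the input pair that lands on AUXn and AUXn+8.

```c
/* One 3-bit address, applied to both external analog multiplexers,
 * selects mux input n for ADC A (AUX n) and for ADC B (AUX n+8).
 * Masking with 0x7 reflects that only the lower three of the four
 * available address bits are used in this mode. */
static unsigned mux_address(unsigned pair)
{
    return pair & 0x7u;
}
```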

 

 

 

Image4.jpg

 

 

 

At this point, the only XADC mode we have not looked at is independent mode. This mode is like the XADC’s default (safe) mode; however, in independent mode ADC A monitors the internal on-chip parameters while ADC B samples the external inputs. Independent mode is intended to implement a monitoring function. As such, the alarms are active, so you can use this mode to implement security and anti-tamper features in your design.

 

 

Code is available on Github as always.

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E Book here
  • First Year Hardback here.

 

 

MicroZed Chronicles hardcopy.jpg 

  

 

  • Second Year E Book here
  • Second Year Hardback here

 

MicroZed Chronicles Second Year.jpg 

 

 

 

 

The 1-minute video appearing below shows two 56Gbps, PAM-4 demos from the recent OFC 2017 conference. The first demo shows a CEI-56G-MR (medium-reach, 50cm, chip-to-chip and low-loss backplane) connection between a Xilinx 56Gbps PAM-4 test chip communicating through a QSFP module over a cable to a Credo device. A second PAM-4 demo using CEI-56G-LR (long-reach, 100cm, backplane-style) interconnect shows a Xilinx 56Gbps PAM-4 test chip communicating over a Molex backplane to a Credo device, which is then communicating with a GlobalFoundries device over an FCI backplane, which is then communicating over a TE backplane back to the Xilinx device. This second demo illustrates the growing, multi-company ecosystem supporting PAM-4.
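The reason PAM-4 matters for these links is simple arithmetic: with four signal levels, each symbol carries two bits, so a 56Gbps lane runs at only the symbol rate of a 28Gbps NRZ lane. A one-line sketch:

```c
/* PAM-4 encodes 2 bits per symbol (4 amplitude levels), so the
 * symbol (baud) rate is half the bit rate. NRZ, with 2 levels,
 * carries 1 bit per symbol and needs twice the symbol rate. */
static double pam4_symbol_rate(double bits_per_second)
{
    return bits_per_second / 2.0;
}
```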

 

 

 

 

For more information about the Xilinx PAM-4 test chip, see “3 Eyes are Better than One for 56Gbps PAM4 Communications: Xilinx silicon goes 56Gbps for future Ethernet,” and “Got 90 seconds to see a 56Gbps demo with an instant 2x upgrade from 28G to 56G backplane? Good!”

 

 

Plethora IIoT develops cutting‑edge solutions to Industry 4.0 challenges using machine learning, machine vision, and sensor fusion. In the video below, a Plethora IIoT Oberon system monitors power consumption, temperature, and the angular speed of three positioning servomotors in real time on a large ETXE-TAR Machining Center for predictive maintenance—to spot anomalies with the machine tool and to schedule maintenance before these anomalies become full-blown faults that shut down the production line. (It’s really expensive when that happens.) The ETXE-TAR Machining Center is center-boring engine crankshafts. This bore is the critical link between a car’s engine and the rest of the drive train including the transmission.

 

 

 

Plethora IIoT Oberon System.jpg 

 

 

 

Plethora uses Xilinx Zynq SoCs and Zynq UltraScale+ MPSoCs as the heart of its Oberon system because these devices’ unique combination of software-programmable processors, hardware-programmable FPGA fabric, and programmable I/O allow the company to develop real-time systems that implement sensor fusion, machine vision, and machine learning in one device.

 

Initially, Plethora IIoT’s engineers used the Xilinx Vivado Design Suite to develop their Zynq-based designs. Then they discovered Vivado HLS, which allows you to take algorithms in C, C++, or SystemC directly to the FPGA fabric using hardware compilation. The engineers’ first reaction to Vivado HLS: “Is this real or what?” They discovered that it was real. Then they tried the SDSoC Development Environment with its system-level profiling, automated software acceleration using programmable logic, automated system connectivity generation, and libraries to speed programming. As they say in the video, “You just have to program it and there you go.”

 

Here’s the video:

 

 

 

 

Plethora IIoT is showcasing its Oberon system in the Industrial Internet Consortium (IIC) Pavilion during the Hannover Messe Show being held this week. Several other demos in the IIC Pavilion are also based on Zynq All Programmable devices.

 

 

Looking for a relatively painless overview of the current state of the art for high-speed Ethernet used in data centers and for telecom? You should take a look at this just-posted, 30-minute video of a panel discussion at OFC2017 titled “400GE from Hype to Reality.” The panel members included:

 

  • Mark Gustlin, Principal System Architect at Xilinx (the moderator)
  • Brad Booth, Microsoft Azure Networking
  • David Ofelt, Juniper Networks

 

Gustlin starts by discussing the history of 400GbE’s development, starting with a study group organized in 2013. Today, the 400GbE spec is at draft 3.1 and the plan is to produce a final standard by December 2017.

 

Booth answers a very simple question in his talk: “Yes, we will” use 400GbE in the data center. He then proceeds to give a fairly detailed description of the data centers and networking used to create Microsoft’s Azure cloud-computing platform.

 

Ofelt describes the genesis of the 400GbE standard. Prior to 400G, says Ofelt, system vendors worked with end users (primarily telecom companies) to develop faster Ethernet standards. Once a standard appeared, there would be a deployment ramp. Although 400GbE development started that way, the people building hyperscale data centers have largely taken over, and they want to deploy 400GbE at scale, ASAP.

 

Don’t be fooled by the title of this panel. There’s plenty of discussion about 25GbE through 100GbE and 200GbE as well, so if you need a quick update on high-speed Ethernet’s status, this 30-minute video is for you.

 

 

 

 

 

Digi-Key stocking Zynq-based Red Pitaya Open Instrumentation STEMLab starter kits and accessories

by Xilinx Employee ‎04-24-2017 03:07 PM - edited ‎04-24-2017 04:22 PM (1,273 Views)

 

I’ve written about the Zynq-based Red Pitaya several times in Xcell Daily. (See below.) Red Pitaya is a single-board, open instrumentation platform based on the Xilinx Zynq SoC, which combines a dual-core ARM Cortex-A9 MPCore processor with a heavy-duty set of peripherals and a chunk of Xilinx 7 series programmable logic. Red Pitaya packages its programmable instrumentation board with probes, power supply, and an enclosure and calls it the STEMlab. I’ve just discovered the STEMlab page on the Digi-Key site with inventory levels, so if you want to get into programmable instrumentation in a hurry, this is a good place to start.

 

The page lists three STEMlab starter kits:

 

 

 

27901 Red Pitaya Stemlab with accessories.jpg 

 

 

Red Pitaya 27901 STEMlab kit with scope and logic probes

 

 

 

For more articles about the Zynq-based Red Pitaya, see:

 

 

 

 

 

 

Adam Taylor’s MicroZed Chronicles, Part 188: Pmods!

by Xilinx Employee ‎04-24-2017 10:34 AM - edited ‎04-24-2017 10:43 AM (2,780 Views)

 

By Adam Taylor

 

Over the length of this series, we have looked at several different development boards. One thing common to many of these boards: they provide one or more Pmod (Peripheral module) connectors, which allow us to attach small peripherals and expand our prototype designs into final systems. We have not looked at Pmods in much detail, but they are an important aspect of many developments, so it would be remiss of me not to address them.

The Pmod standard itself was developed by Digilent as an open-source de facto standard to ensure wide adoption of this very useful interface. There’s a wide range of available Pmods, from D/A and A/D converters to GPS receivers and OLED displays.

 

Over the years, we have looked at several Zynq-based boards with at least one Pmod port. These boards provide Pmod ports connected to the Zynq SoC’s PL (programmable logic), to the PS (processing system), or to both. If a PS connection is used, we can use the Zynq SoC’s MIO to provide the interface. If the Pmod connects to the PL, we need to create our own interface to the Pmod device. Regardless of whether we use the PL or the PS, we need a software driver to interface with the device.

 

 

Image1.jpg

 

Various Zynq-based dev boards and their Pmod connections

 

 

 

That comment may initially suggest that we need to develop our own Pmod drivers from scratch, which would of course increase the time it takes to develop the application. For many Pmods, this is not the case: there is a wide range of existing drivers, for both the PL and the PS, that we can use within our designs.

 

The first thing we need to do is download the Digilent Vivado library. This library contains several Pmod drivers, DVI sinks and sources, and other very useful IP blocks that can accelerate our design.

 

Once you have downloaded the library, examine the file structure. You will notice multiple folders under the Pmods folder, each named for an available Pmod (e.g. Pmod_AD2, which is an ADC). Within each of these drivers, you will see a file structure like the one shown below:

 

 

Image2.jpg 

 

 

Within this structure, the folders contain:

 

  • Drivers – C source files for use in SDK. These files provide drivers and, in many cases, example applications.
  • Src – The HDL source for the IP module.
  • Utils – Contains the board interfaces, e.g. the outputs.
  • Xgui – Contains TCL files used to instantiate the IP modules.

 

The next step, if we wish to use these IP modules, is to include the directory as a repository within our Vivado project. We do this from the project settings: using the IP settings’ repository manager tab, we add a new repository pointing to the Digilent Vivado library we have just downloaded:

 

 

 

Image3.jpg

 

 

 

Once this is done, we should be able to see the Pmod IP cores within the Vivado IP Catalog. We can then use these IP cores in our design in the same way we use all other IP.

 

 

 

Image4.jpg

 

 

 

Once we have created our block diagram in Vivado, we can customize the Pmod IP blocks and select the Pmod Port they are connected to—assuming the board definition for the development board we are using supports that.

 

 

Image5.jpg

 

 

 

 

In the case below, which targets the new Digilent ARTY Z7 board, the AD2 Pmod is being connected to Pmod Port B:

 

 

Image6.jpg

 

 

 

If we are unable to find a driver for the Pmod we want to use, we can use the Pmod Bridge driver, which will enable us to create an interface to the desired Pmod with the correct pinout.

 

When it comes to software, all we need to do is import the files from the drivers/<Pmod_name>/src directory into our SDK project. Adding these files provides a range of drivers that we can use to interface with the Pmod’s PL instantiation and talk to the connected Pmod. If example code is available, we will find it under the drivers/<Pmod_name>/examples directory. When I ran the example code for the PmodAD2, it worked as expected:

 

 

Image7.jpg

 

 

This enables us to get our designs up and running even faster.
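The driver-usage pattern described above — initialize the driver instance against the IP core’s base address, then read and convert — can be sketched as below. The names here (MyPmodAdc_*) are hypothetical stand-ins, not the actual Digilent PmodAD2 API, and the hardware read is stubbed so the sketch is self-contained; a real driver would access the PL core at its base address, and the 3.3V reference is an assumption for illustration.

```c
#include <stdint.h>

/* Hypothetical Pmod ADC driver -- illustrative names only. */
typedef struct {
    uint32_t base_addr;   /* base address of the Pmod IP core in the PL */
} MyPmodAdc;

static void MyPmodAdc_Init(MyPmodAdc *inst, uint32_t base_addr)
{
    inst->base_addr = base_addr;  /* a real driver would also set up the core */
}

static uint16_t MyPmodAdc_ReadRaw(const MyPmodAdc *inst)
{
    (void)inst;
    return 2048;                  /* stub: a mid-scale 12-bit reading */
}

/* Scale a 12-bit reading to volts, assuming a 3.3V reference. */
static float MyPmodAdc_ToVolts(uint16_t raw)
{
    return (raw & 0x0FFFu) * 3.3f / 4096.0f;
}
```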

 

 

My code is available on Github as always.

 


 

 

Like any semiconductor device, designing with Xilinx All Programmable devices means dealing with their power-supply requirements—and like any FPGA or SoC, Xilinx devices do have their fair share of requirements in the power-supply department. They require several supply voltages, more or less depending on your I/O requirements, and they need to have these voltages ramped up and down in a certain sequence and with specific ramp rates if they’re to operate properly. On top of that, power-supply designs are board-specific—different for every unique pcb. Dealing with all of these supply specs is a challenging engineering problem, just due to the number of requirements, so you might like some help tackling it.

 

Here’s some help.

 

Infineon demonstrated a reference power supply design for Xilinx Zynq UltraScale+ MPSoCs based on its IRPS5401 Flexible Power Management Unit at APEC (the Applied Power Electronics Conference) last month. The reference design employs two IRPS5401 devices to manage and supply ten different power supplies. Here’s a block diagram of the reference design:

 

 

Infineon Zynq UltraScale Plus MPSoC Power Supply Reference Design.jpg

 

Infineon Power Supply Reference Design for the Zynq UltraScale+ MPSoC

 

 

This design is used on the Avnet UltraZed SOM, so you know that it’s already proven. (For more information about the Avnet UltraZed SOM, see “Avnet UltraZed-EG SOM based on 16nm Zynq UltraScale+ MPSoC: $599” and “Look! Up in the sky! Is it a board? Is it a kit? It’s… UltraZed! The Zynq UltraScale+ MPSoC Starter Kit from Avnet.”)

 

Now the UltraZed SOM measures only 2x3.5 inches (50.8x76.2mm) and the power supply consumes only a small fraction of the space on the SOM, so you know that the Infineon power supply design must be compact.

 

It needs to be.

 

Here’s a photo of the UltraZed SOM with the power supply section outlined in yellow:

 

 

 

Infineon Zynq UltraScale Plus MPSoC Power Supply.jpg 

 

Infineon Power Supply Design on the Avnet UltraZed SOM (outlined in yellow)

 

 

 

Even though this power supply design is clearly quite compact, the high integration level inside of Infineon’s IRPS5401 Flexible Power Management Unit means that you don’t need additional components to handle the power-supply sequencing or ramp rates. The IRPS5401s handle that for you.

 

However, every Zynq UltraScale+ MPSoC pcb is different because every pcb presents different loads, capacitances, and inductances to the power supply. So you will need to tailor the sequencing and ramp times for each board design. Sounds like a major pain, right?

 

Well, Infineon felt your pain and offers an antidote: an Infineon software app called PowIRCenter, designed to cut the work of developing the complex supply-voltage sequencing and ramp times down to perhaps 15 minutes—which is apparently how long it took an Avnet design engineer to set the timings for the UltraZed SOM.

 

Here’s a 4-minute video where Infineon’s Senior Product Marketing Manager Tony Ochoa walks you through the highlights of this power supply design and the company’s PowIRCenter software:

 

 

 

 

 

 

Just remember, the Infineon IRPS5401 Flexible Power Management Unit isn’t dedicated to the Zynq UltraScale+ MPSoC. You can use it to design power supplies for the full Xilinx device range.

 

 

 

Note: For more information about the IRPS5401 Flexible Power Management Unit, please contact Infineon directly.

 

intoPIX announces IP core support for 8K TICO video compression with <1msec end-to-end latency

by Xilinx Employee ‎04-21-2017 02:01 PM - edited ‎04-21-2017 02:16 PM (1,142 Views)

 

Today, intoPIX announced that its lightweight TICO video-compression IP cores for Xilinx FPGAs now support frame resolutions and rates up to 8K60p, in addition to the previously supported HD and 4K resolutions. Currently, the compression cores support 10-bit, 4:2:2 workflows, but intoPIX also disclosed in a published table (see below) that a future release of the IP core will support 4:4:4 color sampling. The TICO compression standard simplifies the management of live and broadcast video streams over existing video network infrastructures based on SDI and Ethernet by reducing the bandwidth requirements of high-definition and ultra-high-definition video, at compression ratios as large as 5:1 (visually lossless at ratios up to 4:1). TICO compression supports live video streams through its low latency—less than 1msec end-to-end.
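Some back-of-envelope arithmetic (my numbers, not intoPIX’s) shows why those ratios matter: a 4K60, 10-bit, 4:2:2 stream carries two 10-bit samples per pixel (one luma plus one alternating chroma), for a raw rate near 10Gbps, and a visually lossless 4:1 ratio brings that down to roughly 2.5Gbps.

```c
/* Raw bit rate of an uncompressed video stream. For 4:2:2 sampling,
 * samples_per_pixel is 2: one luma sample plus one (alternating
 * Cb/Cr) chroma sample per pixel. */
static double raw_video_bps(double width, double height, double fps,
                            double samples_per_pixel, double bits_per_sample)
{
    return width * height * fps * samples_per_pixel * bits_per_sample;
}
```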

 

Conveniently, intoPIX has published a comprehensive chart showing its various TICO compression IP cores and the Xilinx FPGAs that can support them. Here’s the intoPIX chart:

 

 

intoPIX TICO Compression Table for Xilinx FPGAs.jpg 

 

 

Note that the most cost-effective Xilinx FPGAs including the Spartan-6 and Artix-7 families support TICO compression at HD and even some UHD/4K video formats while the Kintex-7, Virtex-7, and UltraScale device families support all video formats through 8K.

 

Please contact intoPIX for more information about these IP cores.

 

 

 

Mentor Embedded now supports the Android OS (plus Linux and Nucleus) on Zynq UltraScale+ MPSoCs. You can learn more in a free Webinar titled “Android in Safety Critical Designs,” being held on May 3 and 4. The Webinar will discuss how to use Android in safety-critical designs based on the Xilinx Zynq UltraScale+ MPSoC. Register for the Webinar here.

 

Avnet’s MiniZed based on single-core Xilinx Zynq Z-7007S is “coming soon” to a Web page near you

by Xilinx Employee ‎04-20-2017 11:14 AM - edited ‎04-25-2017 04:20 PM (1,933 Views)

 

I got a heads up on a new, low-end dev board called the “MiniZed” coming soon from Avnet and found out there’s a pre-announcement Web page for the board. Avnet’s MiniZed is based on one of the new Zynq Z-7000S family members with one ARM Cortex-A9 processor. It will include both WiFi and Bluetooth RF transceivers and, according to the MiniZed Web page, will cost less than $100!

 

Here’s the link to the MiniZed Web page and here’s a slightly fuzzy MiniZed board photo:

 

 

Avnet MiniZed 2.jpg
 

 

Avnet MiniZed (coming soon, for less than $100)

 

 

If I’m not mistaken, that’s an Arduino header footprint and two Digilent Pmod headers on the board, which means that a lot of pretty cool shields and Pmods are already available for this board (minus the software drivers, at least for the Arduino shields).

 

 

I know you’ll want more information about the MiniZed board but I simply don’t have it. So please contact Avnet for more information or register for the info on the MiniZed Web page.

 

The Vivado Design Suite HLx Editions 2017.1 release is now available for download. The Vivado HL Design Edition and HL System Edition now support partial reconfiguration. Partial reconfiguration is available for the Vivado WebPACK Edition at a reduced price.

 

Xilinx partial reconfiguration technology allows you to swap FPGA-based functions in and out of your design on the fly, eliminating the need to fully reconfigure the FPGA and re-establish links. Partial reconfigurability gives you the ability to update feature sets in deployed systems, fix bugs, and migrate to new standards while critical functions remain active. This capability dramatically expands the flexible use of Xilinx All Programmable designs in a truly wide variety of applications.

 

For example, a detailed article published on the WeChat Web site by Siglent about the company’s new, entry-level SDS1000X-E DSO family—based on a Xilinx Zynq Z-7020 SoC—suggests that the new DSO family’s system design employs the Zynq SoC’s partial-reconfiguration capability to further reduce the parts count and the board footprint: “The PL section has 220 DSP slices and 4.9 Mb Block RAM; coupled with high throughput between the PS and PL data interfaces, we have the flexibility to configure different hardware resources for different digital signal processing.” (See “Siglent 200MHz, 1Gsample/sec SDS1000X-E Entry-Level DSO family with 14M sample points is based on Zynq SoC.”)

 

 

 

Siglent SDS1202X-E DSO.jpg
 

 

 

Siglent’s new, entry-level SDS1000X-E DSO family is based on a Xilinx Zynq Z-7020 SoC

 

 

 

In addition, the Vivado 2017.1 release includes support for the Xilinx Spartan-7 7S50 FPGA (Vivado WebPACK support will be in a later release). The Spartan-7 FPGAs are the lowest-cost devices in the 28nm Xilinx 7 series and they’re optimized for low, low cost per I/O while delivering terrific performance/watt. Compared to Xilinx Spartan-6 FPGAs, Spartan-7 FPGAs run at half the power consumption (for comparable designs) and with 30% more operating frequency. The Spartan-7 S50 FPGA is a mid-sized family member with 52,160 logic cells, 2.7Mbits of BRAM, 120 DSP slices, and 250 single-ended I/O pins. It’s a very capable FPGA. (For more information about the Spartan-7 FPGA family, see “Today, there are six new FPGAs in the Spartan-7 device family. Want to meet them?” and “Hot (and Cold) Stuff: New Spartan-7 1Q Commercial-Grade FPGAs go from -40 to +125°C!”)

 

 

Spartan-7 Family Table with 1Q devices.jpg 

 

Spartan-7 FPGA Family Table

 

 

 

 

 

As of today, Amazon Web Services (AWS) has made the FPGA-accelerated Amazon EC2 F1 compute instance generally available to all AWS customers. (See the new AWS video below and this Amazon blog post.) The Amazon EC2 F1 compute instance allows you to create custom hardware accelerators for your application using cloud-based server hardware that incorporates multiple Xilinx Virtex UltraScale+ VU9P FPGAs. Each Amazon EC2 F1 compute instance can include as many as eight FPGAs, so you can develop extremely large and capable, custom compute engines with this technology. According to the Amazon video, use of the FPGA-accelerated F1 instance can accelerate applications in diverse fields such as genomics research, financial analysis, video processing (in addition to security/cryptography and machine learning) by as much as 30x over general-purpose CPUs.

 

Access is provided through Amazon’s FPGA Developer AMI (an Amazon Machine Image within the Amazon Elastic Compute Cloud (EC2)) and the AWS Hardware Developer Kit (HDK) on Github. Once your FPGA-accelerated design is complete, you can register it as an Amazon FPGA Image (AFI) and deploy it to your F1 instance in just a few clicks. You can reuse and deploy your AFIs as many times, and across as many F1 instances, as you like, and you can list them in the AWS Marketplace.

 

The Amazon EC2 F1 compute instance reduces the time and cost needed to develop secure, FPGA-accelerated applications in the cloud, and general availability now puts that access within easy reach.

 

Here’s the new AWS video with the general-availability announcement:

 

 

 

 

 

The Amazon blog post announcing general availability lists several companies already using the Amazon EC2 F1 instance including:

 

  • Edico Genome: DRAGEN Bio-IP Platform
  • Ryft: Ryft Cloud accelerator for data analytics
  • Reconfigure.io: cloud-based, Go FPGA programming language
  • NGCodec: RealityCodec video encoder

 

 

 

 

 

 

AT&T recently announced the development of a one-of-a-kind 5G channel sounder—internally dubbed the “Porcupine” for obvious reasons—that can characterize a 5G transmission channel using 6000 angle-of-arrival measurements in 150msec, down from 15 minutes using conventional pan/tilt units. These channel measurements capture how wireless signals are affected in a given environment. For instance, channel measurements can show how objects such as trees, buildings, cars, and even people reflect or block 5G signals. The Porcupine allows measurement of 5G mmWave frequencies via drive testing, something that was simply not possible using other mmWave channel sounders. Engineers at AT&T used the mmWave Transceiver System and LabVIEW System Design Software including LabVIEW FPGA from National Instruments (NI) to develop this system.

 

 

 

AT&T Porcupine channel sounder.jpg

 

 

AT&T “Porcupine” 5G Channel Sounder

 

 

 

NI designed the mmWave Transceiver System as a modular, reconfigurable SDR platform for 5G R&D projects. This prototyping platform offers 2GHz of real-time bandwidth for evaluating mmWave transmission systems using NI’s modular transmit and receive radio heads in conjunction with the transceiver system’s modular PXIe processing chassis.

 

The key to this system’s modularity is NI’s 18-slot PXIe-1085 chassis, which accepts a long list of NI processing modules as well as ADC, DAC, and RF transceiver modules. NI’s mmWave Transceiver System uses the NI PXIe-7902 FPGA module—based on a Xilinx Virtex-7 485T—for real-time processing.

 

 

NI PXIe-7902 FPGA Module.jpg

 

 

NI PXIe-7902 FPGA module based on a Xilinx Virtex-7 485T

 

 

NI’s mmWave Transceiver System maps different mmWave processing tasks to multiple FPGAs in a software-configurable manner using the company’s LabVIEW System Design Software. NI’s LabVIEW relies on the Xilinx Vivado Design Suite for compiling the FPGA configurations. The FPGAs distributed in the NI mmWave Transceiver System provide the flexible, high-performance, low-latency processing required to quickly build and evaluate prototype 5G radio transceiver systems in the mmWave band—like AT&T’s Porcupine.

 

 

 

By Adam Taylor

 

Having introduced the Real-Time Clock (RTC) in the Xilinx Zynq UltraScale+ MPSoC, the next step is to write some simple software to set the time, get the time, and calibrate the RTC. Doing this is straightforward and aligns with how we use other peripherals in the Zynq MPSoC and Zynq-7000 SoC.

 

 

Image1.jpg

 

 

As with all Zynq peripherals, the first thing we need to do with the RTC is look up the configuration and then use it to initialize the peripheral device. Once we have the RTC initialized, we can configure and use it. We can use the functions provided in the xrtcpsu.h header file to initialize and use the RTC; all we need to do is correctly set up the driver instance and include that header file. If you want to examine the file’s contents, you will find it within the generated BSP for the MPSoC. Under the same directory, you will also find all the other header files needed for your design. Which files are available depends upon how you configured the MPSoC in Vivado (e.g. which peripherals are present in the design).

 

We need a driver instance to use the RTC within our software application. For the RTC, that instance is XRtcPsu, which holds essential information such as the device configuration, oscillator frequency, and calibration values. This instance is used in all interactions with the RTC via the functions in the xrtcpsu.h header file.

 

 

Image2.jpg

 

As I explained last week, the RTC counts seconds, so we need to convert dates to and from values in units of seconds. The xrtcpsu.h header file contains several functions to support these conversions. To support them, we use a C structure that holds the real date prior to conversion and loading into the RTC, or holds the resulting date following conversion from the seconds counter.

 

 

Image3.jpg

 

 

 

We can use the following functions to set or read the RTC (which I did in the code example available here):

 

  • XRtcPsu_GetCurrentTime – Gets the current time in seconds from the RTC
  • XRtcPsu_SecToDateTime – Converts the time in seconds to the date format contained within XRtcPSU_DT
  • XRtcPsu_DateTimeToSec – Converts the date in a format of XRtcPsu_DT into seconds
  • XRtcPsu_SetTime – Sets the RTC to the current time in seconds

 

By convention, the functions used to set the RTC seconds counter are based on a time epoch starting 1/1/2000. If we are going to use internet time, which by a completely different convention is often based on a 1/1/1970 epoch, we will need to convert from one format to the other. The functions provided for the RTC support only years between 2000 and 2099.
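As an illustration of that conversion (the offset below is simply the number of seconds from 1/1/1970 to 1/1/2000; the function names are mine, not part of the xrtcpsu.h API):

```c
#include <stdint.h>
#include <time.h>

/* Seconds from the Unix epoch (1/1/1970) to the RTC epoch (1/1/2000):
 * 30 years of 365 days plus 7 leap days, times 86400 seconds/day. */
#define RTC_EPOCH_OFFSET 946684800UL

static uint32_t unix_to_rtc_sec(time_t unix_sec)
{
    return (uint32_t)(unix_sec - (time_t)RTC_EPOCH_OFFSET);
}

static time_t rtc_to_unix_sec(uint32_t rtc_sec)
{
    return (time_t)rtc_sec + (time_t)RTC_EPOCH_OFFSET;
}
```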

 

In the example code, we use these functions to report the last set time before allowing the user to enter the time over a UART. Once the time has been set, the RTC is calibrated before being re-initialized. The RTC is then read once a second and the values are output over the UART, giving the image shown at the top of this blog. This output continues until the MPSoC is powered down.

 

To really exploit the capabilities provided by the RTC, we need to enable the interrupts. I will look at RTC interrupts in the Zynq MPSoC in the next issue of the MicroZed Chronicles, UltraZed Edition. Once we understand how interrupts work, we can look at the RTC alarms. I will also fit a battery to the UltraZed board to test its operation on battery power.

 

The register map with the RTC register details can be found here.

 

 

My code is available on Github as always.

 


 

Mega65 Logo.jpg

The MEGA65 is an open-source microcomputer modeled on the incredibly successful Commodore 64/65 circa 1982-1990. Ye olde Commodore 64 (C64)—introduced in 1982—was based on an 8-bit MOS Technology 6510 microprocessor, which was derived from the very popular 6502 processor that powered the Apple II, Atari 400/800, and many other 8-bit machines in the 1980s. The 6510 processor added an 8-bit parallel I/O port to the 6502, which no doubt dropped the microcomputer’s BOM cost a buck or two. According to Wikipedia, “The 6510 was only widely used in the Commodore 64 home computer and its variants.” Also according to Wikipedia, “For a substantial period (1983–1986), the C64 had between 30% and 40% share of the US market and two million units sold per year, outselling the IBM PC compatibles, Apple Inc. computers, and the Atari 8-bit family of computers.”

 

Now that is indeed a worthy computer to serve as a “Jurassic Park” candidate and therefore, the non-profit MEGA (Museum of Electronic Games & Art), “dedicated to the preservation of our digital heritage,” is supervising the physical recreation of the Commodore 64 microcomputer (mega65.org). It’s called the MEGA65 and it’s software-compatible with the original Commodore 64, only faster. (The 6510 processor emulation in the MEGA65 runs at 48MHz compared to the original MOS Technology 6510’s ~1MHz clock rate.) MEGA65 hardware designs and software are open-source (LGPL).

 

How do you go about recreating the hardware of a machine that’s been gone for 25 years? Fortunately, it’s a lot easier than extracting DNA from the stomach contents of ancient mosquitos trapped in amber. Considering that this blog is appearing in Xcell Daily on the Xilinx Web site, the answer’s pretty obvious: you use an FPGA. And that’s exactly what’s happening.

 

A few days ago, the MEGA65 team celebrated initial bringup of the MEGA65 pcb. You can read about the bringup efforts here and here is a photo of the pcb:

 

 

MEGA65 pcb.jpg 

 

The first MEGA65 PCB

 

 

 

The MEGA65 pcb is designed to fit into the existing Commodore 65 plastic case. (The Commodore 65 was prototyped but not put into production.)

 

Sort of gives a new meaning to “single-chip microcomputer,” does it not? That big chip in the middle of the board is a Xilinx Artix-7 A200T. It implements the Commodore 64’s entire motherboard in one programmable logic device. Yes, that includes the RAM. The Artix-7 A200T FPGA has 13.14Mbits of on-chip block RAM. That’s more than 1.5Mbytes of RAM, or 25x more RAM than the original Commodore 64 motherboard, which used eight 4164 64Kbit, 150nsec DRAMs for RAM storage. The video’s a bit improved too: from 160x200 pixels with a maximum of four colors per 4x8 character block, or 320x200 pixels with a maximum of two colors per 8x8 character block, to a more modern 1920x1200 pixels with 12-bit color (23-bit color is planned). Funny what 35 years of semiconductor evolution can produce.
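The RAM comparison above is easy to check with a little arithmetic. This back-of-the-envelope sketch (the helper names are invented for this example) converts the A200T's block RAM capacity to bytes and divides by the C64's 64 KB:

```c
#include <stdio.h>

/* 13.14 Mb of Artix-7 A200T block RAM, expressed in bytes */
long bram_bytes(void) { return 13140000L / 8; }

/* Eight 4164 DRAMs at 64 Kbits each: the C64's 64 KB of main RAM */
long c64_ram_bytes(void) { return (8L * 64 * 1024) / 8; }

/* Print the comparison: ~1.57 MB of block RAM, about 25x the C64 */
void print_ram_comparison(void) {
    printf("block RAM: %ld bytes (%.2f MB), %ldx the C64's %ld bytes\n",
           bram_bytes(), bram_bytes() / 1048576.0,
           bram_bytes() / c64_ram_bytes(), c64_ram_bytes());
}
```

That works out to 1,642,500 bytes of block RAM against the C64's 65,536 bytes, a ratio of just over 25:1, matching the figures in the text.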

 

What’s the project’s progress status? Here’s a snapshot from the MEGA65 site:

MEGA65 Progress.jpg

 

 

MEGA65 Project Status

And here’s a video of the MEGA65 in action:

Remember, what you see and hear is running on a Xilinx Artix-7 A200T, configured to emulate an entire Commodore 64 microcomputer. Most of the code in this video was written in the Jurassic period of microcomputer development. If you’re of a certain age, these old programs should bring a chuckle or perhaps just a smile to your lips.

 

 

Note: You’ll find a MEGA65 project log by Antti Lukats here.

Basic problem: When you’re fighting aliens to save the galaxy while wearing your VR headset, having a wired tether pumping the video to your headset is really going to crimp your style. Spinning around to blast that battle droid sneaking up on you from behind is just as likely to garrote you as to save your neck. What to do? How will you successfully save the galaxy?

 

Well, NGCodec and Celeno Communications have a demo for you in the NGCodec booth (N2635SP-A) at NAB in the Las Vegas Convention Center next week. Combine NGCodec’s low-latency H.265/HEVC “RealityCodec” video coder/decoder IP with Celeno’s 5GHz 802.11ac WiFi connection and you have a high-definition (2160x1200), high-frame-rate (90 frames/sec) wireless video connection over a 15Mbps wireless channel. This demo uses a 250:1 video compression setting to fit the video into the 15Mbps channel.

 

In the demo, a RealityCodec hardware instance in a Xilinx Virtex UltraScale+ VU9P FPGA on a PCIe board plugged into a PC running Windows 10 compresses generated video in real time. The PC sends the compressed, 15Mbps video stream to a Celeno 802.11ac WiFi radio, which transmits the video over a standard 5GHz 802.11ac WiFi connection. Another Celeno WiFi radio receives the compressed video stream and sends it to a second RealityCodec for decoding. The decoder hardware is instantiated in a relatively small Xilinx Kintex-7 325T FPGA. The decoded video stream feeding the VR goggles requires 6Gbps of bandwidth, which is why you want to compress it for RF transmission.
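The bandwidth numbers above are worth a quick sanity check. Assuming 24 bits per pixel (an assumption; the demo's exact pixel format isn't stated), a 2160x1200 stream at 90 frames/sec works out to roughly 5.6 Gbps uncompressed, in line with the ~6 Gbps figure quoted. These hypothetical helpers do the arithmetic:

```c
/* Uncompressed bitrate of a w x h video stream at a given frame
 * rate and bit depth. (Ignores blanking and link-layer overhead.) */
long long raw_video_bps(int w, int h, int fps, int bpp) {
    return (long long)w * h * fps * bpp;
}

/* Compression ratio needed to fit that stream into a channel */
double needed_ratio(long long raw_bps, double channel_bps) {
    return (double)raw_bps / channel_bps;
}
```

`raw_video_bps(2160, 1200, 90, 24)` gives 5,598,720,000 bits/sec, and squeezing that into the 15 Mbps WiFi channel implies a ratio of about 373:1 at this assumed bit depth. (The 250:1 figure in the text is the codec setting quoted for the demo; the effective ratio depends on the actual source format and overheads.)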

 

Of course, if you’re going to polish off the aliens quickly, you really need that low compression latency. Otherwise, you’re dead meat and the galaxy’s lost. A bad day all around.

 

Here’s a block diagram of the NAB demo:

 

 

NGCodec Wireless VR Demo for NAB.jpg 

You are never going to get past a certain performance barrier by compiling C for a software-programmable processor. At some point, you need hardware acceleration.

 

As an analogy: You can soup up a car all you want; it’ll never be an airplane.

 

Sure, you can bump the processor clock rate. You can add processor cores and distribute the tasks. Both of these approaches increase power consumption, so you’ll need a bigger and more expensive power supply; they increase heat generation, which means you will need better cooling and probably a bigger heat sink or a fan (or another fan); and all of these things increase BOM costs.

 

Are you sure you want to take that path? Really?

 

OK, you say. This blog’s from an FPGA company (actually, Xilinx is an “All Programmable” company), so you’ll no doubt counsel me to use an FPGA to accelerate these tasks, and I don’t want to code in Verilog or VHDL, thank you very much.

 

Not a problem. You don’t need to.

 

You can get the benefit of hardware acceleration while coding in C or C++ using the Xilinx SDSoC development environment. SDSoC automatically produces compiled software coupled to hardware accelerators, all generated directly from your high-level C or C++ code.
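To make that concrete, here's the kind of plain C you'd hand to such a flow: a Sobel edge-magnitude kernel over an 8-bit grayscale image, written as the regular loop nest that high-level synthesis tools pipeline well. This is an illustrative sketch only, not the actual library implementation from the reVISION stack:

```c
#include <stdint.h>
#include <stdlib.h>

/* Sobel edge magnitude over an 8-bit grayscale image. Border pixels
 * are left untouched; magnitude uses the common |Gx|+|Gy| shortcut
 * and saturates at 255. A regular, bounds-known loop nest like this
 * is exactly what HLS can turn into a pipelined accelerator. */
void sobel(const uint8_t *in, uint8_t *out, int w, int h) {
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            int gx = -in[(y-1)*w + x-1] +   in[(y-1)*w + x+1]
                   - 2*in[ y   *w + x-1] + 2*in[ y   *w + x+1]
                   -   in[(y+1)*w + x-1] +   in[(y+1)*w + x+1];
            int gy = -in[(y-1)*w + x-1] - 2*in[(y-1)*w + x] - in[(y-1)*w + x+1]
                   +   in[(y+1)*w + x-1] + 2*in[(y+1)*w + x] + in[(y+1)*w + x+1];
            int mag = abs(gx) + abs(gy);
            out[y*w + x] = mag > 255 ? 255 : (uint8_t)mag;
        }
    }
}
```

Fed an image with a hard vertical step from black to white, this produces a saturated (255) response along the edge and zero in the flat regions on either side.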

 

That’s the subject of a new Chalk Talk video just posted on the eejournal.com Web site. Here’s one image from the talk:

 

 

SDSoC Acceleration Results.jpg

 

 

This image shows three complex embedded tasks and the improvements achieved with hardware acceleration:

 

 

  • 2-camera, 3D disparity mapping – 292x speed improvement

 

  • Sobel filter video processing – 30x speed improvement

 

  • Binary neural network – 1000x speed improvement

 

 

A beefier software processor or multiple processor cores will not get you 1000x more performance—or even 30x—no matter how you tweak your HLL code, and software coders will sweat bullets just to get a few percentage points of improvement. For such big performance leaps, you need hardware.

 

Here’s the 14-minute Chalk Talk video:

What do you do if you want to build a low-cost, state-of-the-art experimental SDR (software-defined radio) that’s compatible with GNU Radio, the open-source development toolkit and ecosystem of choice for serious SDR research? You might want to do what Lukas Lao Beyer did: start with the incredibly flexible, full-duplex Analog Devices AD9364 1x1 Agile RF Transceiver IC and then give it all the processing power it might need with an Artix-7 A50T FPGA. Connect these two devices on a meticulously laid-out circuit board, taking all RF design rules into account, and then write the appropriate drivers to fit into the GNU Radio ecosystem.

 

Sounds like a lot of work, doesn’t it? It’s taken Lukas two years and four major design revisions to get to this point.

 

Well, you can circumvent all that work and get to the SDR research by signing up for a copy of Lukas’ FreeSRP board on the Crowd Supply crowd-funding site. The cost for one FreeSRP board and the required USB 3.0 cable is $420.

 

 

FreeSRP Board.jpg

 

Lukas Lao Beyer’s FreeSRP SDR board based on a Xilinx Artix-7 A50T FPGA

With 32 days left in the Crowd Supply funding campaign period, the project has raised pledges of a little more than $12,000. That’s about 16% of the way towards the goal.

 

There are a lot of well-known SDR boards available, so conveniently, the FreeSRP Crowd Supply page provides a comparison chart:

 

 

FreeSRP Comparison Chart.jpg 

 

 

If you really want to build your own, the documentation page is here. But if you want to start working with SDR, sign up and take delivery of a FreeSRP board this summer.

 

 

About the Author

Steve Leibson is the Director of Strategic Marketing and Business Planning at Xilinx. He started as a system design engineer at HP in the early days of desktop computing, then switched to EDA at Cadnetix, and subsequently became a technical editor for EDN Magazine. He's served as Editor in Chief of EDN Magazine, Embedded Developers Journal, and Microprocessor Report. He has extensive experience in computing, microprocessors, microcontrollers, embedded systems design, design IP, EDA, and programmable logic.