
I did not go to Embedded World in Nuremberg this week, but SemiWiki’s Bernard Murphy was there, and he has published his observations about three Zynq-based reference designs that he saw running in Aldec’s booth on the company’s Zynq-based TySOM embedded development and prototyping boards.

 

 


 

Aldec TySOM-2 Embedded Prototyping Board

 

 

 

Murphy published this article titled “Aldec Swings for the Fences” on SemiWiki and wrote:

 

 

“At the show, Aldec provided insight into using the solution to model the ARM core running in QEMU, together with a MIPI CSI-2 solution running in the FPGA. But Aldec didn’t stop there. They also showed off three reference designs designed using this flow and built on their TySOM boards.

 

“The first reference design targets multi-camera surround view for ADAS (automotive – advanced driver assistance systems). Camera inputs come from four First Sensor Blue Eagle systems, which must be processed simultaneously in real-time. A lot of this is handled in software running on the Zynq ARM cores but the computationally-intensive work, including edge detection, colorspace conversion and frame-merging, is handled in the FPGA. ADAS is one of the hottest areas in the market and likely to get hotter since Intel just acquired Mobileye.

 

“The next reference design targets IoT gateways – also hot. Cloud interface, through protocols like MQTT, is handled by the processors. The gateway supports connection to edge devices using wireless and wired protocols including Bluetooth, ZigBee, Wi-Fi and USB.

 

“Face detection for building security, device access and identifying evil-doers is also growing fast. The third reference design is targeted at this application, using similar capabilities to those on the ADAS board, but here managing real-time streaming video as 1280x720 at 30 frames per second, from an HDR-CMOS image sensor.”

 

The article contains a photo of the Aldec TySOM-2 Embedded Prototyping Board, which is based on a Xilinx Zynq Z-7045 SoC. According to Murphy, Aldec developed the reference designs using its own and other design tools including the Aldec Riviera-PRO simulator and QEMU. (For more information about the Zynq-specific QEMU processor emulator, see “The Xilinx version of QEMU handles ARM Cortex-A53, Cortex-R5, Cortex-A9, and MicroBlaze.”)

 

Then Murphy wrote this:

 

“So yes, Aldec put together a solution combining their simulator with QEMU emulation and perhaps that wouldn’t justify a technical paper in DVCon. But business-wise they look like they are starting on a much bigger path. They’re enabling FPGA-based system prototype and build in some of the hottest areas in systems today and they make these solutions affordable for design teams with much more constrained budgets than are available to the leaders in these fields.”

 

 

EETimes’ Junko Yoshida with some expert help analyzes this week’s Xilinx reVISION announcement

by Xilinx Employee 03-15-2017 01:25 PM - edited 03-22-2017 07:20 AM

 

This week, EETimes’ Junko Yoshida published an article titled “Xilinx AI Engine Steers New Course” that gathers some comments from industry experts and from Xilinx with respect to Monday’s reVISION stack announcement. To recap, the Xilinx reVISION stack is a comprehensive suite of industry-standard resources for developing advanced embedded-vision systems based on machine learning and machine inference.

 

(See “Xilinx reVISION stack pushes machine learning for vision-guided applications all the way to the edge.”)

 

As Xilinx Senior Vice President of Corporate Strategy Steve Glaser tells Yoshida, Xilinx designed the stack to “enable a much broader set of software and systems engineers, with little or no hardware design expertise, to develop intelligent vision-guided systems easier and faster.”

 

Yoshida continues:

 

While talking to customers who have already begun developing machine-learning technologies, Xilinx identified ‘8 bit and below fixed point precision’ as the key to significantly improve efficiency in machine-learning inference systems.

 

 

Yoshida also interviewed Karl Freund, Senior Analyst for HPC and Deep Learning at Moor Insights & Strategy, who said:

 

“Artificial Intelligence remains in its infancy, and rapid change is the only constant.” In this circumstance, Xilinx seeks “to ease the programming burden to enable designers to accelerate their applications as they experiment and deploy the best solutions as rapidly as possible in a highly competitive industry.”

 

 

She also quotes Loring Wirbel, a Senior Analyst at The Linley Group, who said:

 

What’s interesting in Xilinx's software offering, [is that] this builds upon the original stack for cloud-based unsupervised inference, Reconfigurable Acceleration Stack, and expands inference capabilities to the network edge and embedded applications. One might say they took a backward approach versus the rest of the industry. But I see machine-learning product developers going a variety of directions in trained and inference subsystems. At this point, there's no right way or wrong way.

 

 

There’s a lot more information in the EETimes article, so you might want to take a look for yourself.

 

 

 

 

As part of today’s reVISION announcement of a new, comprehensive development stack for embedded-vision applications, Xilinx has produced a 3-minute video showing you just some of the things the new stack makes possible.

 

Here it is:

 

 

Adam Taylor’s MicroZed Chronicles, Part 177: Introducing the reVISION stack

by Xilinx Employee 03-13-2017 10:39 AM - edited 03-22-2017 07:19 AM

 

By Adam Taylor

 

Several times in this series, we have looked at image processing using the Avnet EVK and the ZedBoard. Along with the basics, we have examined object tracking using OpenCV running on the Zynq SoC’s or Zynq UltraScale+ MPSoC’s PS (processing system) and using HLS with its video library to generate image-processing algorithms for the Zynq SoC’s or Zynq UltraScale+ MPSoC’s PL (programmable logic, see blogs 140 to 148 here).

 

Xilinx’s reVISION is an embedded-vision development stack that provides support for a wide range of frameworks and libraries often used for embedded-vision applications. Most exciting, from my point of view, is that the stack includes acceleration-ready OpenCV functions.

 


 

 

The stack itself is split into three layers. Once we select or define our platform, we will be mostly working at the application and algorithm layers. Let’s take a quick look at the layers of the stack:

 

  1. Platform layer: This is the lowest level of the stack and is the one on which the remaining stack layers are built. This layer includes platform definitions of the hardware and the software environment. Should we choose not to use a predefined platform, we can generate a custom platform using Vivado.

 

  2. Algorithm layer: Here we create our application using SDSoC and the platform definition for the target hardware. It is within this layer that we can use the acceleration-ready OpenCV functions along with predefined and optimized implementations for Convolutional Neural Network (CNN) developments such as inference accelerators within the PL.

 

  3. Application development layer: The highest layer of the stack. Development here is where high-level frameworks such as Caffe and OpenVX are used to complete the application.

 

As I mentioned above, one of the most exciting aspects of the reVISION stack is the ability to accelerate a wide range of OpenCV functions using the Zynq SoC’s or Zynq UltraScale+ MPSoC’s PL. We can group the OpenCV functions that can be hardware-accelerated in the PL into four categories:

 

  1. Computation – Includes functions such as absolute difference between two frames, pixel-wise operations (addition, subtraction, and multiplication), gradient, and integral operations.
  2. Input Processing – Supports bit-depth conversions, channel operations, histogram equalization, remapping, and resizing.
  3. Filtering – Supports a wide range of filters including Sobel, custom-convolution, and Gaussian filters.
  4. Other – Provides a wide range of functions including Canny edge detection, FAST/Harris feature detection, thresholding, SVM, HoG, LK optical flow, and histogram computation.

 

What is very interesting about these function calls is that we can optimize them for resource usage or performance within the PL. The main optimization method is specifying the number of pixels to be processed during each clock cycle. For most accelerated functions, we can choose to process either one or eight pixels. Processing more pixels per clock cycle reduces latency but increases resource utilization. Processing one pixel per clock minimizes the resource requirements at the cost of increased latency. We control the number of pixels processed per clock via the function call.
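To make the pixels-per-clock trade-off concrete, here is a minimal behavioral sketch in C++ (my illustration, not the actual reVISION library API): an absolute-difference kernel templated on NPC, the number of pixels handled per clock. In hardware, the inner loop unrolls into NPC parallel units, so NPC = 8 cuts the iteration count (and therefore latency) by 8x while consuming roughly 8x the logic of the NPC = 1 version.

    #include <cstdint>
    #include <cstdlib>

    // Behavioral model of a kernel parameterized by pixels-per-clock.
    // NPC is a hypothetical template parameter used for illustration.
    // Assumes n_pixels is a multiple of NPC.
    template <int NPC>
    void absdiff(const uint8_t *a, const uint8_t *b, uint8_t *out, int n_pixels)
    {
        for (int i = 0; i < n_pixels; i += NPC) {   // n_pixels/NPC "clock" iterations
            for (int p = 0; p < NPC; ++p) {         // unrolls into NPC parallel units in HW
                out[i + p] = static_cast<uint8_t>(std::abs(a[i + p] - b[i + p]));
            }
        }
    }

    // absdiff<1>(...) minimizes PL resources; absdiff<8>(...) minimizes latency.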

 

Over the next few blogs, we will look more at the reVISION stack and how we can use it. However, in the best Blue Peter tradition, the image below shows the result of running a reVISION Harris OpenCV acceleration function within the PL.

 

 


 

 

Accelerated Harris Corner Detection in the PL

 

 

 

 

Code is available on GitHub, as always.

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E-Book here
  • First Year Hardback here

 

 


 

 

 

  • Second Year E-Book here
  • Second Year Hardback here

 

 


 

Xilinx reVISION stack pushes machine learning for vision-guided applications all the way to the edge

by Xilinx Employee 03-13-2017 07:37 AM - edited 03-22-2017 07:19 AM

 

Today, Xilinx announced a comprehensive suite of industry-standard resources for developing advanced embedded-vision systems based on machine learning and machine inference. It’s called the reVISION stack and it allows design teams without deep hardware expertise to use a software-defined development flow to combine efficient machine-learning and computer-vision algorithms with Xilinx All Programmable devices to create highly responsive systems. (Details here.)

 

The Xilinx reVISION stack includes a broad range of development resources for platform, algorithm, and application development including support for the most popular neural networks: AlexNet, GoogLeNet, SqueezeNet, SSD, and FCN. Additionally, the stack provides library elements such as pre-defined and optimized implementations of CNN layers, which are required to build custom neural networks (DNNs and CNNs). The machine-learning elements are complemented by a broad set of acceleration-ready OpenCV functions for computer-vision processing.

 

For application-level development, Xilinx supports industry-standard frameworks including Caffe for machine learning and OpenVX for computer vision. The reVISION stack also includes development platforms from Xilinx and third parties, which support various sensor types.

 

The reVISION development flow starts with a familiar, Eclipse-based development environment; the C, C++, and/or OpenCL programming languages; and associated compilers all incorporated into the Xilinx SDSoC development environment. You can now target reVISION hardware platforms within the SDSoC environment, drawing from a pool of acceleration-ready, computer-vision libraries to quickly build your application. Soon, you’ll also be able to use the Khronos Group’s OpenVX framework as well.

 

For machine learning, you can use popular frameworks including Caffe to train neural networks. Within one Xilinx Zynq SoC or Zynq UltraScale+ MPSoC, you can use Caffe-generated .prototxt files to configure a software scheduler running on one of the device’s ARM processors to drive CNN inference accelerators—pre-optimized for and instantiated in programmable logic. For computer vision and other algorithms, you can profile your code, identify bottlenecks, and then designate specific functions that need to be hardware-accelerated. The Xilinx system-optimizing compiler then creates an accelerated implementation of your code, automatically including the required processor/accelerator interfaces (data movers) and software drivers.
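As a rough sketch of what “designating a function for acceleration” looks like at the source level (the pragma form follows SDSoC documentation, but details vary by tool version, and the function name here is hypothetical), you describe how data crosses the processor/PL boundary and then mark the function for hardware; the compiler generates the accelerator, data movers, and drivers:

    // Candidate function for hardware acceleration in an SDSoC-style flow.
    // The pragma tells the system compiler that both arrays are accessed
    // sequentially, letting it infer simple streaming data movers.
    #pragma SDS data access_pattern(src:SEQUENTIAL, dst:SEQUENTIAL)
    void edge_filter(const unsigned char src[1920*1080],
                     unsigned char dst[1920*1080]);

Selecting edge_filter for hardware (in the SDSoC GUI or on the sds++ command line) moves it into the PL; callers need no source changes.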

 

The Xilinx reVISION stack is the latest in an evolutionary line of development tools for creating embedded-vision systems. Xilinx All Programmable devices have long been used to develop such vision-based systems because these devices can interface to any image sensor and connect to any network—which Xilinx calls any-to-any connectivity—and they provide the large amounts of high-performance processing horsepower that vision systems require.

 

Initially, embedded-vision developers used the existing Xilinx Verilog and VHDL tools to develop these systems. Xilinx introduced the SDSoC development environment for HLL-based design two years ago and, since then, SDSoC has dramatically and successfully shortened development cycles for thousands of design teams. Xilinx’s new reVISION stack now enables an even broader set of software and systems engineers to develop intelligent, highly responsive embedded-vision systems faster and more easily using Xilinx All Programmable devices.

 

And what about the performance of the resulting embedded-vision systems? How do their performance metrics compare against systems based on embedded GPUs or the typical SoCs used in these applications? Xilinx-based systems significantly outperform the best of this group, which employ Nvidia devices. Benchmarks of the reVISION flow using Zynq SoC targets against the Nvidia Tegra X1 have shown as much as:

 

  • 6x better images/sec/watt in machine learning
  • 42x higher frames/sec/watt for computer-vision processing
  • 1/5th the latency, which is critical for real-time applications

 


 

There is huge value to having a very rapid and deterministic system-response time and, for many systems, the faster response time of a design that's been accelerated using programmable logic can mean the difference between success and catastrophic failure. For example, the figure below shows the difference in response time between a car’s vision-guided braking system created with the Xilinx reVISION stack running on a Zynq UltraScale+ MPSoC and a similar system based on an Nvidia Tegra device. At 65mph, the Xilinx embedded-vision system’s faster response time stops the vehicle 5 to 33 feet sooner, depending on how the Nvidia-based system is implemented. Five to 33 feet could easily mean the difference between a safe stop and a collision.

 

 

Vision-guided braking response times: reVISION-based Zynq UltraScale+ MPSoC system vs. Nvidia Tegra-based system

 

(Note: This example appears in the new Xilinx reVISION backgrounder.)
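A quick back-of-the-envelope check (my arithmetic, not from the backgrounder) shows what those distances imply about latency. At 65mph, the vehicle travels

    v = 65 \,\text{mph} \times \frac{5280 \,\text{ft/mile}}{3600 \,\text{s/hour}} \approx 95.3 \,\text{ft/s}

so the quoted 5-to-33-foot advantage corresponds to a response-time difference of roughly

    \Delta t = \frac{5 \,\text{ft}}{95.3 \,\text{ft/s}} \approx 52 \,\text{ms} \quad \text{to} \quad \frac{33 \,\text{ft}}{95.3 \,\text{ft/s}} \approx 346 \,\text{ms}

In other words, the accelerated design shaves somewhere between about 50 and 350 milliseconds off the end-to-end sense-to-brake response time.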

 

 

The last two years have generated more machine-learning technology than all of the advancements over the previous 45 years and that pace isn't slowing down. Many new types of neural networks for vision-guided systems have emerged along with new techniques that make deployment of these neural networks much more efficient. No matter what you develop today or implement tomorrow, the hardware and I/O reconfigurability and software programmability of Xilinx All Programmable devices can “future-proof” your designs whether it’s to permit the implementation of new algorithms in existing hardware; to interface to new, improved sensing technology; or to add an all-new sensor type (like LIDAR or Time-of-Flight sensors, for example) to improve a vision-based system’s safety and reliability through advanced sensor fusion.

 

Xilinx is pushing even further into vision-guided, machine-learning applications with the new Xilinx reVISION Stack and this announcement complements the recently announced Reconfigurable Acceleration Stack for cloud-based systems. (See “Xilinx Reconfigurable Acceleration Stack speeds programming of machine learning, data analytics, video-streaming apps.”) Together, these new development resources significantly broaden your ability to deploy machine-learning applications using Xilinx technology—from inside the cloud to the very edge.

 

 

You might also want to read “Xilinx AI Engine Steers New Course” by Junko Yoshida on the EETimes.com site.

 

 

 

On Thursday, March 30, two member companies of the IIC (Industrial Internet Consortium)—Cisco and Xilinx—are presenting a free, 1-hour Webinar titled “How the IIoT (Industrial Internet of Things) Makes Critical Data Available When & Where it is Needed.” The discussion will cover machine learning and how self-optimization plays a pivotal role in enhancing factory intelligence. Other IIoT topics covered in the Webinar include TSN (time-sensitive networking), real-time control, and high-performance node synchronization. The Webinar will be presented by Paul Didier, the Manufacturing Solution Architect for the IoT SW Group at Cisco Systems, and Dan Isaacs, Director of Connected Systems at Xilinx.

 

Register here.

 

 

Last month, the European AXIOM Project took delivery of its first board based on a Xilinx Zynq UltraScale+ ZU9EG MPSoC. (See “The AXIOM Board has arrived!”) The AXIOM project (Agile, eXtensible, fast I/O Module) aims at researching new software/hardware architectures for Cyber-Physical Systems (CPS).

 

 


 

 

AXIOM Project Board based on Xilinx Zynq UltraScale+ MPSoC

 

 

 

The board presents the pinout of an Arduino Uno, so you can attach an Arduino Uno-compatible shield to the board. The Arduino UNO pinout exposes the FPGA I/O pins in a user-friendly manner and enables fast prototyping.

 

Here are the board specs:

 

  • Wide boot capabilities: eMMC, Micro SD, JTAG
  • Heterogeneous 64-bit ARM + FPGA processor: Xilinx Zynq UltraScale+ ZU9EG MPSoC
    • 64-bit Quad core A53 @ 1.2GHz
    • 32-bit Dual core R5 @ 500MHz
    • DDR4 @ 2400MT/s
    • Mali-400 GPU @ 600MHz
    • 600K System Logic Cells
  • Swappable SO-DIMM RAM (up to 32Gbytes) for the Processing System, plus a soldered 1Gbyte RAM for Programmable Logic
  • 12 GTH transceivers @ 12.5 Gbps (8 on USB Type C connectors + 4 on HS connector)
  • Easy rapid prototyping, because of the Arduino UNO pinout

 

You can see the AXIOM board for the first time during next week’s Embedded World 2017 at the SECO UDOO booth, the SECO booth, and the EVIDENCE booth.

 

Please contact the AXIOM Project for more information.

 

 

 

 

With a month left in the Indiegogo funding period, the MATRIX Voice open-source voice platform campaign stands at 289% of its modest $5000 funding goal. MATRIX Voice is the third crowdfunding project by MATRIX Labs, based in Miami, Florida. The MATRIX Voice platform is a 3.14-inch circular circuit board capable of continuous voice recognition and compatible with the latest cloud-based voice and cognitive services including Microsoft Cognitive Services, Amazon Alexa Voice Service, Google Speech API, Wit.ai, and Houndify. The MATRIX Voice board, based on a Xilinx Spartan-6 LX4 FPGA, is designed to plug directly onto a low-cost Raspberry Pi single-board computer or it can be operated as a standalone board. You can get one of these boards, due to be shipped in May, for as little as $45—if you’re quick. (Already, 61 of the 230 early-bird special-price boards are pledged.)

 

Here’s a photo of the MATRIX Voice board:

 

 


 

 

This image of the top of the MATRIX Voice board shows the locations for the seven rear-mounted MEMS microphones, seven RGB LEDs, and the Spartan-6 FPGA. The bottom of the board includes a 64Mbit SDRAM and a connector for the Raspberry Pi board.

 

Because this is the latest in a series of developer boards from MATRIX Labs (see last year’s project: “$99 FPGA-based Vision and Sensor Hub Dev Board for Raspberry Pi on Indiegogo—but only for the next two days!”), there’s already a sophisticated, layered software stack for the MATRIX Voice platform that includes a HAL (Hardware Abstraction Layer) with the FPGA code and C++ library, an intermediate layer with a streaming interface for the sensors and vision libraries (for the Raspberry Pi camera), and a top layer with the MATRIX OS and high-level APIs. Here’s a diagram of the software stack:

 

 

The MATRIX Voice software stack

 

And now, who better to describe this project than the originators:


An article titled “Making Factories Smarter Through Machine Learning” in the new January 2017 issue of the IIC’s (Industrial Internet Consortium’s) Journal of Innovation discusses the networked use of SoC-e’s CPPS-Gate40 intelligent IIoT gateway to help a car-parts manufacturer keep the CNC machines on its production lines up and running through predictive maintenance directed by machine-learning algorithms. These algorithms use real-time operational data taken directly from sensors on the CNC machines to identify and learn normal behavior patterns during the machining process. When variances signaling an imminent failure occur, systems can be shut down gracefully and maintained or repaired before the failure becomes truly catastrophic (and really, really expensive, thanks to any uncontrolled release of the kinetic energy stored as angular momentum in an operating CNC machine).

 

Catastrophic CNC machine failures can shut down a production line, causing losses worth hundreds of thousands of dollars (or more) in physical damage to tools and to work in process, in addition to the costs associated with lost production time. In one example cited in the article, a bearing in a CNC machine started to fail, as indicated by a large vibration spike. At that point, only the bearing needed replacement. Four days later, the bearing failed catastrophically, damaging nearby parts and idling the production line for three shifts. There was plenty of warning (see image below), and preventative maintenance at the first indication of a problem would have minimized the cost of this single failed bearing.

 

 

Vibration data gave plenty of warning in the days before the catastrophic CNC bearing failure

 

Unfortunately, the data predicting the failure had been captured but not analyzed until afterwards because there was no real-time data collection-and-analysis system in place. What a needless waste.

 

The network based on SoC-e’s CPPS-Gate40 intelligent IIoT gateway discussed in this IIC Journal of Innovation article is designed to collect and analyze real-time operational information from the CNC machines including operating temperature and vibration data. The system performs significant data reduction at the gateway to minimize the amount of data feeding the machine-learning algorithms. For example, FFT processing shrinks blocks of time-domain vibration data down to just a frequency and an amplitude. Temperature data varies more slowly, so it is sampled at a much lower rate—variable-rate collection and fusion for different sensor data is another significant feature of this system. The full system then trains on the data collected by the networked IIoT gateways.
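Here’s a minimal sketch of that reduction step (my illustration, not SoC-e’s implementation): a block of time-domain vibration samples collapses to a single (frequency, amplitude) pair by picking the dominant bin of a discrete Fourier transform.

    #include <cmath>
    #include <cstddef>
    #include <utility>
    #include <vector>

    // Reduce n time-domain samples (sampled at fs Hz) to the dominant
    // frequency and its approximate sine amplitude. A direct O(n^2) DFT
    // keeps the example short; a real gateway would use an FFT.
    std::pair<double, double>
    dominant_tone(const std::vector<double> &x, double fs)
    {
        const double pi = std::acos(-1.0);
        const std::size_t n = x.size();
        std::size_t best_k = 0;
        double best_mag = 0.0;
        for (std::size_t k = 1; k < n / 2; ++k) {       // skip DC, stop below Nyquist
            double re = 0.0, im = 0.0;
            for (std::size_t t = 0; t < n; ++t) {
                const double ang = 2.0 * pi * k * t / n;
                re += x[t] * std::cos(ang);
                im -= x[t] * std::sin(ang);
            }
            const double mag = std::hypot(re, im);
            if (mag > best_mag) { best_mag = mag; best_k = k; }
        }
        // Bin index -> Hz; 2/n scales a real signal's bin magnitude to amplitude.
        return { best_k * fs / n, 2.0 * best_mag / n };
    }

Two numbers per block instead of thousands of raw samples is exactly the kind of local data reduction that keeps the machine-learning back end fed without flooding the network.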

 

This is a simple and graphic example of the sort of return that companies can expect from properly implemented IIoT systems with the performance needed to operate real-time manufacturing systems.

 

SoC-e’s CPPS-Gate40 is based on a Xilinx Zynq SoC, which implements a variety of IIoT-specific, hard-real-time functions developed by SoC-e as IP cores for the Zynq SoC's programmable logic including the HSR/PRP/Ethernet switch (HPS), IEEE 1588-2008 Precision Time Protocol (see “IEEE 1588-2008 clock synchronization IP core for Zynq SoCs has sub-μsec resolution”), and real-time sensor data acquisition and fusion. SoC-e also uses the Zynq SoC to implement a variety of network security protocols. These are the sort of functions that require the flexibility of the Zynq SoC’s integrated programmable logic. Software-based implementations of these functions are simply impractical due to performance requirements.

 

For more information about the SoC-e IIoT Gateway, see “Intelligent Gateways Make a Factory Smarter” and “Big Data Analytics + High Availability Network = IIoT: SoC-e demo at Embedded World 2016.”

 

 

 

An Industrial Ethernet/IIoT power supply reference design for the Xilinx Zynq-7000 SoC developed by Monolithic Power Systems (MPS) combines a small footprint (0.45in² of board real estate) with good efficiency (78% from a 12V input) and tight regulation. The design consists of six MPS regulators: three MPM3630 3A buck regulators, one MPM3610 1A buck regulator, and two LDO regulators to supply the twelve power rails needed by the Zynq SoC.

 

Here’s a simple block diagram of MPS’ reference design:

 

 

 


 

 

IIoT power supply reference design for the Zynq SoC from Monolithic Power Systems

 

 

 

And here’s a close-up photo of MPS’ compact IIoT power supply design prototype:

 

 


 

 

For more information about the MPS power supply reference design, including a BOM and data sheets for the various regulators used in the design, click here.

 

 

Late last year at the SPS Drives show in Germany, BE.services demonstrated a vertically integrated solution stack for industrial controls running on a Zynq-based Xilinx ZC702 Eval Kit. The BE.services industrial automation stack for Industry 4.0 and IIoT applications includes:

 

 

  • Linux plus OSADL real-time extensions
  • POWERLINK real-time, industrial Ethernet
  • CODESYS SoftPLC
  • Matrikon OPC UA machine-to-machine protocol for industrial automation
  • Ethernet TSN (time-sensitive networking) with hardware IP support from Xilinx
  • Kaspersky security

 

 

This stack delivers the four critical elements you need when developing smart manufacturing controllers:

 

  • Smart control
  • Real-time operation
  • Connectivity
  • Security

 

 


 

 

Here’s a 3-minute demo of that system with explanations:

 

 

 

 

The fundamental advantage of using a Xilinx Zynq SoC with its on-chip FPGA array for this sort of application is deterministic response, explains Dimitri Philippe in the video. Philippe is the founder and CEO of BE.services. Programmable logic delivers this response with hardware-level latencies instead of software latencies that are orders of magnitude slower.

 

 

Note: You can watch a recent Xilinx Webinar on the use of OPC UA and TSN for IIoT applications by clicking here.

 

 

 

The IIC (Industrial Internet Consortium) announced its first security assessment-focused testbed for Industrial IoT (IIoT) systems, the Security Claims Evaluation Testbed, in February 2016. This testbed provides an open, highly configurable cybersecurity platform for evaluating the security capabilities of endpoints, gateways, and other networked components. Data sources to the testbed can include industrial, automotive, medical, and other related endpoints.

 

IIC member companies have developed a common security framework and an approach to assessing cybersecurity in IIoT systems: the Industrial Internet Security Framework (IISF). The IIC’s Security Claims Testbed helps manufacturers improve the security posture of their products and verify alignment to the IISF prior to product launch, which helps shorten time to market.

 

If you’d like to hear about these topics in more detail, the IIC is presenting a free 1-hour Webinar on January 26. (Available later, on demand.) Register here.

 

Here’s a graphic depicting the IIC’s Security Claims Evaluation Testbed:

 

 

The IIC’s Security Claims Evaluation Testbed

 

 

Note: Xilinx is one of the lead member companies involved in the development of the IIC Security Claims Testbed—others include Aicas, GlobalSign, Infineon, Real-Time Innovations, and UL (Underwriters Laboratories)—and if you look at the above graphic, you’ll see the SoC-e Gateway in the middle of everything. This gateway is based on a Xilinx Zynq SoC. For more information about the SoC-e IIoT Gateway, see “Intelligent Gateways Make a Factory Smarter” and “Big Data Analytics + High Availability Network = IIoT: SoC-e demo at Embedded World 2016.”

 

 

 

 

Yesterday, National Instruments (NI) along with 15 partners announced the grand opening of the new NI Industrial IoT Lab located at NI’s headquarters in Austin, TX. The lab is a working showcase for Industrial IoT technologies, solutions, and systems architectures, and it will address challenges including interoperability and security in the IIoT space. The partner companies working with NI on the lab include:

 

  • Analog Devices
  • Avnu Alliance
  • Cisco Systems
  • Hewlett Packard Enterprise
  • Industrial Internet Consortium
  • Intel
  • Kalypso
  • OPC Foundation
  • OSIsoft
  • PTC
  • Real-Time Innovations
  • SparkCognition
  • Semikron
  • Viewpoint Systems
  • Xilinx

 

 


 

 

Jamie Smith (on the left), NI’s Business and Technology Director, opens the new NI Industrial IoT Lab in Austin, TX

 

Three key challenges to widespread IIoT adoption according to Xilinx’s Dan Isaacs

by Xilinx Employee 12-12-2016 11:44 AM - edited 12-12-2016 11:47 AM


A recent issue of RTC Magazine carried an interview with Dan Isaacs, Director of Connected Systems at Xilinx. There are many cogent observations about the Industrial Internet of Things (IIoT) in this interview, including the three key challenges to widespread IIoT adoption that Dan sees:

 

 

  • Security – overcoming companies’ concerns about connecting their systems and making them accessible over the internet
  • Standardization – considering the infrastructure already in place at a given facility and the costs to change existing connectivity approaches
  • Data ownership – who owns the data once connected?

 

 

Where’s the path to meet these challenges?

 

Quoting Dan from the RTC Magazine article:

 

“The Industrial Internet Consortium (IIC), a highly collaborative 200+ member strong global consortium of companies, is working on several approaches through reference architectures, security frameworks, and proof of concept test beds to identify and bring innovative methodologies and solutions to address these and other IIoT challenges.”

 

For more Dan Isaacs IIoT insights like this, see the full interview on page 6 of the magazine.

 

 

 

 

Ask any expert in IIoT (Industrial Internet of Things) circles what the most pressing IIoT problem might be and you will undoubtedly hear “security.” Internet hacking stories are rampant; you generally hear about one a day. With the IoT and IIoT ramping up, they’re going to get more frequent. Over the recent Thanksgiving weekend, the ticket machines and fare-collection system of San Francisco’s Muni light-rail mass-transit system were hacked by ransomware. Agents’ computer screens displayed the message “You Hacked, ALL Data Encrypted” beginning Friday night. The attackers demanded 100 Bitcoins, worth about $73,000, to undo the damage. Things were restored by Sunday without paying the ransom, and Muni provided free rides until the system was recovered.

 

You do not want to let this happen to your IIoT system design.

 

How to prevent it? Today (by sheer coincidence, honest), Avnet announced a new security module for its MicroZed IIoT (Industrial Internet of Things) Starter Platform, which is based on a Xilinx Zynq Z-7000 SoC. The new Avnet Trusted Platform Module Security PMOD places an Infineon OPTIGA TPM (Trusted Platform Module) SLB9670 on a very small plug-in board conforming to the Digilent PMOD peripheral module format. The Infineon TPM SLB9670 is a secure microprocessor that adds hardware security to any system by conforming to the TPM security standard developed by the Trusted Computing Group, an international industry standardization group.

 

The $29.95 Avnet Trusted Platform Module Security PMOD is essentially a SPI security peripheral that provides many security services to your design based on Trusted Computing Group standards. Provided services include:

 

 

  • Strong authentication of platform and users using a unique embedded endorsement certificate
  • Secure storage and management of keys and data
  • Measured and trusted booting for embedded systems
  • Random-number generation, tick counting to trigger the generation of new random numbers, and a dictionary-attack lockout
  • RSA and ECC encryption plus SHA-256 hashing

 

 

That’s a lot of security in a 32-pin package and, for development purposes, you can get it on a $30 plug-in PMOD along with a reference design for using the module with the Zynq-based Avnet MicroZed IIoT Starter Kit.
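The measured-boot service deserves a closer look because it shows why a TPM is more than a key safe. Software can never write a PCR (Platform Configuration Register) directly; each boot stage can only extend a PCR, folding a hash of the next stage into the running value, so a tampered bootloader cannot fake a clean measurement chain. Here’s a toy sketch of the extend rule (my illustration; a real TPM uses SHA-256 or stronger where this example substitutes a trivial FNV-1a hash to stay short):

    #include <cstdint>
    #include <string>

    // Toy stand-in for the TPM's hash (real TPMs use SHA-256, not FNV-1a).
    uint64_t toy_hash(const std::string &data)
    {
        uint64_t h = 14695981039346656037ULL;   // FNV-1a offset basis
        for (unsigned char c : data) {
            h ^= c;
            h *= 1099511628211ULL;              // FNV-1a prime
        }
        return h;
    }

    // The TPM extend rule: new_PCR = Hash(old_PCR || measurement).
    // Order matters, so any altered or reordered boot stage changes the
    // final PCR value and breaks attestation against the known-good value.
    uint64_t pcr_extend(uint64_t pcr, const std::string &measurement)
    {
        return toy_hash(std::to_string(pcr) + measurement);
    }

Booting then looks like repeated calls: extend with the first-stage bootloader’s hash, then the next stage’s, and so on, with the final PCR value compared against the expected measurement before any secrets are released.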

 

So if you don’t want to see this in your IIoT system:

 

 

The “You Hacked, ALL Data Encrypted” ransomware screen

 

Then think about buying this:

 

 


 

Avnet MicroZed IIoT TPM PMOD

 

 

 

 

 

 

Programmable logic control of power electronics—where to start? What dev boards to use?

by Xilinx Employee 10-18-2016 10:24 AM - edited 10-18-2016 10:28 AM

 

A great new blog post on the ELMG Web site discusses three entry-level dev boards you can use to learn about controlling power electronics with FPGAs. (This post follows a Part 1 post that discusses the software you can use—namely Xilinx Vivado HLS and SDSoC—to develop power-control FPGA designs.)

 

And what are those three boards? They should be familiar to any Xcell Daily reader:

 

 

The $99 Digilent ARTY dev board (Artix-7 FPGA)

 


 

 

 

The Avnet ZedBoard (Zynq Z-7000 SoC)

 


 

 

 

 

 

The Avnet MicroZed SOM (Zynq Z-7000 SoC)

 

 


 

 

 

 

Who is ELMG? They’ve spent the last 25 years developing digitally controlled power converters for motor drives, industrial switch-mode power supplies, reactive power compensation, medium-voltage systems, power-quality systems, motor starters, appliances, and telecom switch-mode power supplies.

 

 

For more information about the ARTY board, see: ARTY—the $99 Artix-7 FPGA Dev Board/Eval Kit with Arduino I/O and $3K worth of Vivado software. Wait, What????

 

 

For more information about the MicroZed and the ZedBoard, see the 150+ blog posts in Adam Taylor’s MicroZed Chronicles.

 

 

 

MATRIX Labs bills the $99 MATRIX Creator dev board for the Raspberry Pi, listed on Indiegogo, as a “hardware bombshell.” A more precise description would be an FPGA-accelerated sensor hub sporting a massive array of on-board sensors. It’s a one-stop shop for prototyping IoT and industrial IoT (IIoT) devices using the Raspberry Pi board as a base.

 

Here’s a top-and-bottom photo of the board:

 

 

 

MATRIX Creator dev board for the Raspberry Pi

 

 

Note: That square hole in the center of the board allows the Raspberry Pi’s Camera Module to peek through.

 

Here’s a detailed list of the various components on the MATRIX Creator board and its capabilities:

 

 


 

 

MATRIX Creator dev board Components and Capabilities

 

 

 

Note that one of those components is a Xilinx Spartan-6 LX4 FPGA, which makes a very fine low-cost sensor hub capable of operating in real time. No doubt you’d like to see how the FPGA fits into this board. MATRIX Labs has that covered with this block diagram:

 

 


 

 

MATRIX Creator dev board Block Diagram

 

 

 

MATRIX Labs has also developed supporting tools and software for the MATRIX Creator dev board including MATRIX OS, MATRIX CV (a computer-vision library), and MATRIX CLI (a sensor-hub application). More software is already being developed.

 

Unlike many crowd-funded projects, MATRIX Creator is already shipping so you’re assured of getting a board, according to MATRIX Labs, but you only have two days left in the funding period. So check it out now.

 

 

MATRIX Creator: IoT Computer Vision Dev Board

 

 

A novel development by Zhao Tian, Kevin Wright, and Xia Zhou at Dartmouth College encodes data streams on sparse, ultrashort pulses and transmits these pulses optically using low-cost, visible LED luminaires designed for room illumination. (See “The DarkLight Rises: Visible Light Communication in the Dark”) The optical light pulses are hundreds of nanoseconds long so they’re far too short—by four or five orders of magnitude—to be perceived as light by human vision. However, they’re long enough to be captured by an inexpensive photodiode and are therefore useful for digital communications—albeit slow communications, on the order of 1.6 to 1.8 kbits/sec. Nevertheless, that rate meets a number of low-speed communications needs including many IoT and industrial IoT requirements.

 

Signal encoding employs OPPM (overlapping pulse-position modulation), implemented in a Xilinx Artix-7 A35T FPGA on a $149 Digilent Basys 3 FPGA Trainer Board.
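As a rough illustration of pulse-position modulation (plain, non-overlapping PPM; DarkLight’s overlapping variant and exact parameters differ), each B-bit symbol becomes a single short pulse whose slot position within a 2^B-slot frame carries the data:

    #include <cstdint>
    #include <vector>

    // Encode symbols as non-overlapping PPM: one pulse per frame of
    // 2^bits_per_symbol slots; the pulse's slot index is the symbol value.
    // Output is a slot-rate waveform: 1 = LED pulse, 0 = dark.
    std::vector<uint8_t> ppm_encode(const std::vector<uint8_t> &symbols,
                                    int bits_per_symbol)
    {
        const int slots = 1 << bits_per_symbol;    // slots per frame
        std::vector<uint8_t> waveform;
        for (uint8_t s : symbols) {
            for (int slot = 0; slot < slots; ++slot) {
                waveform.push_back(slot == (s & (slots - 1)) ? 1 : 0);
            }
        }
        return waveform;
    }

With hundreds-of-nanoseconds pulses and mostly dark slots, the eye integrates each frame to “off” while the receiver only has to time one pulse per frame.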

 

 


 

Digilent Basys 3 Artix-7 FPGA Trainer Board

 

 

Here’s a short 1-minute video giving you an ultrashort overview of the project:

 

 

 

 

 

If you’re developing products in the IIoT (Industrial Internet of Things) space, you’ll likely want to sign up for a free 1-hour Webinar that Xilinx’s Director of Strategic Marketing for Industrial IoT Dan Isaacs will present on October 12 in conjunction with Automation World. Dan will be discussing the history of the IIoT, from the old way of doing things with islands of automation to today, where everything in the factory delivers data via a network to a central repository for analysis, action, and optimization.

 

Register here.

 

 

 

Xcell Daily has covered several Photonfocus industrial video cameras, all of them based on a Xilinx Spartan-6 FPGA vision platform that Photonfocus developed as a base for numerous industrial video cameras. One of the advantages that the Spartan-6 FPGA provides is the ability to adapt to nearly any sort of imaging sensor—monochrome, color, or hyperspectral—through reprogramming of the interface I/O and the on-board video processing. Another advantage the FPGA provides is the ability to go fast in video-processing applications.

 

Photonfocus used this latter capability a while ago to develop the DR1 family of double-rate GigE video cameras. (The company recently announced quad-rate QR1 GigE industrial cameras based on the same FPGA platform.) A new article by Andrew Wilson in Vision Systems Design Magazine details the use of Photonfocus DR1 double-rate cameras in a high-speed visual-inspection system developed by M3C Industrial Automation and Vision. This system can inspect and sort 25,000 cork stoppers per hour using the Photonfocus DR1 camera’s extreme 1800 frames/sec capability.

 

The camera employed in this application is the Photonfocus DR1-D2048x1088C-192-G2-8 high-speed color camera. Used in line mode, the camera captures an 896x100-pixel region of interest (ROI) at an astounding 1800 frames/sec. The corks pass through two imaging stations that employ structured light generated by an Effilux LED 3D projector. The LED projector’s bright white line appears across the cork stopper within the camera’s ROI and the bright illumination ensures color fidelity. Color is an important sorting criterion.
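Some quick arithmetic (mine, not from the article) shows the pixel rate the FPGA must sustain in this mode:

    896 \times 100 \,\frac{\text{pixels}}{\text{frame}} \times 1800 \,\frac{\text{frames}}{\text{s}} \approx 1.6 \times 10^{8} \,\frac{\text{pixels}}{\text{s}}

At 8 bits per pixel, that’s roughly 1.3Gbps of raw sensor data, more than a standard GigE link carries, which is presumably where the DR1’s double-rate GigE capability earns its keep.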

 

 


 

The Photonfocus DR1 double-rate GigE camera family is based on the Xilinx Spartan-6 FPGA

 

 

Corks pass through two such inspection stations. The first imaging station performs a 3D analysis of the cork stoppers using an attached PC running software written in C++ that assembles 3D images from the captured frames as the corks rapidly pass by on a conveyor. Then the corks are flipped before passing through a second imaging station that inspects the stoppers’ reverse side. The system looks for defects in the stoppers including unacceptably large holes, superficial deformations, incorrect size, and color imperfections. The system uses this information to grade and sort the good stoppers and to discard rejects.

 

Numerous cork manufacturers across Europe have already adopted this high-speed inspection system.

 

Want to see the M3C Industrial Automation and Vision cork-inspection system in action? Thought so. Here’s Andrew Wilson’s video:

 

 

 

 

 

 

The only way this system could be better would be for it to be inspecting chocolate-chip cookies, and I get the rejects.

 

 

Other Xcell Daily blog posts about Photonfocus video cameras based on the Spartan-6 FPGA include:

 

 

 

 

 

 

 

 

Although the Xilinx Spartan-6 FPGA family is now more than half a decade old, it continues to demonstrate real value as a cost-effective foundation for many new video and vision platforms.

 

Analog Devices (ADI) introduced the AD9371 Integrated, Dual Wideband RF Transceiver back in May as part of its “RadioVerse.” You use the AD9371 to build extremely flexible digital radios with operating frequencies of 300MHz to 6GHz, which covers most of the licensed and unlicensed cellular bands. The IC supports receiver bandwidths to 100MHz. It also supports observation-receiver and transmit-synthesis bandwidths to 250MHz, which you can use to implement digital correction algorithms.

 

Last week, the company started shipping FMC eval cards based on the AD9371: the ADRV9371-N/PCBZ and ADRV9371-W/PCBZ.

 

 


 

ADRV9371-N Eval Board for the Analog Devices AD9371 Integrated Wideband RF Transceiver

 

 

ADI was showing one of these new AD9371 Eval Boards in operation this week at the GNU Radio Conference held in Boulder, Colorado. The board was plugged into the FMC connector on a Xilinx ZC706 Eval Kit, which is based on a Xilinx Zynq Z-7045 SoC. The Xilinx Zynq SoC and the AD9371 make an extremely powerful design combination for developing all sorts of SDRs (software-defined radios).

 

 

 

 

Here’s a large-scale problem literally too hot to handle:

 

Global steelmaker and mining company ArcelorMittal has a 24/7 production line that continuously welds steel sheets unwound from giant coils in real time. If any part of the continuous weld tears due to a weak spot, the line quickly shuts down. If the tear occurs in an oven, there’s a 4-day cool-down period before the problem can be addressed. The cost of that delay is about €1 million in lost production, repair, and downtime. Something best avoided even if it only happens once or twice a year.

 

ArcelorMittal asked V2i, a Belgian engineering company specializing in structural dynamics, to address this production problem. V2i developed a reliable, autonomous system that detects low-quality welds and quickly warns the operator so there’s sufficient time to restart the welding process before the weld tears.

 

The system needed to monitor several real-time parameters including steel hardness, thickness, welding speed, and welding current. In the final design, a non-contact profilometer measures the weld shape precisely and quickly; a pyrometer measures the weld bead’s temperature; and several more sensors monitor parameters including pressures, current, voltage, positions, and speed. V2i used a National Instruments (NI) CompactRIO cRIO-9038 controller with appropriate snap-in I/O cards to collect and condition all of the sensor signals.

 

The NI cRIO-9038 controller—programmed with a combination of NI's LabVIEW System Design Software, LabVIEW Real-Time, and LabVIEW FPGA—has sufficient processing power to handle the complex tasks of combining all of the real-time sensor data and producing a go/no-go status display while ongoing result data feeds ArcelorMittal’s online database via Ethernet. Processing power inside the cRIO-9038 is provided by a combination of a dual-core, 1.33GHz Intel Atom processor and a Xilinx Kintex-7 160T FPGA.

 

Here’s a video of the welder in action:

 

 

 

 

Note: This project won the 2016 Intel Internet of Things Award and was a finalist in the Industrial Machinery and Control category at this month’s NI Engineering Impact Awards during NI Week held in Austin, Texas. You can read more details in the full case study here.

 

Sagemcom Technology NRJ needed to increase demand-based production volume and improve first-pass yield on an energy-meter manufacturing line in Tunisia while maintaining high product quality and 99% uptime on the test line. These meters are used to create smart grids. The existing meter test system, based on a Windows PC, was not up to the job, so Sagemcom Technology turned to the National Instruments (NI) CompactRIO cRIO-9030 controller programmed with NI’s LabVIEW System Design Software. The controller’s integrated Xilinx Kintex-7 70T FPGA, which provided the real-time control and high-speed image processing required by the improved production-test program, can be programmed with NI’s LabVIEW FPGA and the FPGA-based LabVIEW Vision Development Module. The result: a 38% improvement in test time, a 50% cost reduction for the controller, a 17% improvement in productivity, and 70% LabVIEW code reuse. Those are truly excellent numbers.

 

Testing takes place in a 4-station robotic testing cell with two robots that move the meters from one test station to the next, with each station conducting different tests: a functional test, a machine-vision test to verify proper operation of the energy meter’s LCD, a 3.2kV HiPot test to verify fault immunity, and a test of the meter’s power-line communications capability. The NI CompactRIO controls the robot arm, runs the tests, and reports the results to a server via Ethernet.

 

Here’s a nicely done, minute-long video of the robotic test system in action:

 

 

 

 

Note: This project won in the Industrial Machinery and Control category at this month’s NI Engineering Impact Awards during NI Week held in Austin, Texas. You can read more details in the full case study here.

 

 

 

 

A few days ago at NI Week in Austin, National Instruments showed a new member of its CompactRIO platform with integrated WiFi (dual-band 802.11a/b/g/n) to support wireless data acquisition and control. (Think IIoT.) The new box is the 8-slot cRIO-9037, and it combines WiFi capability with a 1.33GHz dual-core Atom processor and a Xilinx Kintex-7 160T FPGA. Of course, it’s compatible with NI’s LabVIEW System Design Software.

 

 


 

National Instruments cRIO-9037 with integrated WiFi

PFP Cybersecurity broadens access to Zynq-based, power-based, tamper-detection technology with Wistron deal

by Xilinx Employee 07-29-2016 11:17 AM - edited 09-14-2016 02:55 PM

 

This week, PFP Cybersecurity announced that it has teamed with Wistron Corp to broaden design access to PFP’s power-based, tamper-detection technology. PFP’s CEO Steven Chen said:

 

“In the future everything will be attacked. How does one protect a device with limited resources against hardware, firmware, configuration and data hacks during the whole life cycle?”

 

PFP’s cybersecurity technology takes fine-grained measurements of a processor’s power consumption and detects anomalies using base references from trusted software sources, machine learning, and data analytics. Because it only monitors power consumption, it’s impossible for intruders to detect its presence.
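Conceptually, the detection step can be as simple as comparing windowed power statistics against a baseline learned from known-good runs. Here’s a deliberately minimal sketch (my illustration; PFP’s machine-learning analytics are far more sophisticated than a threshold test):

    #include <cmath>
    #include <vector>

    struct Baseline { double mean; double stddev; };   // learned from trusted runs

    // Learn the baseline statistics from power samples of trusted software.
    Baseline learn(const std::vector<double> &trusted)
    {
        double sum = 0.0, sq = 0.0;
        for (double v : trusted) { sum += v; sq += v * v; }
        const double mean = sum / trusted.size();
        return { mean, std::sqrt(sq / trusted.size() - mean * mean) };
    }

    // Flag a window whose mean power deviates from the trusted profile
    // by more than k standard deviations.
    bool anomalous(const std::vector<double> &window, const Baseline &b, double k = 4.0)
    {
        double sum = 0.0;
        for (double v : window) sum += v;
        return std::fabs(sum / window.size() - b.mean) > k * b.stddev;
    }

The appeal of the approach is that the monitored processor runs no agent at all; only its power rail is observed, which is why intruders can’t see the monitor.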

 

The original demo of this technology appeared in the Xilinx booth running on a platform based on a Xilinx Zynq SoC at last year’s ARM TechCon. (See “Zynq-based PFP eMonitor brings power-based security monitoring to embedded systems.”)

 

Here’s a 1-minute video with a very brief overview of PFP’s technology:

 

 

 

 

 

This week’s announcement by PFP and Wistron makes PFP’s cybersecurity technology available to Wistron’s customers. Wistron is an ODM (original design manufacturer); it was originally Acer’s manufacturing arm in Taiwan but has been an independent company since the year 2000 with operations in Asia, Europe, and North America. The company currently develops and manufactures a range of electronic products including notebook PCs, desktop PCs, servers and storage systems, LCD TVs, information appliances, handheld devices, networking and communication products, and IoT devices for a variety of clients. Wistron's customers can outsource some or all of their product development tasks and this week’s announcement allows Wistron to incorporate PFP’s cybersecurity technology into new designs.

 

The MV1-D2048x1088-HS03-96-G2 GigE hyperspectral camera from Photonfocus images 2048x1088-pixel video at 42fps in 16 spectral bands from 470nm to 630nm (the blue through orange part of the visible spectrum) using an IMEC snapshot mosaic CMV2K-SM4X4-470-630-VIS CMOS hyperspectral image sensor. The camera images each spectral band at 512x256 pixels. This new Photonfocus GigE hyperspectral video camera complements the Photonfocus MV1-D2048x1088-HS02-96-G2 GigE hyperspectral video camera introduced last year, which images the 600nm-to-975nm band (red through near-infrared) in 25 spectral bands. (See “Hyperspectral GigE video cameras from Photonfocus see the unseen @ 42fps for diverse imaging applications.”)

 

 


 

Photonfocus MV1-D2048x1088-HS03-96-G2 GigE hyperspectral video camera and IMEC image sensor

 

 

 

Here’s a spectral sensitivity plot for this new hyperspectral camera:

 

 

 


 

Photonfocus MV1-D2048x1088-HS03-96-G2 GigE hyperspectral video camera spectral sensitivity plot

 

 

Not coincidentally, the new Photonfocus hyperspectral camera is based on a Xilinx Spartan-6 FPGA vision platform—the same vision platform the company developed and used in last year’s MV1-D2048x1088-HS02-96-G2 GigE hyperspectral video camera. The FPGA-based platform gives the company maximum ability to adapt to new image sensor types as they appear. The Spartan-6 FPGA allows the engineers at Photonfocus to more easily adapt to radically different sensor types with different pin-level interfaces, bit-level interface protocols, and overall image-processing requirements.

 

Here, the 16-band hyperspectral image sensor clearly requires significantly different image processing than the 25-band sensor. Using an FPGA-based platform like the Spartan-6 platform that Photonfocus employs allows you to turn out new products quickly and to capitalize on developments such as the introduction of new image sensors. This latest camera in the growing line of Photonfocus cameras is proof.

 

Contact Photonfocus for information about these GigE hyperspectral cameras.

 

 

New Video: Vision-based industrial robotics demo runs on Zynq UltraScale+ MPSoC, uses the WHOLE chip

by Xilinx Employee 07-13-2016 11:44 AM - edited 08-08-2016 03:13 PM

 

OK, it’s a solitaire-playing robot based on a delta-configured 3D-printer chassis, but this demo of the Zynq UltraScale+ MPSoC thoroughly uses the device’s full capabilities to perform all of the following tasks on one chip:

 

  • Video and image processing
  • Object detection and recognition
  • Algorithm-based decision making
  • Motion path selection
  • Motor-drive control
  • Safety-critical event detection and safe shutdown
  • GUI for user interaction, status, and control
  • System configuration and security management

 

The demo distributes these tasks across all of the computing and processing elements in the Zynq UltraScale+ MPSoC including:

 

  • Four ARM Cortex-A53 application processors
  • Two ARM Cortex-R5 real-time processors operating in lockstep for safety-critical tasks
  • ARM Mali-400 GPU
  • Xilinx UltraScale+ programmable logic fabric

 

The Zynq UltraScale+ MPSoC’s programmable logic fabric handles input from the demo system’s camera, which requires high-speed processing. Letting the on-chip FPGA fabric handle the direct video processing is far more practical, with respect to power consumption and performance, than using software-driven microprocessors. However, the on-chip ARM Cortex-A53 processors are ideal for image recognition and decision making based on the recognized objects.

 

The dual-core ARM Cortex-R5 real-time processors handle motor control. They operate in lockstep because this is a safety-critical operation. Any time there’s metal in motion that could cause accidental injury, you have a safety-critical task on your hands.

 

Finally, the Mali-400 handles the GUI’s graphics and the video inset overlay.

 

Quite a lot packed into that one chip. Maybe this system looks a lot like one you need to design.

 

Here’s the 4-minute video:

 

 

 

 

For more information on the Zynq UltraScale+ MPSoC and additional technical details about this demo, see Glenn Steiner's blog post "Heterogeneous Multiprocessing: What Is It and Why Do You Need It?"

 

 

 

 

Today, National Instruments (NI) launched its 2nd-generation PXIe-5840 Vector Signal Transceiver (VST), which combines a 6.5GHz RF vector signal generator and a 6.5GHz vector signal analyzer in a 2-slot PXIe module. The instrument has 1GHz of instantaneous bandwidth and is designed for use in a wide range of RF test systems including 5G and IoT RF applications, ultra-wideband radar prototyping, and RFIC testing. Like all NI instruments, the PXIe-5840 VST is programmable with the company’s LabVIEW system-design environment and that programmability reaches all the way down to the VST’s embedded Xilinx Virtex-7 690T FPGA. (NI’s 1st-generation VSTs employed Xilinx Virtex-6 FPGAs.)

 

 


 

National Instruments uses this FPGA programmability to create varied RF test systems such as this 8x8 MIMO RF test system:

 

 


 

 

 

And this mixed-signal IoT test system:

 

 


 

 

 

For additional information on NI’s line of VSTs, see:

 

 

 

 

 

 

 

ELMG’s Dlog IP and ControlScope app provide deep insight into FPGA-based power conversion for Zynq SoCs

by Xilinx Employee 07-11-2016 11:50 AM - edited 07-11-2016 11:52 AM

 

When you’re first developing algorithms that use programmable logic to control a lot of power, things can go south very quickly, which makes sophisticated diagnostics pretty handy. ELMG Digital Power, a New Zealand consultancy focused on power control, has developed a sophisticated monitoring capability in the form of a data-collection IP block called Dlog, which collects data from within a Xilinx Zynq-7000 SoC, and a companion application called ControlScope, which lets you visualize significant events occurring in your power-control design. A new blog post on ELMG’s Web site describes these two products.

 

Combined, Dlog and ControlScope allow you to collect and analyze large amounts of data so that you can detect power-control problems such as:

 

  • single-sample errors
  • clipping
  • overflow
  • underflow or precision loss
  • bursty instability due to precision loss

 

Data can be logged to on-board Flash memory cards at a high data rate. It can also be transferred over Ethernet to a PC at 25Mbytes/sec.
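To see how mechanically some of these problems can be found once the raw samples reach a PC, here’s a small sketch (my illustration, not ELMG’s code) that scans a logged 16-bit fixed-point signal for two of the listed problems, clipping and single-sample errors:

    #include <cstddef>
    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    // Count samples pinned at the 16-bit full-scale rails (clipping).
    int count_clipped(const std::vector<int16_t> &x)
    {
        int clipped = 0;
        for (int16_t v : x) {
            if (v == INT16_MAX || v == INT16_MIN) ++clipped;
        }
        return clipped;
    }

    // Flag isolated spikes: a sample far from both of its neighbors.
    std::vector<std::size_t>
    single_sample_errors(const std::vector<int16_t> &x, int threshold)
    {
        std::vector<std::size_t> hits;
        for (std::size_t i = 1; i + 1 < x.size(); ++i) {
            if (std::abs(x[i] - x[i - 1]) > threshold &&
                std::abs(x[i] - x[i + 1]) > threshold) {
                hits.push_back(i);
            }
        }
        return hits;
    }

Subtler problems, such as bursty instability from precision loss, need the kind of long captures and visualization that Dlog and ControlScope are built for, but the principle is the same: log everything, then let the analysis find the needle.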

 

This is probably a good time to remind you of tomorrow’s free ELMG Webinar on Zynq-specific power control presented by Dr. Tim King, ELMG Digital Power’s Principal FPGA Engineer. Key digital power control questions to be answered in this Webinar include:

 

 

  • What is important in digital power, including numeric precision and latency?
  • How do you design a compensator in the digital domain?
  • Why you would use an FPGA for digital power and why the Zynq-7000 SoC in particular?
  • What are the key issues for digital controllers based on programmable logic including the serial-parallel trade-off, choosing between fixed- and floating-point math, determining the needed precision, and selecting sample rates?
  • What building blocks are available for digital control including ELMG’s licensable IP cores?
  • How can you use the Zynq SoC’s dual-core, 32-bit ARM Cortex-A9 MPCore processor to full advantage for power control?

 

  • The Webinar will also discuss IIR digital filter design in a case study, along with understanding the delta operator

 

There are a limited number of spots for this Webinar and I received an email today from ELMG that said only 150 spots remained.

 

Register here.

 

 

 

 

For most of us, precise power control is dark-arts stuff. As it says on digital power specialist ELMG Digital Power’s Web site, “New applications need new [power] converters and power engineers are in short supply.”

 

Digital power control increases performance, improves efficiency, and reduces heat—which eases the thermal-management challenges. Often, off-the-shelf converters do not suit the application perfectly because of space and performance constraints. You need custom solutions to meet these specialized power-control needs and specialized power converters are really difficult to find; high-voltage power converters are even more difficult to find.

 

ELMG specializes in these particular arts. The company’s mission is to design and build the world’s best digitally controlled electronic power systems. As part of this mission, the company has been teaching general power-control Webinars, and it now has one that’s specific to the Xilinx Zynq-7000 SoC. This new Webinar will be presented by Dr. Tim King, ELMG Digital Power’s Principal FPGA Engineer. It will be held three times on July 12, scheduled to be convenient for worldwide audiences (Asia/Pacific, Europe, and US/Americas).

 

Key digital power control questions to be answered in this Webinar include:

 

  • What is important in digital power, including numeric precision and latency?
  • How do you design a compensator in the digital domain?
  • Why you would use an FPGA for digital power and why the Zynq-7000 SoC in particular?
  • What are the key issues for digital controllers based on programmable logic including the serial-parallel trade-off, choosing between fixed- and floating-point math, determining the needed precision, and selecting sample rates?
  • What building blocks are available for digital control including ELMG’s licensable IP cores?
  • How can you use the Zynq SoC’s dual-core, 32-bit ARM Cortex-A9 MPCore processor to full advantage for power control?

 

  • The Webinar will also discuss IIR digital filter design in a case study, along with understanding the delta operator

 

For answers to these questions, sign up for ELMG’s Zynq-based power-control Webinar. ELMG is offering this Webinar at no charge.

 

Register here.

 

About the Author
Steve Leibson is the Director of Strategic Marketing and Business Planning at Xilinx. He started as a system design engineer at HP in the early days of desktop computing, then switched to EDA at Cadnetix, and subsequently became a technical editor for EDN Magazine. He's served as Editor in Chief of EDN Magazine, Embedded Developers Journal, and Microprocessor Report. He has extensive experience in computing, microprocessors, microcontrollers, embedded systems design, design IP, EDA, and programmable logic.