Plethora IIoT develops cutting‑edge solutions to Industry 4.0 challenges using machine learning, machine vision, and sensor fusion. In the video below, a Plethora IIoT Oberon system monitors power consumption, temperature, and the angular speed of three positioning servomotors in real time on a large ETXE-TAR Machining Center for predictive maintenance—to spot anomalies with the machine tool and to schedule maintenance before these anomalies become full-blown faults that shut down the production line. (It’s really expensive when that happens.) The ETXE-TAR Machining Center is center-boring engine crankshafts. This bore is the critical link between a car’s engine and the rest of the drive train including the transmission.
Plethora uses Xilinx Zynq SoCs and Zynq UltraScale+ MPSoCs as the heart of its Oberon system because these devices’ unique combination of software-programmable processors, hardware-programmable FPGA fabric, and programmable I/O allows the company to develop real-time systems that implement sensor fusion, machine vision, and machine learning in one device.
Initially, Plethora IIoT’s engineers used the Xilinx Vivado Design Suite to develop their Zynq-based designs. Then they discovered Vivado HLS, which allows you to take algorithms in C, C++, or SystemC directly to the FPGA fabric using hardware compilation. The engineers’ first reaction to Vivado HLS: “Is this real or what?” They discovered that it was real. Then they tried the SDSoC Development Environment with its system-level profiling, automated software acceleration using programmable logic, automated system connectivity generation, and libraries to speed programming. As they say in the video, “You just have to program it and there you go.”
Here’s the video:
Plethora IIoT is showcasing its Oberon system in the Industrial Internet Consortium (IIC) Pavilion during the Hannover Messe Show being held this week. Several other demos in the IIC Pavilion are also based on Zynq All Programmable devices.
What do you do if you want to build a low-cost, state-of-the-art, experimental SDR (software-defined radio) that’s compatible with GNU Radio—the open-source development toolkit and ecosystem of choice for serious SDR research? You might want to do what Lukas Lao Beyer did. Start with the incredibly flexible, full-duplex Analog Devices AD9364 1x1 Agile RF Transceiver IC and then give it all the processing power it might need with an Artix-7 A50T FPGA. Connect these two devices on a meticulously laid out circuit board taking all RF-design rules into account and then write the appropriate drivers to fit into the GNU Radio ecosystem.
Sounds like a lot of work, doesn’t it? It’s taken Lukas two years and four major design revisions to get to this point.
Well, you can circumvent all that work and get to the SDR research by signing up for a copy of Lukas’ FreeSRP board on the Crowd Supply crowd-funding site. The cost for one FreeSRP board and the required USB 3.0 cable is $420.
Lukas Lao Beyer’s FreeSRP SDR board based on a Xilinx Artix-7 A50T FPGA
With 32 days left in the Crowd Supply funding campaign period, the project has raised pledges of a little more than $12,000. That’s about 16% of the way towards the goal.
There are a lot of well-known SDR boards available, so conveniently, the FreeSRP Crowd Supply page provides a comparison chart:
If you really want to build your own, the documentation page is here. But if you want to start working with SDR, sign up and take delivery of a FreeSRP board this summer.
On April 11, the third free Webinar in Xilinx's "Precise, Predictive, and Connected Industrial IoT" series will provide insight into the role of Zynq All Programmable SoCs in the breadth of applications across the IIoT Edge and the connectivity between them. A brief summary of IIoT trends will be followed by an overview of the Data Distribution Service (DDS) IIoT databus standard, presented by RTI, the IIoT Connectivity Company, and a look at how DDS and OPC-UA target different connectivity challenges in IIoT systems.
Webinar attendees will also learn:
Adam Taylor and Nick Ni, Xilinx’s Senior Product Manager for SDSoC and Embedded Vision, have just published an article on the EE News Europe Web site titled “Machine learning in embedded vision applications.” That title’s pretty self-explanatory, but there are a few points I’d like to highlight. Then you can go read the full article yourself.
As the article states, “Machine learning spans several industry mega trends, playing a very prominent role within not only Embedded Vision (EV), but also Industrial Internet of Things (IIoT) and Cloud Computing.” In other words, if you’re designing products for any embedded market, you might well find yourself at a competitive disadvantage if you’re not adding machine-learning features to your road map.
This article closely ties machine learning with neural networks (including Feed-forward Neural Networks (FNNs), Recurrent Neural Networks (RNNs), Deep Neural Networks (DNNs), and Convolutional Neural Networks (CNNs)). Neural networks are not programmed; they’re trained. Then, if they’re part of an embedded design, they’re deployed. Training is usually done using floating-point neural-network implementations but, for efficiency (power and cost), deployed neural networks can use fixed-point representations with very little or no loss of accuracy. (See “Counter-Intuitive: Fixed-Point Deep-Learning Inference Delivers 2x to 6x Better CNN Performance with Great Accuracy.”)
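To make the fixed-point idea concrete, here’s a small sketch (my own illustration, not the Xilinx tool flow) that quantizes a set of floating-point weights to 8-bit fixed point and measures the worst-case error introduced:

```python
import numpy as np

# Illustrative sketch: quantize trained floating-point weights to
# 8-bit fixed point and measure the resulting error.
def quantize_fixed_point(weights, total_bits=8):
    # Choose enough integer bits to cover the dynamic range;
    # the remaining bits (minus one sign bit) become fraction bits.
    max_abs = np.max(np.abs(weights))
    int_bits = max(0, int(np.ceil(np.log2(max_abs + 1e-12))))
    frac_bits = total_bits - 1 - int_bits
    scale = 2.0 ** frac_bits
    q = np.clip(np.round(weights * scale),
                -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1)
    return q / scale  # de-quantized values, as the hardware would see them

rng = np.random.default_rng(0)
w = rng.normal(0, 0.5, size=1000)            # stand-in for trained weights
w_q = quantize_fixed_point(w, total_bits=8)
print("max abs error:", np.max(np.abs(w - w_q)))
```

With typical weight distributions the worst-case error stays below one least-significant bit, which is why 8-bit deployment costs so little accuracy.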
The programmable logic inside of Xilinx FPGAs, Zynq SoCs, and Zynq UltraScale+ MPSoCs is especially good at implementing fixed-point neural networks, as described in this article by Nick Ni and Adam Taylor. (Go read the article!)
Meanwhile, this is a good time to remind you of the recent Xilinx introduction of the reVISION stack for neural network development using Xilinx All Programmable devices. For more information about the Xilinx reVISION stack, see:
In yesterday’s EETimes article titled “How will Ethernet go real-time for industrial networks?,” author Richard Wilson interviews National Instruments’ Global Technology and Marketing Director Rahman Jamal about using OPC-UA (the OPC Foundation’s Unified Architecture) and TSN (time-sensitive networking) to build industrial Ethernet networks (IIoT/Industrie 4.0) that deliver real-time response. (Yes, yes, yes, “real-time” is a loosely defined term where “real” depends on your system’s temporal reality.) As Jamal states in the interview, some constrained industrial Ethernet network topologies need no help to achieve real-time operation. In other cases and for other topologies, you need Ethernet implementations that are “heavily modified at the hardware level to achieve performance.”
One of the hardware additions that can really help is a hardware implementation of the IEEE 1588v2 PTP (Precision Time Protocol) clock-synchronization standard. PTP synchronizes each piece of network-connected equipment using a 64-bit timer, which can be used for time-stamping, synchronization, and control, and as a common time reference to implement TSN.
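The offset-and-delay arithmetic at the heart of PTP is simple; the hard part is capturing the four timestamps precisely, which is why hardware implementations win. A sketch of the calculation, using made-up timestamps:

```python
# Sketch of the IEEE 1588 offset/delay arithmetic. Timestamps are
# illustrative values in nanoseconds; real PTP hardware timestamps
# frames as close to the wire as possible.
def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: master sends Sync, t2: slave receives it,
       t3: slave sends Delay_Req, t4: master receives it."""
    offset = ((t2 - t1) - (t4 - t3)) // 2   # slave clock minus master clock
    delay  = ((t2 - t1) + (t4 - t3)) // 2   # one-way path delay
    return offset, delay

# Example: slave clock runs 500 ns ahead; true path delay is 800 ns.
t1 = 1_000_000
t2 = t1 + 800 + 500      # path delay plus clock offset
t3 = t2 + 10_000
t4 = t3 + 800 - 500      # path delay minus clock offset
print(ptp_offset_and_delay(t1, t2, t3, t4))  # → (500, 800)
```

Note the calculation assumes a symmetric path delay, which is exactly what software timestamping (with its variable stack latencies) undermines.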
PTP implementation is an ideal task for an IP block instantiated in programmable logic (see last year’s Xcell Daily blog post “Intelligent Gateways Make a Factory Smarter,” written by SoC-e (System on Chip engineering) founder and CEO Armando Astarloa). SoC-e has implemented just such an IEEE 1588v2 PTP IP core in a Xilinx Zynq SoC, which is the core logic device inside of the company’s CPPS-Gate40 Sensor intelligent IIoT gateway. (Note: Software PTP implementations are neither fast nor deterministic enough for many IIoT applications.)
SoC-e CPPS-Gate40 Sensor intelligent IIoT gateway
You can see the SoC-e PTP IP core in the very center of this CPPS-Gate40 block diagram:
SoC-e CPPS-Gate40 Sensor intelligent IIoT gateway block diagram
According to the SoC-e Web page, the company’s IEEE 1588v2 IP core in the CPPS-Gate40 Sensor gateway can deliver sub-microsecond network synchronization. How is such a small number possible? As Jamal says in his EETimes interview, “bit times (time on the wire) for a 64-byte frame at GigE rates is 512ns.” That’s how.
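The arithmetic behind that 512ns figure is easy to verify:

```python
# Wire time for a minimum-size Ethernet frame at Gigabit rates:
frame_bytes = 64                # minimum Ethernet frame size
bit_rate = 1e9                  # Gigabit Ethernet, bits per second
wire_time_ns = frame_bytes * 8 / bit_rate * 1e9
print(wire_time_ns)             # → 512.0
```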
I did not go to Embedded World in Nuremberg this week, but SemiWiki’s Bernard Murphy was there, and he’s published his observations about three Zynq-based reference designs that he saw running in Aldec’s booth on the company’s Zynq-based TySOM embedded dev and prototyping boards.
Aldec TySOM-2 Embedded Prototyping Board
Murphy published this article titled “Aldec Swings for the Fences” on SemiWiki and wrote:
“At the show, Aldec provided insight into using the solution to model the ARM core running in QEMU, together with a MIPI CSI-2 solution running in the FPGA. But Aldec didn’t stop there. They also showed off three reference designs designed using this flow and built on their TySOM boards.
“The first reference design targets multi-camera surround view for ADAS (automotive – advanced driver assistance systems). Camera inputs come from four First Sensor Blue Eagle systems, which must be processed simultaneously in real-time. A lot of this is handled in software running on the Zynq ARM cores but the computationally-intensive work, including edge detection, colorspace conversion and frame-merging, is handled in the FPGA. ADAS is one of the hottest areas in the market and likely to get hotter since Intel just acquired Mobileye.
“The next reference design targets IoT gateways – also hot. Cloud interface, through protocols like MQTT, is handled by the processors. The gateway supports connection to edge devices using wireless and wired protocols including Bluetooth, ZigBee, Wi-Fi and USB.
“Face detection for building security, device access and identifying evil-doers is also growing fast. The third reference design is targeted at this application, using similar capabilities to those on the ADAS board, but here managing real-time streaming video as 1280x720 at 30 frames per second, from an HDR-CMOS image sensor.”
The article contains a photo of the Aldec TySOM-2 Embedded Prototyping Board, which is based on a Xilinx Zynq Z-7045 SoC. According to Murphy, Aldec developed the reference designs using its own and other design tools including the Aldec Riviera-PRO simulator and QEMU. (For more information about the Zynq-specific QEMU processor emulator, see “The Xilinx version of QEMU handles ARM Cortex-A53, Cortex-R5, Cortex-A9, and MicroBlaze.”)
Then Murphy wrote this:
“So yes, Aldec put together a solution combining their simulator with QEMU emulation and perhaps that wouldn’t justify a technical paper in DVCon. But business-wise they look like they are starting on a much bigger path. They’re enabling FPGA-based system prototype and build in some of the hottest areas in systems today and they make these solutions affordable for design teams with much more constrained budgets than are available to the leaders in these fields.”
This week, EETimes’ Junko Yoshida published an article titled “Xilinx AI Engine Steers New Course” that gathers some comments from industry experts and from Xilinx with respect to Monday’s reVISION stack announcement. To recap, the Xilinx reVISION stack is a comprehensive suite of industry-standard resources for developing advanced embedded-vision systems based on machine learning and machine inference.
As Xilinx Senior Vice President of Corporate Strategy Steve Glaser tells Yoshida, “Xilinx designed the stack to ‘enable a much broader set of software and systems engineers, with little or no hardware design expertise, to develop intelligent vision guided systems easier and faster.’”
“While talking to customers who have already begun developing machine-learning technologies, Xilinx identified ‘8 bit and below fixed point precision’ as the key to significantly improve efficiency in machine-learning inference systems.”
Yoshida also interviewed Karl Freund, Senior Analyst for HPC and Deep Learning at Moor Insights & Strategy, who said:
“Artificial Intelligence remains in its infancy, and rapid change is the only constant.” In this circumstance, Xilinx seeks “to ease the programming burden to enable designers to accelerate their applications as they experiment and deploy the best solutions as rapidly as possible in a highly competitive industry.”
She also quotes Loring Wirbel, a Senior Analyst at The Linley Group, who said:
“What’s interesting in Xilinx's software offering, [is that] this builds upon the original stack for cloud-based unsupervised inference, Reconfigurable Acceleration Stack, and expands inference capabilities to the network edge and embedded applications. One might say they took a backward approach versus the rest of the industry. But I see machine-learning product developers going a variety of directions in trained and inference subsystems. At this point, there's no right way or wrong way.”
There’s a lot more information in the EETimes article, so you might want to take a look for yourself.
As part of today’s reVISION announcement of a new, comprehensive development stack for embedded-vision applications, Xilinx has produced a 3-minute video showing you just some of the things made possible by this announcement.
Here it is:
By Adam Taylor
Several times in this series, we have looked at image processing using the Avnet EVK and the ZedBoard. Along with the basics, we have examined object tracking using OpenCV running on the Zynq SoC’s or Zynq UltraScale+ MPSoC’s PS (processing system) and using HLS with its video library to generate image-processing algorithms for the Zynq SoC’s or Zynq UltraScale+ MPSoC’s PL (programmable logic, see blogs 140 to 148 here).
Xilinx’s reVISION is an embedded-vision development stack that provides support for a wide range of frameworks and libraries often used for embedded-vision applications. Most exciting, from my point of view, is that the stack includes acceleration-ready OpenCV functions.
The stack itself is split into three layers. Once we select or define our platform, we will be mostly working at the application and algorithm layers. Let’s take a quick look at the layers of the stack:
As I mentioned above, one of the most exciting aspects of the reVISION stack is the ability to accelerate a wide range of OpenCV functions using the Zynq SoC’s or Zynq UltraScale+ MPSoC’s PL. We can group the OpenCV functions that can be hardware-accelerated using the PL into four categories:
What is very interesting about these function calls is that we can optimize them for resource usage or performance within the PL. The main optimization method is specifying the number of pixels to be processed during each clock cycle. For most accelerated functions, we can choose to process either one or eight pixels per clock. Processing more pixels per clock cycle reduces latency but increases resource utilization; processing one pixel per clock minimizes the resource requirements at the cost of increased latency. We control the number of pixels processed per clock via the function call.
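As a rough illustration of the trade-off (my own back-of-the-envelope numbers, assuming a 150MHz fabric clock, not Xilinx figures), here is how the pixels-per-clock choice translates into frame-processing time:

```python
# Back-of-the-envelope sketch: frame-processing time as a function of
# pixels processed per clock cycle. The 150 MHz clock is an assumed
# figure for illustration only.
def frame_time_us(width, height, pixels_per_clock, clock_mhz=150.0):
    cycles = (width * height) / pixels_per_clock
    return cycles / clock_mhz   # MHz is cycles per microsecond

for ppc in (1, 8):
    t = frame_time_us(1920, 1080, ppc)
    print(f"{ppc} pixel(s)/clock: {t:.0f} us per 1080p frame")
```

Eight pixels per clock cuts the per-frame latency by 8x, which is what buys the headroom for real-time rates at the cost of wider datapaths in the PL.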
Over the next few blogs, we will look more at the reVISION stack and how we can use it. However, in the best Blue Peter tradition, the image below shows the result of running the reVISION Harris corner-detection OpenCV function accelerated in the PL.
Accelerated Harris Corner Detection in the PL
Code is available on Github as always.
If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.
Today, Xilinx announced a comprehensive suite of industry-standard resources for developing advanced embedded-vision systems based on machine learning and machine inference. It’s called the reVISION stack and it allows design teams without deep hardware expertise to use a software-defined development flow to combine efficient machine-learning and computer-vision algorithms with Xilinx All Programmable devices to create highly responsive systems. (Details here.)
The Xilinx reVISION stack includes a broad range of development resources for platform, algorithm, and application development including support for the most popular neural networks: AlexNet, GoogLeNet, SqueezeNet, SSD, and FCN. Additionally, the stack provides library elements such as pre-defined and optimized implementations for CNN network layers, which are required to build custom neural networks (DNNs and CNNs). The machine-learning elements are complemented by a broad set of acceleration-ready OpenCV functions for computer-vision processing.
For application-level development, Xilinx supports industry-standard frameworks including Caffe for machine learning and OpenVX for computer vision. The reVISION stack also includes development platforms from Xilinx and third parties, which support various sensor types.
The reVISION development flow starts with a familiar, Eclipse-based development environment; the C, C++, and/or OpenCL programming languages; and associated compilers all incorporated into the Xilinx SDSoC development environment. You can now target reVISION hardware platforms within the SDSoC environment, drawing from a pool of acceleration-ready, computer-vision libraries to quickly build your application. Soon, you’ll also be able to use the Khronos Group’s OpenVX framework as well.
For machine learning, you can use popular frameworks including Caffe to train neural networks. Within one Xilinx Zynq SoC or Zynq UltraScale+ MPSoC, you can use Caffe-generated .prototxt files to configure a software scheduler running on one of the device’s ARM processors to drive CNN inference accelerators—pre-optimized for and instantiated in programmable logic. For computer vision and other algorithms, you can profile your code, identify bottlenecks, and then designate specific functions that need to be hardware-accelerated. The Xilinx system-optimizing compiler then creates an accelerated implementation of your code, automatically including the required processor/accelerator interfaces (data movers) and software drivers.
The Xilinx reVISION stack is the latest in an evolutionary line of development tools for creating embedded-vision systems. Xilinx All Programmable devices have long been used to develop such vision-based systems because these devices can interface to any image sensor and connect to any network—which Xilinx calls any-to-any connectivity—and they provide the large amounts of high-performance processing horsepower that vision systems require.
Initially, embedded-vision developers used the existing Xilinx Verilog and VHDL tools to develop these systems. Xilinx introduced the SDSoC development environment for HLL-based design two years ago and, since then, SDSoC has dramatically and successfully shortened development cycles for thousands of design teams. Xilinx’s new reVISION stack now enables an even broader set of software and systems engineers to develop intelligent, highly responsive embedded-vision systems faster and more easily using Xilinx All Programmable devices.
And what about the performance of the resulting embedded-vision systems? How do their performance metrics compare against systems based on embedded GPUs or the typical SoCs used in these applications? Xilinx-based systems significantly outperform the best of this group, which employ Nvidia devices. Benchmarks of the reVISION flow using Zynq SoC targets against Nvidia Tegra X1 have shown as much as:
There is huge value in having a very rapid and deterministic system-response time and, for many systems, the faster response time of a design that's been accelerated using programmable logic can mean the difference between success and catastrophic failure. For example, the figure below shows the difference in response time between a car’s vision-guided braking system created with the Xilinx reVISION stack running on a Zynq UltraScale+ MPSoC and a similar system based on an Nvidia Tegra device. At 65mph, the Xilinx embedded-vision system’s faster response time stops the vehicle 5 to 33 feet sooner, depending on how the Nvidia-based system is implemented. Five to 33 feet could easily mean the difference between a safe stop and a collision.
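The extra stopping distance is just speed multiplied by the extra response latency. Here is the arithmetic with assumed latency gaps of 50 and 350 milliseconds (illustrative values of mine, not measured figures from the benchmark):

```python
# Extra travel before the brakes engage = speed × extra response latency.
# The latency gaps below are illustrative assumptions, not measurements.
def extra_travel_ft(speed_mph, extra_latency_s):
    speed_fps = speed_mph * 5280 / 3600    # mph → feet per second
    return speed_fps * extra_latency_s

for latency in (0.05, 0.35):               # assumed 50 ms and 350 ms gaps
    d = extra_travel_ft(65, latency)
    print(f"{latency * 1000:.0f} ms slower response → {d:.1f} ft farther")
```

At 65mph a car covers about 95 feet per second, so latency gaps of a few tens to a few hundreds of milliseconds land squarely in the 5-to-33-foot range cited above.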
(Note: This example appears in the new Xilinx reVISION backgrounder.)
The last two years have generated more machine-learning technology than all of the advancements over the previous 45 years, and that pace isn't slowing down. Many new types of neural networks for vision-guided systems have emerged, along with new techniques that make deployment of these neural networks much more efficient. No matter what you develop today or implement tomorrow, the hardware and I/O reconfigurability and software programmability of Xilinx All Programmable devices can “future-proof” your designs, whether that means implementing new algorithms in existing hardware; interfacing to new, improved sensing technology; or adding an all-new sensor type (such as LIDAR or Time-of-Flight sensors) to improve a vision-based system’s safety and reliability through advanced sensor fusion.
Xilinx is pushing even further into vision-guided, machine-learning applications with the new Xilinx reVISION Stack and this announcement complements the recently announced Reconfigurable Acceleration Stack for cloud-based systems. (See “Xilinx Reconfigurable Acceleration Stack speeds programming of machine learning, data analytics, video-streaming apps.”) Together, these new development resources significantly broaden your ability to deploy machine-learning applications using Xilinx technology—from inside the cloud to the very edge.
You might also want to read “Xilinx AI Engine Steers New Course” by Junko Yoshida on the EETimes.com site.
On Thursday, March 30, two member companies from the IIConsortium (Industrial Internet Consortium)—Cisco and Xilinx—are presenting a free, 1-hour Webinar titled “How the IIoT (Industrial Internet of Things) Makes Critical Data Available When & Where it is Needed.” The discussion will cover machine learning and how self-optimization plays a pivotal role in enhancing factory intelligence. Other IIoT topics covered in the Webinar include TSN (time-sensitive networking), real-time control, and high-performance node synchronization. The Webinar will be presented by Paul Didier, the Manufacturing Solution Architect for the IoT SW Group at Cisco Systems, and Dan Isaacs, Director of Connected Systems at Xilinx.
Last month, the European AXIOM Project took delivery of its first board based on a Xilinx Zynq UltraScale+ ZU9EG MPSoC. (See “The AXIOM Board has arrived!”) The AXIOM project (Agile, eXtensible, fast I/O Module) aims at researching new software/hardware architectures for Cyber-Physical Systems (CPS).
AXIOM Project Board based on Xilinx Zynq UltraScale+ MPSoC
The board presents the pinout of an Arduino Uno, so you can attach an Arduino Uno-compatible shield to the board. The Arduino Uno pinout enables fast prototyping and exposes the FPGA I/O pins in a user-friendly manner.
Here are the board specs:
You can see the AXIOM board for the first time during next week’s Embedded World 2017 at the SECO UDOO booth, the SECO booth, and the EVIDENCE booth.
Please contact the AXIOM Project for more information.
With a month left in the Indiegogo funding period, the MATRIX Voice open-source voice platform campaign stands at 289% of its modest $5000 funding goal. MATRIX Voice is the third crowdfunding project by MATRIX Labs, based in Miami, Florida. The MATRIX Voice platform is a 3.14-inch circular circuit board capable of continuous voice recognition and compatible with the latest voice-based, cloud-based cognitive services including Microsoft Cognitive Services, Amazon Alexa Voice Service, Google Speech API, Wit.ai, and Houndify. The MATRIX Voice board, based on a Xilinx Spartan-6 LX4 FPGA, is designed to plug directly onto a low-cost Raspberry Pi single-board computer, or it can operate as a standalone board. You can get one of these boards, due to ship in May, for as little as $45—if you’re quick. (Already, 61 of the 230 early-bird special-price boards are pledged.)
Here’s a photo of the MATRIX Voice board:
This image of the top of the MATRIX Voice board shows the locations for the seven rear-mounted MEMS microphones, seven RGB LEDs, and the Spartan-6 FPGA. The bottom of the board includes a 64Mbit SDRAM and a connector for the Raspberry Pi board.
Because this is the latest in a series of developer boards from MATRIX Labs (see last year’s project: “$99 FPGA-based Vision and Sensor Hub Dev Board for Raspberry Pi on Indiegogo—but only for the next two days!”), there’s already a sophisticated, layered software stack for the MATRIX Voice platform that includes a HAL (Hardware Abstraction Layer) with the FPGA code and C++ library, an intermediate layer with a streaming interface for the sensors and vision libraries (for the Raspberry Pi camera), and a top layer with the MATRIX OS and high-level APIs. Here’s a diagram of the software stack:
And now, who better to describe this project than the originators:
An article in the new January 2017 issue of the IIC’s (Industrial Internet Consortium’s) Journal of Innovation titled “Making Factories Smarter Through Machine Learning” discusses the networked use of SoC-e’s CPPS-Gate40 intelligent IIoT gateway to help a car-parts manufacturer keep the CNC machines on its production lines up and running through predictive maintenance directed by machine-learning algorithms. These algorithms use real-time operational data taken directly from sensors on the CNC machines to identify and learn normal behavior patterns during the machining process so that when variances signaling an imminent failure occur, systems can be shut down gracefully and maintained or repaired before the failure becomes truly catastrophic (and really, really expensive thanks to any uncontrolled release of the kinetic energy stored as angular momentum in an operating CNC machine).
Catastrophic CNC machine failures can shut down a production line, causing losses worth hundreds of thousands of dollars (or more) in physical damage to tools and to work in process, in addition to the costs associated with lost production time. In one example cited in the article, a bearing in a CNC machine started to fail, as indicated by a large vibration spike. At that point, only the bearing needed replacement. Four days later, the bearing failed catastrophically, damaging nearby parts and idling the production line for three shifts. There was plenty of warning (see image below), and preventive maintenance at the first indication of a problem would have minimized the cost of this single failed bearing.
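A minimal sketch of this kind of early-warning check, using made-up vibration data and an assumed healthy baseline window (the thresholds and data are mine, purely for illustration):

```python
import numpy as np

# Hypothetical sketch of a vibration spike alarm: flag samples that
# exceed the healthy baseline by several standard deviations.
def spike_alarm(samples, n_sigma=4.0):
    baseline = samples[:50]                  # assume a known-healthy window
    mu, sigma = baseline.mean(), baseline.std()
    return np.where(samples > mu + n_sigma * sigma)[0]

rng = np.random.default_rng(1)
vibration = rng.normal(1.0, 0.05, 200)       # normal operating vibration
vibration[150] = 2.5                         # the bearing's warning spike
print(spike_alarm(vibration))                # index 150 should be flagged
```

The point of the example: the spike is trivially detectable if anything is actually looking at the data in real time.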
Unfortunately, the data predicting the failure had been captured but not analyzed until afterwards because there was no real-time data collection-and-analysis system in place. What a needless waste.
The network based on SoC-e’s CPPS-Gate40 intelligent IIoT gateway discussed in this IIC Journal of Innovation article is designed to collect and analyze real-time operational information from the CNC machines including operating temperature and vibration data. This system performs significant data reduction at the gateway to minimize the amount of data feeding the machine-learning algorithms. For example, FFT processing shrinks the time-domain vibration data down to just a frequency and an amplitude, resulting in significant local data reduction. Temperature data varies more slowly and so it is sampled at a much lower frequency—variable-rate collection and fusion for different sensor data is another significant feature of this system. The full system then trains on the data collected by the networked IIoT gateways.
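Here is a minimal sketch of that FFT-based data reduction, with an assumed sample rate and test tone (both values are mine, for illustration): a block of time-domain vibration samples collapses to a single dominant frequency/amplitude pair.

```python
import numpy as np

# Sketch of the data reduction described above: collapse a block of
# time-domain vibration samples to one dominant (frequency, amplitude)
# pair via an FFT. Sample rate and tone are illustrative assumptions.
def dominant_tone(samples, sample_rate_hz):
    spectrum = np.abs(np.fft.rfft(samples)) / (len(samples) / 2)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    k = np.argmax(spectrum[1:]) + 1          # skip the DC bin
    return freqs[k], spectrum[k]

fs = 10_000                                   # assumed 10 kHz sampling
t = np.arange(5000) / fs                      # half a second of samples
vibration = 0.8 * np.sin(2 * np.pi * 1200 * t)  # 1.2 kHz vibration tone
freq, amp = dominant_tone(vibration, fs)
print(round(freq, 1), round(amp, 2))          # → 1200.0 0.8
```

Five thousand raw samples become two numbers, which is exactly the kind of local reduction that keeps the machine-learning back end from drowning in data.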
This is a simple and graphic example of the sort of return that companies can expect from properly implemented IIoT systems with the performance needed to operate real-time manufacturing systems.
SoC-e’s CPPS-Gate40 is based on a Xilinx Zynq SoC, which implements a variety of IIoT-specific, hard-real-time functions developed by SoC-e as IP cores for the Zynq SoC's programmable logic including the HSR/PRP/Ethernet switch (HPS), IEEE 1588-2008 Precision Time Protocol (see “IEEE 1588-2008 clock synchronization IP core for Zynq SoCs has sub-μsec resolution”), and real-time sensor data acquisition and fusion. SoC-e also uses the Zynq SoC to implement a variety of network security protocols. These are the sort of functions that require the flexibility of the Zynq SoC’s integrated programmable logic. Software-based implementations of these functions are simply impractical due to performance requirements.
For more information about the SoC-e IIoT Gateway, see “Intelligent Gateways Make a Factory Smarter” and “Big Data Analytics + High Availability Network = IIoT: SoC-e demo at Embedded World 2016.”
An Industrial Ethernet (IIoT) power-supply reference design for the Xilinx Zynq-7000 SoC developed by Monolithic Power Systems (MPS) combines a small footprint (0.45in² of board real estate) with good efficiency (78% from a 12V input) and tight regulation. The design consists of six MPS regulators: three MPM3630 3A buck regulators, one MPM3610 1A buck regulator, and two LDO regulators to supply the twelve power rails needed by the Zynq SoC.
Here’s a simple block diagram of MPS’ reference design:
IIoT power supply reference design for the Zynq SoC from Monolithic Power Systems
And here’s a close-up photo of MPS’ compact IIoT power supply design prototype:
For more information about the MPS power-supply reference design, including a BOM and data sheets for the various regulators used in the design, click here.
Late last year at the SPS Drives show in Germany, BE.services demonstrated a vertically integrated solution stack for industrial controls running on a Zynq-based Xilinx ZC702 Eval Kit. The BE.services industrial automation stack for Industry 4.0 and IIoT applications includes:
This stack delivers the four critical elements you need when developing smart manufacturing controllers:
Here’s a 3-minute demo of that system with explanations:
The fundamental advantage of using a Xilinx Zynq SoC with its on-chip FPGA array for this sort of application is deterministic response, explains Dimitri Philippe, founder and CEO of BE.services, in the video. Programmable logic delivers this response with hardware-level latencies instead of software latencies that are orders of magnitude slower.
Note: You can watch a recent Xilinx Webinar on the use of OPC UA and TSN for IIoT applications by clicking here.
The IIC (Industrial Internet Consortium) announced its first security assessment-focused testbed for Industrial IoT (IIoT) systems, the Security Claims Evaluation Testbed, in February 2016. This testbed provides an open, highly configurable cybersecurity platform for evaluating the security capabilities of endpoints, gateways, and other networked components. Data sources to the testbed can include industrial, automotive, medical, and other related endpoints.
IIC member companies have developed a common security framework and an approach to assessing cybersecurity in IIoT systems: the Industrial Internet Security Framework (IISF). The IIC’s Security Claims Testbed helps manufacturers improve the security posture of their products and verify alignment to the IISF prior to product launch, which helps shorten time to market.
If you’d like to hear about these topics in more detail, the IIC is presenting a free 1-hour Webinar on January 26. (Available later, on demand.) Register here.
Here’s a graphic depicting the IIC’s Security Claims Evaluation Testbed:
Note: Xilinx is one of the lead member companies involved in the development of the IIC Security Claims Testbed—others include Aicas, GlobalSign, Infineon, Real-Time Innovations, and UL (Underwriters Laboratories)—and if you look at the above graphic, you’ll see the SoC-e Gateway in the middle of everything. This gateway is based on a Xilinx Zynq SoC. For more information about the SoC-e IIoT Gateway, see “Intelligent Gateways Make a Factory Smarter” and “Big Data Analytics + High Availability Network = IIoT: SoC-e demo at Embedded World 2016.”
Yesterday, National Instruments (NI) and 15 partners announced the grand opening of the new NI Industrial IoT Lab at NI’s headquarters in Austin, TX. The lab is a working showcase for Industrial IoT technologies, solutions, and system architectures, and it will address challenges including interoperability and security in the IIoT space. The partner companies working with NI on the lab include:
Jamie Smith (on the left), NI’s Business and Technology Director, opens the new NI Industrial IoT Lab in Austin, TX
A recent issue of RTC magazine carried an interview with Dan Isaacs, Director of Connected Systems at Xilinx. There are many cogent observations about the Industrial Internet of Things (IIoT) in this interview including the three key challenges to widespread IIoT adoption that Dan sees:
Where’s the path to meet these challenges?
Quoting Dan from the RTC Magazine article:
“The Industrial Internet Consortium (IIC), a highly collaborative 200+ member strong global consortium of companies, is working on several approaches through reference architectures, security frameworks, and proof of concept test beds to identify and bring innovative methodologies and solutions to address these and other IIoT challenges.”
For more Dan Isaacs IIoT insights like this, see the full interview on page 6 of the magazine.
Ask any expert in IIoT (Industrial Internet of Things) circles what the most pressing IIoT problem might be and you will undoubtedly hear “security.” Internet hacking stories are rampant; you generally hear about one a day, and with the IoT and IIoT ramping up, they’re going to become more frequent. Over the recent Thanksgiving weekend, the ticket machines and fare-collection system of San Francisco’s Muni light-rail mass-transit system were hacked by ransomware. Agents' computer screens displayed the message "You Hacked, ALL Data Encrypted" beginning Friday night. The attackers demanded 100 Bitcoins, worth about $73,000, to undo the damage. Service was restored by Sunday without paying the ransom, and Muni provided free rides until the system recovered.
You do not want to let this happen to your IIoT system design.
How to prevent it? Today (by sheer coincidence, honest), Avnet announced a new security module for its MicroZed IIoT (Industrial Internet of Things) Starter Platform, which is based on a Xilinx Zynq-7000 SoC. The new Avnet Trusted Platform Module Security PMOD places an Infineon OPTIGA TPM (Trusted Platform Module) SLB9670 on a very small plug-in board conforming to the Digilent PMOD peripheral-module format. The Infineon TPM SLB9670 is a secure microprocessor that adds hardware security to any system by conforming to the TPM security standard developed by the Trusted Computing Group, an international industry standardization group.
The $29.95 Avnet Trusted Platform Module Security PMOD is essentially an SPI security peripheral that provides many security services to your design based on Trusted Computing Group standards. Provided services include:
That’s a lot of security in a 32-pin package and, for development purposes, you can get it on a $30 plug-in PMOD along with a reference design for using the module with the Zynq-based Avnet MicroZed IIoT Starter Kit.
So if you don’t want to see this in your IIoT system:
Then think about buying this:
Avnet MicroZed IIoT TPM PMOD
A great new blog post on the ELMG Web site discusses three entry-level dev boards you can use to learn about controlling power electronics with FPGAs. (This post follows a Part 1 post that discusses the software you can use—namely Xilinx Vivado HLS and SDSoC—to develop power-control FPGA designs.)
And what are those three boards? They should be familiar to any Xcell Daily reader:
Who is ELMG? They’ve spent the last 25 years developing digitally controlled power converters for motor drives, industrial switch-mode power supplies, reactive-power compensation, medium-voltage systems, power-quality systems, motor starters, appliances, and telecom switch-mode power supplies.
For more information about the ARTY board, see: ARTY—the $99 Artix-7 FPGA Dev Board/Eval Kit with Arduino I/O and $3K worth of Vivado software. Wait, What????
For more information about the MicroZed and the ZedBoard, see the 150+ blog posts in Adam Taylor’s MicroZed Chronicles.
MATRIX Labs bills the $99 MATRIX Creator dev board for the Raspberry Pi, listed on Indiegogo, as a “hardware bombshell.” A more precise description would be an FPGA-accelerated sensor hub sporting a massive array of on-board sensors. It’s a one-stop shop for prototyping IoT and industrial IoT (IIoT) devices using the Raspberry Pi board as a base.
Here’s a top-and-bottom photo of the board:
MATRIX Creator dev board for the Raspberry Pi
Note: That square hole in the center of the board allows the Raspberry Pi’s Camera Module to peek through.
Here’s a detailed list of the various components on the MATRIX Creator board and its capabilities:
MATRIX Creator dev board Components and Capabilities
Note that one of those components is a Xilinx Spartan-6 LX4 FPGA, which makes a very fine low-cost sensor hub capable of operating in real time. No doubt you’d like to see how the FPGA fits into this board. MATRIX Labs has that covered with this block diagram:
MATRIX Creator dev board Block Diagram
MATRIX Labs has also developed supporting tools and software for the MATRIX Creator dev board including MATRIX OS, MATRIX CV (a computer-vision library), and MATRIX CLI (a sensor-hub application). More software is in development.
Unlike many crowd-funded projects, the MATRIX Creator is already shipping, so according to MATRIX Labs you’re assured of getting a board. But only two days remain in the funding period, so check it out now.
A novel development by Zhao Tian, Kevin Wright, and Xia Zhou at Dartmouth College encodes data streams on sparse, ultrashort pulses and transmits these pulses optically using low-cost, visible LED luminaires designed for room illumination. (See “The DarkLight Rises: Visible Light Communication in the Dark”) The optical light pulses are hundreds of nanoseconds long so they’re far too short—by four or five orders of magnitude—to be perceived as light by human vision. However, they’re long enough to be captured by an inexpensive photodiode and are therefore useful for digital communications—albeit slow communications, on the order of 1.6 to 1.8 kbits/sec. Nevertheless, that rate meets a number of low-speed communications needs including many IoT and industrial IoT requirements.
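To make the idea concrete, here's a toy Python sketch of pulse-interval encoding in the spirit of DarkLight. The symbol mapping, slot width, and pulse length below are my own illustrative assumptions, not the actual DarkLight protocol:

```python
# Toy pulse-position sketch inspired by DarkLight's idea: data rides on
# the *timing* between ultrashort LED pulses, not on visible brightness.
# Slot width, pulse width, and symbol mapping are illustrative only.

PULSE_NS = 500          # hypothetical pulse width (hundreds of ns, invisible)
SLOT_NS = 10_000        # hypothetical timing slot
BITS_PER_SYMBOL = 2     # four possible gap lengths encode 2 bits each

def encode(bits):
    """Map each bit pair to an inter-pulse gap (in ns)."""
    assert len(bits) % BITS_PER_SYMBOL == 0
    gaps = []
    for i in range(0, len(bits), BITS_PER_SYMBOL):
        symbol = bits[i] * 2 + bits[i + 1]       # value 0..3
        gaps.append((symbol + 1) * SLOT_NS)      # the gap length carries the data
    return gaps

def decode(gaps):
    """Recover bits from measured inter-pulse gaps."""
    bits = []
    for gap in gaps:
        symbol = round(gap / SLOT_NS) - 1
        bits += [symbol >> 1, symbol & 1]
    return bits

data = [1, 0, 1, 1, 0, 0]
assert decode(encode(data)) == data              # lossless round trip
```

With 2 bits per symbol and gaps of tens of microseconds, a scheme like this lands in the low-kbit/sec range, consistent with the 1.6 to 1.8 kbits/sec the researchers report.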
Digilent Basys 3 Artix-7 FPGA Trainer Board
Here’s a short 1-minute video giving you an ultrashort overview of the project:
If you’re developing products in the IIoT (Industrial Internet of Things) space, you’ll likely want to sign up for a free 1-hour Webinar that Xilinx’s Director of Strategic Marketing for Industrial IoT, Dan Isaacs, will present on October 12 in conjunction with Automation World. Dan will trace the history of the IIoT from the old way of doing things, with islands of automation, to today, where everything in the factory delivers data over a network to a central repository for analysis, action, and optimization.
Xcell Daily has covered several Photonfocus industrial video cameras, all of them built on a Xilinx Spartan-6 FPGA vision platform that Photonfocus developed as a common base for its numerous industrial video cameras. One advantage the Spartan-6 FPGA provides is the ability to adapt to nearly any sort of imaging sensor (monochrome, color, or hyperspectral) through reprogramming of the interface I/O and the on-board video processing. Another advantage the FPGA provides is the ability to go fast in video-processing applications.
Photonfocus used this latter capability to develop the DR1 family of double-rate GigE video cameras a while ago (the company recently announced quad-rate QR1 GigE industrial cameras based on the same FPGA platform) and a new article written by Andrew Wilson in Vision Systems Design Magazine details the use of Photonfocus DR1 double-rate cameras for a high-speed, visual-inspection system developed by M3C Industrial Automation and Vision. This system can inspect and sort 25,000 cork stoppers per hour using the Photonfocus DR1 camera’s extreme 1800 frames/sec capability.
The camera employed in this application is the Photonfocus DR1-D2048x1088C-192-G2-8 high-speed color camera. Used in line mode, the camera captures an 896x100-pixel region of interest (ROI) at an astounding 1800 frames/sec. The corks pass through two imaging stations that employ structured light generated by an Effilux LED 3D projector. The LED projector’s bright white line appears across the cork stopper within the camera’s ROI and the bright illumination ensures color fidelity. Color is an important sorting criterion.
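A quick back-of-the-envelope calculation (my own arithmetic, assuming 8 bits per pixel, which the article doesn't state) shows why a double-rate GigE link is needed at that ROI rate:

```python
# Data-rate check for an 896x100-pixel ROI captured at 1800 frames/sec.
# The 8-bit pixel depth is an assumption for illustration.
width, height, fps = 896, 100, 1800
bits_per_pixel = 8

pixels_per_sec = width * height * fps                  # 161,280,000 pixels/sec
mbytes_per_sec = pixels_per_sec * bits_per_pixel / 8 / 1e6

print(f"{mbytes_per_sec:.1f} MB/s")                    # ~161.3 MB/s
# A single GigE link tops out around 125 MB/s raw (less after protocol
# overhead), so a standard GigE Vision camera can't sustain this stream.
```

Under these assumptions the raw stream is roughly 161 MB/s, comfortably beyond a single GigE link but within reach of the DR1's double-rate transport.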
The Photonfocus DR1 double-rate GigE camera family is based on the Xilinx Spartan-6 FPGA
Corks pass through two such inspection stations. The first imaging station performs a 3D analysis of the cork stoppers using an attached PC running software written in C++ that assembles 3D images from the captured frames as the corks rapidly pass by on a conveyer. Then the corks are flipped before passing through a second imaging station that inspects the stoppers’ reverse side. The system looks for defects in the stoppers including unacceptably large holes, superficial deformations, incorrect size, and color imperfections. The system uses this information to grade and sort the good stoppers and to discard rejects.
Numerous cork manufacturers across Europe have already adopted this high-speed inspection system.
Want to see the M3C Industrial Automation and Vision cork-inspection system in action? Thought so. Here’s Andrew Wilson’s video:
The only way this system could be better would be for it to be inspecting chocolate-chip cookies, and I get the rejects.
Other Xcell Daily blog posts about Photonfocus video cameras based on the Spartan-6 FPGA include:
Although the Xilinx Spartan-6 FPGA family is now more than half a decade old, it continues to demonstrate real value as a cost-effective foundation for many new video and vision platforms.
Analog Devices (ADI) introduced the AD9371 Integrated, Dual Wideband RF Transceiver back in May as part of its “RadioVerse.” You use the AD9371 for building extremely flexible, digital radios with operating frequencies of 300MHz to 6GHz, which covers most of the licensed and unlicensed cellular bands. The IC supports receiver bandwidths to 100MHz. It also supports observation receiver and transmit synthesis bandwidths to 250MHz, which you can use to implement digital correction algorithms.
Last week, the company started shipping FMC eval cards based on the AD9371: the ADRV9371-N/PCBZ and ADRV9371-W/PCBZ.
ADRV9371-N Eval Board for the Analog Devices AD9371 Integrated Wideband RF Transceiver
ADI was showing one of these new AD9371 Eval Boards in operation this week at the GNU Radio Conference held in Boulder, Colorado. The board was plugged into the FMC connector on a Xilinx ZC706 Eval Kit, which is based on a Xilinx Zynq Z7045 SoC. The Xilinx Zynq SoC and the AD9371 make an extremely powerful design combination for developing all sorts of SDRs (software-defined radios).
Here’s a large-scale problem literally too hot to handle:
Global steelmaker and mining company ArcelorMittal runs a 24/7 production line that continuously welds steel sheets unwound from giant coils. If any part of the continuous weld tears due to a weak spot, the line quickly shuts down. If the tear occurs in an oven, there’s a 4-day cool-down period before the problem can be addressed. The cost of that delay is about €1 million in lost production, repair, and downtime. That’s something best avoided even if it happens only once or twice a year.
ArcelorMittal asked V2i, a Belgian engineering company specializing in structural dynamics, to address this production problem. V2i developed a reliable, autonomous system that detects low-quality welds and quickly warns the operator so there’s sufficient time to restart the welding process before the weld tears.
The system needed to monitor several real-time parameters including steel hardness, thickness, welding speed, and welding current. In the final design, a non-contact profilometer measures the weld shape precisely and quickly; a pyrometer measures the weld bead’s temperature; and several more sensors monitor parameters including pressures, current, voltage, positions, and speed. V2i used a National Instruments (NI) CompactRIO cRIO-9038 controller with appropriate snap-in I/O cards to collect and condition all of the sensor signals.
The NI cRIO-9038 controller—programmed with a combination of NI's LabVIEW System Design Software, LabVIEW Real-Time, and LabVIEW FPGA—has sufficient processing power to handle the complex tasks of combining all of the real-time sensor data and producing a go/no-go status display while ongoing result data feeds ArcelorMittal’s online database via Ethernet. Processing power inside the cRIO-9038 comes from a combination of a dual-core, 1.33GHz Intel Atom processor and a Xilinx Kintex-7 160T FPGA.
Here’s a video of the welder in action:
Note: This project won the 2016 Intel Internet of Things Award and was a finalist in the Industrial Machinery and Control category at this month’s NI Engineering Impact Awards during NI Week held in Austin, Texas. You can read more details in the full case study here.
Sagemcom Technology NRJ needed to increase demand-based production volume and improve first-pass yield on an energy-meter manufacturing line in Tunisia while maintaining high product quality and 99% uptime on the test line. These meters are used to create smart grids. The existing meter test system, based on a Windows PC, was not up to the job, so Sagemcom Technology turned to the National Instruments (NI) CompactRIO cRIO-9030 controller programmed with NI’s LabVIEW System Design Software. The controller’s integrated Xilinx Kintex-7 70T FPGA, which provided the real-time control and high-speed image processing required by the improved production-test program, can be programmed with NI’s LabVIEW FPGA and FPGA-based LabVIEW Vision Development Module. The result: a 38% improvement in test time, a 50% cost reduction for the controller, a 17% improvement in productivity, and 70% LabVIEW code reuse. Those are truly excellent numbers.
Testing takes place in a 4-station robotic testing cell where two robots move the meters from one test station to the next. Each station conducts different tests, including a functional test, a machine-vision test to verify proper operation of the energy meter’s LCD, a 3.2kV hipot test to verify fault immunity, and a test of the meter’s power-line communications capability. The NI CompactRIO controls the robot arm, runs the tests, and reports the results to a server via Ethernet.
Here’s a nicely done, minute-long video of the robotic test system in action:
Note: This project won in the Industrial Machinery and Control category at this month’s NI Engineering Impact Awards during NI Week held in Austin, Texas. You can read more details in the full case study here.
A few days ago at NI Week in Austin, National Instruments showed a new member of its CompactRIO platform with integrated WiFi (dual-band 802.11 a/b/g/n) to support wireless data acquisition and control. (Think IIoT.) The new box is the 8-slot cRIO-9037, which combines WiFi capability with a 1.33GHz dual-core Intel Atom processor and a Xilinx Kintex-7 160T FPGA. Of course, it’s compatible with NI’s LabVIEW System Design Software.
National Instruments cRIO-9037 with integrated WiFi
“In the future everything will be attacked. How does one protect a device with limited resources against hardware, firmware, configuration and data hacks during the whole life cycle?”
PFP’s cybersecurity technology takes fine-grained measurements of a processor’s power consumption and detects anomalies using base references from trusted software sources, machine learning, and data analytics. Because it only monitors power consumption, it’s impossible for intruders to detect its presence.
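As a rough illustration of the general approach (the statistics and threshold below are my own simplified assumptions; PFP's actual machine-learning pipeline is far more sophisticated), a power-signature anomaly check can be sketched as:

```python
# Minimal sketch of power-signature anomaly detection: learn a per-sample
# baseline from power traces of trusted code, then flag any window whose
# samples stray too far from that baseline. Illustrative only.
import statistics

def baseline(trusted_traces):
    """Per-sample mean and stdev across traces from known-good runs."""
    means = [statistics.mean(col) for col in zip(*trusted_traces)]
    devs = [statistics.stdev(col) for col in zip(*trusted_traces)]
    return means, devs

def is_anomalous(trace, means, devs, z_thresh=4.0):
    """Flag the trace if any sample is more than z_thresh stdevs off."""
    return any(abs(x - m) > z_thresh * (d or 1e-9)
               for x, m, d in zip(trace, means, devs))

# Tiny synthetic example: three trusted power traces, three samples each.
trusted = [[1.0, 2.0, 1.5], [1.1, 2.1, 1.4], [0.9, 1.9, 1.6]]
m, d = baseline(trusted)
assert not is_anomalous([1.0, 2.0, 1.5], m, d)   # matches trusted behavior
assert is_anomalous([1.0, 5.0, 1.5], m, d)       # injected code shifts the draw
```

The key property the sketch preserves is the one PFP highlights: the monitor only observes the power rail, so the code under surveillance has no way to detect that it is being watched.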
The original demo of this technology appeared in the Xilinx booth running on a platform based on a Xilinx Zynq SoC at last year’s ARM TechCon. (See “Zynq-based PFP eMonitor brings power-based security monitoring to embedded systems.”)
Here’s a 1-minute video with a very brief overview of PFP’s technology:
This week’s announcement by PFP and Wistron makes PFP’s cybersecurity technology available to Wistron’s customers. Wistron is an ODM (original design manufacturer); it was originally Acer’s manufacturing arm in Taiwan but has been an independent company since 2000, with operations in Asia, Europe, and North America. The company currently develops and manufactures a range of electronic products including notebook PCs, desktop PCs, servers and storage systems, LCD TVs, information appliances, handheld devices, networking and communication products, and IoT devices for a variety of clients. Wistron's customers can outsource some or all of their product-development tasks, and this week’s announcement allows Wistron to incorporate PFP’s cybersecurity technology into new designs.