
Although humans once served as the final inspectors for PCBs, today’s component dimensions and manufacturing volumes mandate the use of camera-based automated optical inspection (AOI) systems. Amfax has developed a 3D AOI system—the a3Di—that uses two lasers to make millions of 3D measurements with better than 3μm accuracy. One of the company’s customers uses an a3Di system to inspect 18,000 assembled PCBs per day.

 

The a3Di control system is based on a National Instruments (NI) cRIO-9075 CompactRIO controller—with an integrated Xilinx Virtex-5 LX25 FPGA—programmed with NI’s LabVIEW systems engineering software. The controller manages all aspects of the a3Di AOI system including monitoring and control of:

 

 

  • Machine motors
  • Control switches
  • Optical position sensors
  • Inverters
  • Up and downstream SMEMA (Surface Mount Equipment Manufacturers Association) conveyor control
  • Light tower
  • Pneumatics
  • Operator manual controls for PCB width control
  • System emergency stop

 

 

The system provides height-graded images like this:

 

 

 

Amfax 3D PCB image.jpg 

 

3D Image of a3Di’s Measurement Data: Colors represent height, with Z resolution down to less than a micron. The blue section at the top indicates signs of board warp. Laser-etched component information appears on some of the ICs.

 

 

 

The a3Di system then compares this image against a stored golden reference image to detect manufacturing defects.

 

Amfax reports that the “CompactRIO system has proven to be dependable, reliable, and cost-effective.” In addition, the company found it could get far better timing resolution with the CompactRIO system than the 1msec resolution usually provided by PLCs.

 

 

This project was a 2017 NI Engineering Impact Award Finalist in the Electronics and Semiconductor category last month at NI Week. It is documented in this NI case study.

 

HIL simulator based on NI equipment allows Hyundai to cut marine diesel development from 3 years to one

by Xilinx Employee 06-13-2017 12:09 PM - edited 06-13-2017 12:11 PM

 

Hyundai Heavy Industries (HHI) is the world’s foremost shipbuilding company and the company’s Engine and Machinery Division (HHI-EMD) is the world’s largest marine diesel engine builder. HHI’s HiMSEN medium-sized engines are four-stroke diesels with output power ranging from 960kW to 25MW. These engines power electric generators on large ships and serve as the propulsion engine on medium and small ships. HHI-EMD is always developing newer, more fuel-efficient engines because the fuel costs for these large diesels run about $2000/hour. Better fuel efficiency will significantly reduce operating costs and emissions.

 

For that research, HHI-EMD developed monitoring and diagnostic equipment to better understand engine combustion performance and an HIL system to test new engine controller designs. The test and HIL systems are based on equipment from National Instruments (NI).

 

Engine instrumentation must be able to monitor 10-cylinder engines running at thousands of RPM while measuring crankshaft angle with 0.1-degree resolution. From that information, the engine test and monitoring system calculates in-cylinder peak pressure, mean effective pressure, and cycle-to-cycle pressure variation. All this must happen every 10μsec for each cylinder.
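To make one of those calculations concrete, here’s a minimal sketch of how indicated mean effective pressure (IMEP) can be computed from one cycle of sampled in-cylinder pressure and cylinder volume. It’s the standard textbook formula—IMEP is the cycle work ∮p dV divided by the displaced volume—rendered in illustrative C++; the names and the trapezoidal integration are my choices, not HHI-EMD’s LabVIEW implementation.

```cpp
#include <cstddef>
#include <vector>

// Indicated mean effective pressure from one engine cycle.
// pressure[i] is in-cylinder pressure [Pa] and volume[i] the cylinder
// volume [m^3] sampled at the same crank angles; displacement is the
// swept volume Vd [m^3]. IMEP = (cycle work) / Vd = (integral of p dV) / Vd.
double imep(const std::vector<double>& pressure,
            const std::vector<double>& volume,
            double displacement)
{
    double work = 0.0;  // integral of p dV by the trapezoidal rule
    for (std::size_t i = 1; i < pressure.size(); ++i) {
        const double dV = volume[i] - volume[i - 1];
        work += 0.5 * (pressure[i] + pressure[i - 1]) * dV;
    }
    return work / displacement;  // result is in Pa (J/m^3)
}
```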

 

HHI-EMD elected to use an NI cRIO-9035 Controller, which incorporates a Xilinx Kintex-7 70T FPGA, to serve as the platform for developing its HiCAS test and data-acquisition system. The HiCAS system monitors all aspects of the engine under test including engine speed, in-cylinder pressure, and pressures in the intake and exhaust systems. This data helped HHI-EMD engineers analyze the engine’s overall performance and the performance of key parts using thermodynamic analysis. HiCAS provides real-time analysis of dynamic data including:

 

  • In-cylinder peak pressure
  • Indicated mean effective pressure and cycle-to-cycle variation
  • Cylinder-to-cylinder distribution
  • Cyclic moving parts fault diagnosis

 

Using the collected data, the engineering team then developed a model of the diesel engine, resulting in the development of an HIL system used to exercise the engine controllers. This engine model runs in real time on an NI PXI system synchronized with the high-speed signal-sensor simulation software running on the PXI system’s multifunction FPGA-based FlexRIO module. The HIL system transmits signals to the engine controllers, simulating an operating engine and eliminating the operating costs of a large diesel engine during these tests. HHI-EMD credits the FPGAs in these systems for making the calculations run fast enough for real-time simulation. The simulated engine also permits fault testing without the risk of damaging an actual engine. Of course, all of this is programmed using NI’s LabVIEW systems engineering software and LabVIEW FPGA.

 

 

 

 

HHI-EMD HIL Simulator for Marine Diesel Engines.jpg 

 

 

HHI-EMD HIL Simulator for Marine Diesel Engines

 

 

 

According to HHI-EMD, development of the HiCAS engine-monitoring system and virtual verification based on the HIL system shortened development time from more than three years to one, significantly accelerating the time-to-market for HHI-EMD’s more eco-friendly marine diesel engines.

 

 

 

This project was a 2017 NI Engineering Impact Award Finalist in the Transportation and Heavy Equipment category last month at NI Week and won the 2017 HPE Edgeline Big Analog Data Award. It is documented in this NI case study.


Many engineers in Canada wear the Iron Ring on their finger, presented to engineering graduates as a symbolic, daily reminder that they have an obligation not to design structures or other artifacts that fail catastrophically. (Legend has it that the iron in the ring comes from the first Quebec Bridge—which collapsed during its construction in 1907—but the legend appears to be untrue.) All engineers, whether wearing the Canadian Iron Ring or not, feel an obligation to develop products that do not fail dangerously. For buildings and other civil engineering works, that usually means designing structures with healthy design margins even for worst-case projected loading. However, many structures encounter worst-case loads infrequently or never. For example, a sports stadium experiences maximum loading for perhaps 20 or 30 days per year, for only a few hours at a time when it fills with sports fans. The rest of the time, the building is empty and the materials used to ensure that the structure can handle those loads are not needed to maintain structural integrity.

 

The total energy consumed by a structure over its lifetime is a combination of the energy needed to mine and fabricate the building materials and to build the structure (embodied energy) and the energy needed to operate the building (operational energy). The resulting energy curve looks something like this:

 

 

 

Embodied versus Operational Energy for a Structure.jpg
 

 

 

For completely passive structures, which describes most structures built over the past several thousand years, embodied energy dominates the total consumed energy because structural members must be designed to bear the full design load at all times. Alternatively, a smart structure with actuators that stiffen the structure only when needed will require more operational energy but the total required embodied energy will be smaller. Looking at the above conceptual graph, a well-designed active-passive system minimizes the total required energy for the structure.
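Stated compactly (the notation below is mine, not Senatore’s):

```latex
% Total lifetime energy as a function of the load-bearing material s
% committed to the design (illustrative notation only):
E_{\mathrm{total}}(s) = E_{\mathrm{embodied}}(s) + E_{\mathrm{operational}}(s),
\qquad s^{*} = \operatorname*{arg\,min}_{s} \; E_{\mathrm{total}}(s)
```

A fully passive design sits at large s (operational energy near zero, embodied energy large); the well-designed active-passive system operates near s*.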

 

Active control has already been used in structural design, most widely for vibration control. During his doctoral work, Gennaro Senatore formulated a new methodology for designing adaptive structures. His research project was a collaboration between University College London and Expedition Engineering. As part of that project, Senatore built a large-scale prototype of an active-passive structure at the University College London structures laboratory. The resulting prototype is a 6m cantilever spatial truss with a 37.5:1 span-to-depth ratio. Here’s a photo of the large-scale prototype truss:

 

 

Active-Passive Cantilever Truss.jpg
 

 

 

You can see the actuators just beneath the top surface of the truss. When the actuators are not energized, the cantilever truss flexes quite a lot with a load placed at the extreme end. However, this active system detects the load-induced flexion and compensates by energizing the actuators and stiffening the cantilever.

 

Here’s a photo showing the amount of flex induced by a 100kg load at the end of the cantilever without and with energized actuators:

 

 

 

Cantilever Flexion.jpg 

 

 

The top half of the image shows that the truss flexes 170mm under load when the actuators are not energized, but only 2mm when the system senses the load and energizes the linear actuators.

 

The truss incorporates ten linear electric actuators that stiffen the truss when sensors detect a load-induced deflection. The control system for this active-passive truss consists of a National Instruments (NI) CompactRIO cRIO-9024 controller, 45 strain-gage sensors, 10 actuators, and five driver boards (one for each actuator pair). The NI cRIO-9024 controller pairs with a card cage that accepts I/O modules and incorporates a Virtex-5 FPGA for reconfigurable I/O. (That’s what the “RIO” in cRIO stands for.) In this application, the integral Virtex-5 FPGA also provides in-line processing for acquired and generated signals.

The system is programmed using NI’s LabVIEW systems engineering software.
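To make the control concept concrete, here’s a conceptual sketch of the sense-and-compensate cycle in C++. The real controller is LabVIEW code on the cRIO-9024 with in-line FPGA processing; the I/O stubs and the simple linear gain-matrix law below are my assumptions for illustration, though the sensor and actuator counts come from the article.

```cpp
#include <array>
#include <cstdio>

constexpr int kSensors   = 45;   // strain gages on the truss
constexpr int kActuators = 10;   // linear electric actuators

// Stand-in for the CompactRIO DAQ: one strain reading per gage.
std::array<double, kSensors> read_strain_gages() {
    return {};                   // replace with real I/O reads
}

// Stand-in for the five actuator driver boards.
void command_actuator(int idx, double extension_mm) {
    std::printf("actuator %d -> %+.3f mm\n", idx, extension_mm);
}

// One control cycle: map measured strains to actuator extensions
// through a gain matrix, so the truss stiffens only under load.
void control_cycle(const std::array<double, kActuators * kSensors>& gain) {
    const auto strain = read_strain_gages();
    for (int a = 0; a < kActuators; ++a) {
        double u = 0.0;
        for (int s = 0; s < kSensors; ++s)
            u += gain[a * kSensors + s] * strain[s];
        command_actuator(a, u);
    }
}

int main() { control_cycle({}); }   // zero gains: actuators stay idle
```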

 

A large structure would require many such subsystems, all communicating through a network. This is clearly one very useful way to employ the IIoT in structures.

 

 

This project was a 2017 NI Engineering Impact Award Finalist in the Industrial Machinery and Control category last month at NI Week. It is documented in this NI case study, which includes many more technical details and a short video showing the truss in action as a load is applied.

 

 

 

 

When someone asks where Xilinx All Programmable devices are used, I find it a hard question to answer because there’s such a wide range of applications—as demonstrated by the thousands of Xcell Daily blog posts I’ve written over the past several years.

 

Now, there’s a 5-minute “Powered by Xilinx” video with clips from several companies using Xilinx devices for applications including:

 

  • Machine learning for manufacturing
  • Cloud acceleration
  • Autonomous cars, drones, and robots
  • Real-time 4K, UHD, and 8K video and image processing
  • VR and AR
  • High-speed networking by RF, LED-based free-air optics, and fiber
  • Cybersecurity for IIoT

 

That’s a huge range covered in just five minutes.

 

Here’s the video:

 

 

 

 

 

Last week in Austin on the NI Week exhibit floor, you could see a pair of slot cars racing around a moderately sized track while avoiding obstacles, with real-time position sensing and control managed by TSN-enabled National Instruments (NI) cRIO-9035 8-slot CompactRIO controllers communicating through a Cisco IE-4000 Series Managed Industrial Ethernet Switch‎. (TSN is “Time Sensitive Networking,” the IEEE Ethernet standard for deterministic packet transmission and handling.) NI’s cRIO-9035 CompactRIO controllers pair an Intel Atom 32-bit processor with a Xilinx Kintex-7 FPGA to implement highly responsive, real-time control obtainable only through the speed of an FPGA’s programmable hardware.

 

Here’s a video of the TSN slot car system in action with a clear (and graphic) explanation of TSN’s advantages in the real world:

 

 

 

Can we talk? About security? You know that it’s a dangerous world out there. For a variety of reasons, bad actors want to steal your data, or steal your customers’ data, or disrupt operations. Your job is not only to design something that works; these days, you also need to design equipment that resists hacking and tampering. PFP Cybersecurity provides IP that helps you create systems that have robust defenses against such exploits.

 

“PFP” stands for “power fingerprinting,” which combines AI and analog power analysis to create high-speed, next-generation cyber protection that can detect tampering in milliseconds instead of days, weeks, or months. It does this by observing the tiny changes to a system’s power consumption during normal operation, learning what’s normal, and then monitoring power consumption to detect an abnormal situation that might signal tampering.
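In miniature, that learn-then-monitor idea looks something like the C++ below—my sketch of the general approach, not PFP Cybersecurity’s proprietary algorithm: learn the statistics of normal power draw, then flag samples that deviate too far from it.

```cpp
#include <cmath>
#include <cstdio>

// Toy power-fingerprinting-style anomaly detector (illustrative only).
struct PowerBaseline {
    double mean = 0.0, m2 = 0.0;
    long   n = 0;
    void learn(double sample) {            // Welford's online mean/variance
        ++n;
        const double d = sample - mean;
        mean += d / n;
        m2   += d * (sample - mean);
    }
    bool anomalous(double sample, double k = 6.0) const {
        const double sigma = std::sqrt(m2 / (n > 1 ? n - 1 : 1));
        return std::fabs(sample - mean) > k * sigma;  // k-sigma rule
    }
};

int main() {
    PowerBaseline base;
    for (int i = 0; i < 10000; ++i)
        base.learn(1.00 + 0.01 * std::sin(i * 0.1));  // "normal" operation
    std::printf("tampered sample flagged: %d\n", base.anomalous(1.25));
}
```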

 

The 3-minute video below discusses these aspects of PFP Cybersecurity’s IP and also discusses why the Xilinx Zynq SoC and Zynq UltraScale+ MPSoC are a perfect fit for this security IP. The Zynq device families can all perform high-speed signal processing, have built-in analog conversion circuitry for measuring voltage and current, and can implement high-performance machine-learning algorithms for analyzing power usage.

 

Originally, PFP Cybersecurity designed its Zynq SoC-based monitor to watch over other systems but, as the video discusses, if the monitored system is already based on a Zynq device, it can monitor itself and return itself to a known good state if tampering is suspected.

 

Here’s the video:

 

 

 

 

 

Note: For more information about PFP Cybersecurity, see “Zynq-based PFP eMonitor brings power-based security monitoring to embedded systems.”

 

 

Plethora IIoT develops cutting‑edge solutions to Industry 4.0 challenges using machine learning, machine vision, and sensor fusion. In the video below, a Plethora IIoT Oberon system monitors power consumption, temperature, and the angular speed of three positioning servomotors in real time on a large ETXE-TAR Machining Center for predictive maintenance—to spot anomalies with the machine tool and to schedule maintenance before these anomalies become full-blown faults that shut down the production line. (It’s really expensive when that happens.) The ETXE-TAR Machining Center is center-boring engine crankshafts. This bore is the critical link between a car’s engine and the rest of the drive train including the transmission.

 

 

 

Plethora IIoT Oberon System.jpg 

 

 

 

Plethora uses Xilinx Zynq SoCs and Zynq UltraScale+ MPSoCs as the heart of its Oberon system because these devices’ unique combination of software-programmable processors, hardware-programmable FPGA fabric, and programmable I/O allow the company to develop real-time systems that implement sensor fusion, machine vision, and machine learning in one device.

 

Initially, Plethora IIoT’s engineers used the Xilinx Vivado Design Suite to develop their Zynq-based designs. Then they discovered Vivado HLS, which allows you to take algorithms in C, C++, or SystemC directly to the FPGA fabric using hardware compilation. The engineers’ first reaction to Vivado HLS: “Is this real or what?” They discovered that it was real. Then they tried the SDSoC Development Environment with its system-level profiling, automated software acceleration using programmable logic, automated system connectivity generation, and libraries to speed programming. As they say in the video, “You just have to program it and there you go.”
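If you haven’t seen HLS before, here’s the flavor of it: a trivial, self-contained C++ kernel of the sort Vivado HLS can compile into pipelined FPGA logic. This is a generic illustration—the FIR filter and its coefficients are my invention, not Plethora IIoT’s code—though the pragmas shown are standard Vivado HLS directives.

```cpp
// A 4-tap FIR filter written for HLS hardware compilation.
typedef short sample_t;

sample_t fir4(sample_t x) {
    static const sample_t coeff[4] = {1, 3, 3, 1};
    static sample_t shift[4] = {0, 0, 0, 0};
#pragma HLS ARRAY_PARTITION variable=shift complete
#pragma HLS PIPELINE II=1
    int acc = 0;
    for (int i = 3; i > 0; --i) {   // shift register of past samples
        shift[i] = shift[i - 1];
        acc += coeff[i] * shift[i];
    }
    shift[0] = x;
    acc += coeff[0] * x;
    return (sample_t)(acc >> 3);    // coefficients sum to 8, so scale back
}
```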

 

Here’s the video:

 

 

 

 

Plethora IIoT is showcasing its Oberon system in the Industrial Internet Consortium (IIC) Pavilion during the Hannover Messe Show being held this week. Several other demos in the IIC Pavilion are also based on Zynq All Programmable devices.

 

 

What do you do if you want to build a low-cost, state-of-the-art experimental SDR (software-defined radio) that’s compatible with GNURadio—the open-source development toolkit and ecosystem of choice for serious SDR research? You might want to do what Lukas Lao Beyer did: start with the incredibly flexible, full-duplex Analog Devices AD9364 1x1 Agile RF Transceiver IC and then give it all the processing power it might need with an Artix-7 A50T FPGA. Connect these two devices on a meticulously laid-out circuit board taking all RF design rules into account and then write the appropriate drivers to fit into the GNURadio ecosystem.

 

Sounds like a lot of work, doesn’t it? It’s taken Lukas two years and four major design revisions to get to this point.

 

Well, you can circumvent all that work and get straight to the SDR research by signing up for one of Lukas’ FreeSRP boards on the Crowd Supply crowd-funding site. The cost for one FreeSRP board and the required USB 3.0 cable is $420.

 

 

FreeSRP Board.jpg

 

Lukas Lao Beyer’s FreeSRP SDR board based on a Xilinx Artix-7 A50T FPGA

 

 

 

With 32 days left in the Crowd Supply funding campaign period, the project has raised pledges of a little more than $12,000. That’s about 16% of the way towards the goal.

 

There are a lot of well-known SDR boards available, so conveniently, the FreeSRP Crowd Supply page provides a comparison chart:

 

 

FreeSRP Comparison Chart.jpg 

 

 

If you really want to build your own, the documentation page is here. But if you want to start working with SDR, sign up and take delivery of a FreeSRP board this summer.

 

 

 

On April 11, the third free Webinar in Xilinx's "Precise, Predictive, and Connected Industrial IoT" series will provide insight into the role of Zynq All Programmable SoCs in the breadth of applications across the IIoT Edge and the connectivity between them. A brief summary of IIoT trends will be presented, followed by an overview of the Data Distribution Service (DDS) IIoT databus standard presented by RTI, the IIoT Connectivity Company, and a discussion of how DDS and OPC-UA target different connectivity challenges in IIoT systems.

 

Webinar attendees will also learn:

 

  • The main benefits of data-centric communications using DDS including Reliability, Security, Real-Time Response, and Ease of Integration.

 

  • When and how to connect DDS systems to OPC-UA systems for the most intelligent and innovative distributed Industrial IoT and Industry 4.0 systems.

 

  • How Xilinx’s All Programmable Industrial Control System (APICS) integrates DDS and OPC-UA.

 

Register here.

How to use machine learning for embedded vision—and many other embedded applications

by Xilinx Employee 03-30-2017 10:02 AM - edited 03-30-2017 12:00 PM

 

Adam Taylor and Xilinx’s Sr. Product Manager for SDSoC and Embedded Vision Nick Ni have just published an article on the EE News Europe Web site titled “Machine learning in embedded vision applications.” That title’s pretty self-explanatory, but there are a few points I’d like to highlight. Then you can go read the full article yourself.

 

As the article states, “Machine learning spans several industry mega trends, playing a very prominent role within not only Embedded Vision (EV), but also Industrial Internet of Things (IIoT) and Cloud Computing.” In other words, if you’re designing products for any embedded market, you might well find yourself at a competitive disadvantage if you’re not adding machine-learning features to your road map.

 

This article closely ties machine learning with neural networks (including Feed-forward Neural Networks (FNNs), Recurrent Neural Networks (RNNs), Deep Neural Networks (DNNs), and Convolutional Neural Networks (CNNs)). Neural networks are not programmed; they’re trained. Then, if they’re part of an embedded design, they’re deployed. Training is usually done using floating-point neural-network implementations but, for efficiency (power and cost), deployed neural networks can use fixed-point representations with very little or no loss of accuracy. (See “Counter-Intuitive: Fixed-Point Deep-Learning Inference Delivers 2x to 6x Better CNN Performance with Great Accuracy.”)
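To see why 8-bit deployment costs so little accuracy, here’s a minimal sketch of symmetric fixed-point quantization—my illustration of the general technique, not the actual Xilinx tool flow: each weight is snapped to a 256-level grid, and the rounding error stays small relative to the weight range.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>

// Map a float weight onto an 8-bit grid with step `scale`, and back.
int8_t quantize(float w, float scale) {
    const float q = std::round(w / scale);
    return (int8_t)std::max(-128.0f, std::min(127.0f, q));
}
float dequantize(int8_t q, float scale) { return q * scale; }

int main() {
    const float w = 0.4375f;
    const float scale = 1.0f / 128.0f;   // grid covers roughly [-1, 1)
    const int8_t q = quantize(w, scale);
    // Worst-case rounding error is scale/2, i.e. under 0.4% of the range.
    std::printf("w=%f -> q=%d -> %f\n", w, q, dequantize(q, scale));
}
```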

 

The programmable logic inside of Xilinx FPGAs, Zynq SoCs, and Zynq UltraScale+ MPSoCs is especially good at implementing fixed-point neural networks, as described in this article by Nick Ni and Adam Taylor. (Go read the article!)

 

Meanwhile, this is a good time to remind you of the recent Xilinx introduction of the reVISION stack for neural network development using Xilinx All Programmable devices. For more information about the Xilinx reVISION stack, see:

 

 

In yesterday’s EETimes article titled “How will Ethernet go real-time for industrial networks?,” author Richard Wilson interviews National Instruments’ Global Technology and Marketing Director Rahman Jamal about using OPC-UA (the OPC Foundation’s Unified Architecture) and TSN (time-sensitive networking) to build industrial Ethernet networks (IIoT/Industrie 4.0) that deliver real-time response. (Yes, yes, yes, “real-time” is a loosely defined term where “real” depends on your system’s temporal reality.) As Jamal states in the interview, some constrained industrial Ethernet network topologies need no help to achieve real-time operation. In other cases and for other topologies, you need Ethernet implementations that are “heavily modified at the hardware level to achieve performance.”

 

One of the hardware additions that can really help is a hardware implementation of the IEEE 1588v2 PTP (Precision Time Protocol) clock-synchronization standard. PTP permits each piece of network-connected equipment to be synchronized to a 64-bit timer, which can be used for time-stamping, synchronization, and control, and as a common time reference to implement TSN.
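For reference, the arithmetic at the heart of IEEE 1588 fits in a few lines. The offset and delay formulas below are the standard PTP delay request-response equations; the surrounding C++ scaffolding and example numbers are mine.

```cpp
#include <cstdint>
#include <cstdio>

// The four PTP message timestamps (nanoseconds on the 64-bit timescale):
//   t1 = master sends Sync        t2 = slave receives Sync
//   t3 = slave sends Delay_Req    t4 = master receives Delay_Req
struct PtpResult { int64_t offset_ns, delay_ns; };

PtpResult ptp_servo(int64_t t1, int64_t t2, int64_t t3, int64_t t4) {
    PtpResult r;
    r.offset_ns = ((t2 - t1) - (t4 - t3)) / 2;  // slave clock minus master
    r.delay_ns  = ((t2 - t1) + (t4 - t3)) / 2;  // one-way delay (symmetric path)
    return r;
}

int main() {
    // Example: slave clock runs 500ns ahead, true one-way delay is 800ns.
    PtpResult r = ptp_servo(0, 1300, 2000, 2300);
    std::printf("offset=%lldns delay=%lldns\n",
                (long long)r.offset_ns, (long long)r.delay_ns);
}
```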

 

PTP implementation is an ideal task for an IP block instantiated in programmable logic (see last year’s Xcell Daily blog post “Intelligent Gateways Make a Factory Smarter,” written by SoC-e (System on Chip engineering) founder and CEO Armando Astarloa). SoC-e has implemented just such an IEEE 1588v2 PTP IP core in a Xilinx Zynq SoC, which is the core logic device inside of the company’s CPPS-Gate40 Sensor intelligent IIoT gateway. (Note: Software PTP implementations are neither fast nor deterministic enough for many IIoT applications.)

 

 

SoC-e CPPS-Gate40 Sensor Gateway.jpg 

 

SoC-e CPPS-Gate40 Sensor intelligent IIoT gateway

 

 

 

You can see the SoC-e PTP IP core in the very center of this CPPS-Gate40 block diagram:

 

 

 

SoC-e CPPSGate40 block diagram.jpg

 

SoC-e CPPS-Gate40 Sensor intelligent IIoT gateway block diagram

 

 

 

According to the SoC-e Web page, the company’s IEEE 1588v2 IP core in the CPPS-Gate40 Sensor gateway can deliver sub-microsecond network synchronization. How is such a small number possible? As Jamal says in his EETimes interview, “bit times (time on the wire) for a 64-byte frame at GigE rates is 512ns.” (A 64-byte frame is 512 bits, and at gigabit rates each bit occupies 1ns on the wire, so even a minimum-size frame spans half a microsecond.) That’s how.

 

 

 

I did not go to Embedded World in Nuremberg this week, but SemiWiki’s Bernard Murphy was there and he’s published his observations about three Zynq-based reference designs that he saw running in Aldec’s booth on the company’s Zynq-based TySOM embedded dev and prototyping boards.

 

 

Aldec TySOM-2 Prototyping Board.jpg

 

Aldec TySOM-2 Embedded Prototyping Board

 

 

 

Murphy published this article titled “Aldec Swings for the Fences” on SemiWiki and wrote:

 

 

“At the show, Aldec provided insight into using the solution to model the ARM core running in QEMU, together with a MIPI CSI-2 solution running in the FPGA. But Aldec didn’t stop there. They also showed off three reference designs designed using this flow and built on their TySOM boards.

 

“The first reference design targets multi-camera surround view for ADAS (automotive – advanced driver assistance systems). Camera inputs come from four First Sensor Blue Eagle systems, which must be processed simultaneously in real-time. A lot of this is handled in software running on the Zynq ARM cores but the computationally-intensive work, including edge detection, colorspace conversion and frame-merging, is handled in the FPGA. ADAS is one of the hottest areas in the market and likely to get hotter since Intel just acquired Mobileye.

 

“The next reference design targets IoT gateways – also hot. Cloud interface, through protocols like MQTT, is handled by the processors. The gateway supports connection to edge devices using wireless and wired protocols including Bluetooth, ZigBee, Wi-Fi and USB.

 

“Face detection for building security, device access and identifying evil-doers is also growing fast. The third reference design is targeted at this application, using similar capabilities to those on the ADAS board, but here managing real-time streaming video as 1280x720 at 30 frames per second, from an HDR-CMOS image sensor.”

 

The article contains a photo of the Aldec TySOM-2 Embedded Prototyping Board, which is based on a Xilinx Zynq Z-7045 SoC. According to Murphy, Aldec developed the reference designs using its own and other design tools including the Aldec Riviera-PRO simulator and QEMU. (For more information about the Zynq-specific QEMU processor emulator, see “The Xilinx version of QEMU handles ARM Cortex-A53, Cortex-R5, Cortex-A9, and MicroBlaze.”)

 

Then Murphy wrote this:

 

“So yes, Aldec put together a solution combining their simulator with QEMU emulation and perhaps that wouldn’t justify a technical paper in DVCon. But business-wise they look like they are starting on a much bigger path. They’re enabling FPGA-based system prototype and build in some of the hottest areas in systems today and they make these solutions affordable for design teams with much more constrained budgets than are available to the leaders in these fields.”

 

 

EETimes’ Junko Yoshida with some expert help analyzes this week’s Xilinx reVISION announcement

by Xilinx Employee 03-15-2017 01:25 PM - edited 03-22-2017 07:20 AM

 

This week, EETimes’ Junko Yoshida published an article titled “Xilinx AI Engine Steers New Course” that gathers some comments from industry experts and from Xilinx with respect to Monday’s reVISION stack announcement. To recap, the Xilinx reVISION stack is a comprehensive suite of industry-standard resources for developing advanced embedded-vision systems based on machine learning and machine inference.

 

(See “Xilinx reVISION stack pushes machine learning for vision-guided applications all the way to the edge.”)

 

As Xilinx Senior Vice President of Corporate Strategy Steve Glaser tells Yoshida, Xilinx designed the stack to “enable a much broader set of software and systems engineers, with little or no hardware design expertise, to develop intelligent vision-guided systems easier and faster.”

 

Yoshida continues:

 

While talking to customers who have already begun developing machine-learning technologies, Xilinx identified ‘8 bit and below fixed point precision’ as the key to significantly improve efficiency in machine-learning inference systems.

 

 

Yoshida also interviewed Karl Freund, Senior Analyst for HPC and Deep Learning at Moor Insights & Strategy, who said:

 

“Artificial Intelligence remains in its infancy, and rapid change is the only constant.” In this circumstance, Xilinx seeks “to ease the programming burden to enable designers to accelerate their applications as they experiment and deploy the best solutions as rapidly as possible in a highly competitive industry.”

 

 

She also quotes Loring Wirbel, a Senior Analyst at The Linley Group, who said:

 

What’s interesting in Xilinx's software offering, [is that] this builds upon the original stack for cloud-based unsupervised inference, Reconfigurable Acceleration Stack, and expands inference capabilities to the network edge and embedded applications. One might say they took a backward approach versus the rest of the industry. But I see machine-learning product developers going a variety of directions in trained and inference subsystems. At this point, there's no right way or wrong way.

 

 

There’s a lot more information in the EETimes article, so you might want to take a look for yourself.

 

 

 

 

As part of today’s reVISION announcement of a new, comprehensive development stack for embedded-vision applications, Xilinx has produced a 3-minute video showing you just some of the things made possible by this announcement.

 

Here it is:

 

 

Adam Taylor’s MicroZed Chronicles, Part 177: Introducing the reVision stack

by Xilinx Employee 03-13-2017 10:39 AM - edited 03-22-2017 07:19 AM

 

By Adam Taylor

 

Several times in this series, we have looked at image processing using the Avnet EVK and the ZedBoard. Along with the basics, we have examined object tracking using OpenCV running on the Zynq SoC’s or Zynq UltraScale+ MPSoC’s PS (processing system) and using HLS with its video library to generate image-processing algorithms for the Zynq SoC’s or Zynq UltraScale+ MPSoC’s PL (programmable logic, see blogs 140 to 148 here).

 

Xilinx’s reVISION is an embedded-vision development stack that provides support for a wide range of frameworks and libraries often used for embedded-vision applications. Most exciting, from my point of view, is that the stack includes acceleration-ready OpenCV functions.

 

Image1.jpg 

 

 

The stack itself is split into three layers. Once we select or define our platform, we will be mostly working at the application and algorithm layers. Let’s take a quick look at the layers of the stack:

 

  1. Platform layer: This is the lowest level of the stack and is the one on which the remaining stack layers are built. This layer includes platform definitions of the hardware and the software environment. Should we choose not to use a predefined platform, we can generate a custom platform using Vivado.

 

  2. Algorithm layer: Here we create our application using SDSoC and the platform definition for the target hardware. It is within this layer that we can use the acceleration-ready OpenCV functions along with predefined and optimized implementations for Convolutional Neural Network (CNN) developments such as inference accelerators within the PL.

 

  3. Application development layer: The highest layer of the stack, where high-level frameworks such as Caffe and OpenVX are used to complete the application.

 

As I mentioned above, one of the most exciting aspects of the reVISION stack is the ability to accelerate a wide range of OpenCV functions using the Zynq SoC’s or Zynq UltraScale+ MPSoC’s PL. We can group the OpenCV functions that can be hardware-accelerated in the PL into four categories:

 

  1. Computation – Includes functions such as absolute difference between two frames, pixel-wise operations (addition, subtraction and multiplication), gradient, and integral operations
  2. Input Processing – Supports bit-depth conversions, channel operations, histogram equalization, remapping, and resizing.
  3. Filtering – Supports a wide range of filters including Sobel, Custom Convolution, and Gaussian filters.
  4. Other – Provides a wide range of functions including Canny/Fast/Harris edge detection, thresholding, SVM, HoG, LK Optical Flow, Histogram Computation, etc.

 

What is very interesting with these function calls is that we can optimize them for resource usage or performance within the PL. The main optimization method is specifying the number of pixels to be processed during each clock cycle. For most accelerated functions, we can choose to process either one or eight pixels per clock. Processing more pixels per clock cycle reduces latency but increases resource utilization; processing one pixel per clock minimizes the resource requirements at the cost of increased latency. We control the number of pixels processed per clock via the function call, as the sketch below illustrates.
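To give a feel for what that choice means in hardware, here’s a generic HLS-style sketch of a thresholding function parameterized by pixels per clock. This is my illustration of the trade-off, not the actual reVISION/xfOpenCV function signature: with eight pixels per clock, the inner loop is fully unrolled, so eight comparators are built and latency drops roughly 8x at roughly 8x the logic.

```cpp
// Pixels-per-clock trade-off in HLS C++ (illustrative only).
template <int PIX_PER_CLOCK>
void threshold(const unsigned char* in, unsigned char* out,
               int n_pixels, unsigned char level) {
    for (int i = 0; i < n_pixels; i += PIX_PER_CLOCK) {
#pragma HLS PIPELINE II=1
        for (int p = 0; p < PIX_PER_CLOCK; ++p) {
#pragma HLS UNROLL
            out[i + p] = (in[i + p] > level) ? 255 : 0;
        }
    }
}
// threshold<1>(...) minimizes resources; threshold<8>(...) minimizes latency.
```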

 

Over the next few blogs, we will look more at the reVISION stack and how we can use it. However, in the best Blue Peter tradition, the image below shows the result of running the reVISION Harris corner-detection OpenCV function accelerated within the PL.

 

 

Image2.jpg

 

 

Accelerated Harris Corner Detection in the PL

 

 

 

 

Code is available on GitHub, as always.

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E Book here
  • First Year Hardback here.

 

 

MicroZed Chronicles hardcopy.jpg 

 

 

 

  • Second Year E Book here
  • Second Year Hardback here

 

 

MicroZed Chronicles Second Year.jpg

 

Xilinx reVISION stack pushes machine learning for vision-guided applications all the way to the edge

by Xilinx Employee 03-13-2017 07:37 AM - edited 03-22-2017 07:19 AM

 

Today, Xilinx announced a comprehensive suite of industry-standard resources for developing advanced embedded-vision systems based on machine learning and machine inference. It’s called the reVISION stack and it allows design teams without deep hardware expertise to use a software-defined development flow to combine efficient machine-learning and computer-vision algorithms with Xilinx All Programmable devices to create highly responsive systems. (Details here.)

 

The Xilinx reVISION stack includes a broad range of development resources for platform, algorithm, and application development including support for the most popular neural networks: AlexNet, GoogLeNet, SqueezeNet, SSD, and FCN. Additionally, the stack provides library elements such as pre-defined and optimized implementations for CNN network layers, which are required to build custom neural networks (DNNs and CNNs). The machine-learning elements are complemented by a broad set of acceleration-ready OpenCV functions for computer-vision processing.

 

For application-level development, Xilinx supports industry-standard frameworks including Caffe for machine learning and OpenVX for computer vision. The reVISION stack also includes development platforms from Xilinx and third parties, which support various sensor types.

 

The reVISION development flow starts with a familiar, Eclipse-based development environment; the C, C++, and/or OpenCL programming languages; and associated compilers all incorporated into the Xilinx SDSoC development environment. You can now target reVISION hardware platforms within the SDSoC environment, drawing from a pool of acceleration-ready, computer-vision libraries to quickly build your application. Soon, you’ll also be able to use the Khronos Group’s OpenVX framework as well.

 

For machine learning, you can use popular frameworks including Caffe to train neural networks. Within one Xilinx Zynq SoC or Zynq UltraScale+ MPSoC, you can use Caffe-generated .prototxt files to configure a software scheduler running on one of the device’s ARM processors to drive CNN inference accelerators—pre-optimized for and instantiated in programmable logic. For computer vision and other algorithms, you can profile your code, identify bottlenecks, and then designate specific functions that need to be hardware-accelerated. The Xilinx system-optimizing compiler then creates an accelerated implementation of your code, automatically including the required processor/accelerator interfaces (data movers) and software drivers.

 

The Xilinx reVISION stack is the latest in an evolutionary line of development tools for creating embedded-vision systems. Xilinx All Programmable devices have long been used to develop such vision-based systems because these devices can interface to any image sensor and connect to any network—which Xilinx calls any-to-any connectivity—and they provide the large amounts of high-performance processing horsepower that vision systems require.

 

Initially, embedded-vision developers used the existing Xilinx Verilog and VHDL tools to develop these systems. Xilinx introduced the SDSoC development environment for HLL-based design two years ago and, since then, SDSoC has dramatically and successfully shortened development cycles for thousands of design teams. Xilinx’s new reVISION stack now enables an even broader set of software and systems engineers to develop intelligent, highly responsive embedded-vision systems faster and more easily using Xilinx All Programmable devices.

 

And what about the performance of the resulting embedded-vision systems? How do their performance metrics compare against systems based on embedded GPUs or the typical SoCs used in these applications? Xilinx-based systems significantly outperform the best of this group, which employ Nvidia devices. Benchmarks of the reVISION flow using Zynq SoC targets against the Nvidia Tegra X1 have shown as much as:

 

  • 6x better images/sec/watt in machine learning
  • 42x higher frames/sec/watt for computer-vision processing
  • 1/5th the latency, which is critical for real-time applications

 

Image1.jpg 

 

There is huge value to having a very rapid and deterministic system-response time and, for many systems, the faster response time of a design that's been accelerated using programmable logic can mean the difference between success and catastrophic failure. For example, the figure below shows the difference in response time between a car’s vision-guided braking system created with the Xilinx reVISION stack running on a Zynq UltraScale+ MPSoC and a similar system based on an Nvidia Tegra device. At 65mph, the Xilinx embedded-vision system’s response time stops the vehicle 5 to 33 feet sooner, depending on how the Nvidia-based system is implemented. Five to 33 feet could easily mean the difference between a safe stop and a collision.

 

 

Image2.jpg 

 

(Note: This example appears in the new Xilinx reVISION backgrounder.)

 

 

The last two years have generated more machine-learning technology than all of the advancements over the previous 45 years and that pace isn't slowing down. Many new types of neural networks for vision-guided systems have emerged along with new techniques that make deployment of these neural networks much more efficient. No matter what you develop today or implement tomorrow, the hardware and I/O reconfigurability and software programmability of Xilinx All Programmable devices can “future-proof” your designs whether it’s to permit the implementation of new algorithms in existing hardware; to interface to new, improved sensing technology; or to add an all-new sensor type (like LIDAR or Time-of-Flight sensors, for example) to improve a vision-based system’s safety and reliability through advanced sensor fusion.

 

Xilinx is pushing even further into vision-guided, machine-learning applications with the new Xilinx reVISION Stack and this announcement complements the recently announced Reconfigurable Acceleration Stack for cloud-based systems. (See “Xilinx Reconfigurable Acceleration Stack speeds programming of machine learning, data analytics, video-streaming apps.”) Together, these new development resources significantly broaden your ability to deploy machine-learning applications using Xilinx technology—from inside the cloud to the very edge.

 

 

You might also want to read “Xilinx AI Engine Steers New Course” by Junko Yoshida on the EETimes.com site.

 

 

 

On Thursday, March 30, two member companies of the IIC (Industrial Internet Consortium)—Cisco and Xilinx—are presenting a free, 1-hour Webinar titled “How the IIoT (Industrial Internet of Things) Makes Critical Data Available When & Where it is Needed.” The discussion will cover machine learning and how self-optimization plays a pivotal role in enhancing factory intelligence. Other IIoT topics covered in the Webinar include TSN (time-sensitive networking), real-time control, and high-performance node synchronization. The Webinar will be presented by Paul Didier, the Manufacturing Solution Architect for the IoT SW Group at Cisco Systems, and Dan Isaacs, Director of Connected Systems at Xilinx.

 

Register here.

 

 

Last month, the European AXIOM Project took delivery of its first board based on a Xilinx Zynq UltraScale+ ZU9EG MPSoC. (See “The AXIOM Board has arrived!”) The AXIOM project (Agile, eXtensible, fast I/O Module) aims at researching new software/hardware architectures for Cyber-Physical Systems (CPS).

 

 

AXIOM Project Board Based on Zynq UltraScale MPSoC.jpg

 

 

AXIOM Project Board based on Xilinx Zynq UltraScale+ MPSoC

 

 

 

The board presents the pinout of an Arduino Uno, so you can attach an Arduino Uno-compatible shield to it. The Arduino pinout enables fast prototyping and exposes the FPGA I/O pins in a user-friendly manner.

 

Here are the board specs:

 

  • Wide boot capabilities: eMMC, Micro SD, JTAG
  • Heterogeneous 64-bit ARM/FPGA processor: Xilinx Zynq UltraScale+ ZU9EG MPSoC
    • 64-bit Quad core A53 @ 1.2GHz
    • 32-bit Dual core R5 @ 500MHz
    • DDR4 @ 2400MT/s
    • Mali-400 GPU @ 600MHz
    • 600K System Logic Cells
  • Swappable SO-DIMM RAM (up to 32Gbytes) for the Processing System, plus a soldered 1Gbyte RAM for Programmable Logic
  • 12 GTH transceivers @ 12.5 Gbps (8 on USB Type C connectors + 4 on HS connector)
  • Easy rapid prototyping, because of the Arduino UNO pinout

 

You can see the AXIOM board for the first time during next week’s Embedded World 2017 at the UDOO booth, the SECO booth, and the EVIDENCE booth.

 

Please contact the AXIOM Project for more information.

 

 

 

 

With a month left in the Indiegogo funding period, the MATRIX Voice open-source voice platform campaign stands at 289% of its modest $5000 funding goal. MATRIX Voice is the third crowdfunding project by MATRIX Labs, based in Miami, Florida. The MATRIX Voice platform is a 3.14-inch circular circuit board capable of continuous voice recognition and compatible with the latest voice-based, cognitive, cloud-based services including Microsoft Cognitive Service, Amazon Alexa Voice Service, Google Speech API, Wit.ai, and Houndify. The MATRIX Voice board, based on a Xilinx Spartan-6 LX4 FPGA, is designed to plug directly onto a low-cost Raspberry Pi single-board computer or it can be operated as a standalone board. You can get one of these boards, due to be shipped in May, for as little as $45—if you’re quick. (Already, 61 of the 230 early-bird special-price boards are pledged.)

 

Here’s a photo of the MATRIX Voice board:

 

 

MATRIX Voice board.jpg

 

 

This image of the top of the MATRIX Voice board shows the locations for the seven rear-mounted MEMS microphones, seven RGB LEDs, and the Spartan-6 FPGA. The bottom of the board includes a 64Mbit SDRAM and a connector for the Raspberry Pi board.

 

Because this is the latest in a series of developer boards from MATRIX Labs (see last year’s project: “$99 FPGA-based Vision and Sensor Hub Dev Board for Raspberry Pi on Indiegogo—but only for the next two days!”), there’s already a sophisticated, layered software stack for the MATRIX Voice platform that includes a HAL (Hardware Abstraction Layer) with the FPGA code and C++ library, an intermediate layer with a streaming interface for the sensors and vision libraries (for the Raspberry Pi camera), and a top layer with the MATRIX OS and high-level APIs. Here’s a diagram of the software stack:

 

 

MATRIX Voice Software Stack.jpg 

 

And now, who better to describe this project than the originators:

 

 

An article in the January 2017 issue of the IIC (Industrial Internet Consortium) Journal of Innovation titled “Making Factories Smarter Through Machine Learning” discusses the networked use of SoC-e’s CPPS-Gate40 intelligent IIoT gateway to help a car-parts manufacturer keep the CNC machines on its production lines up and running through predictive maintenance directed by machine-learning algorithms. These algorithms use real-time operational data taken directly from sensors on the CNC machines to identify and learn normal behavior patterns during the machining process so that, when variances signaling an imminent failure occur, systems can be shut down gracefully and maintained or repaired before the failure becomes truly catastrophic (and really, really expensive thanks to any uncontrolled release of the kinetic energy stored as angular momentum in an operating CNC machine).

 

Catastrophic CNC machine failures can shut down a production line, causing losses worth hundreds of thousands of dollars (or more) in physical damage to tools and to work in process, in addition to the costs associated with lost production time. In one example cited in the article, a bearing in a CNC machine started to fail, as indicated by a large vibration spike. At that point, only the bearing needed replacement. Four days later, the bearing failed catastrophically, damaging nearby parts and idling the production line for three shifts. There was plenty of warning (see image below) and preventative maintenance at the first indication of a problem would have minimized the cost of this single failed bearing.

 

 

CNC Failure.jpg 

 

Unfortunately, the data predicting the failure had been captured but not analyzed until afterwards because there was no real-time data collection-and-analysis system in place. What a needless waste.

 

The network based on SoC-e’s CPPS-Gate40 intelligent IIoT gateway discussed in this IIC Journal of Innovation article is designed to collect and analyze real-time operational information from the CNC machines including operating temperature and vibration data. This system performs significant data reduction at the gateway to minimize the amount of data feeding the machine-learning algorithms. For example, FFT processing shrinks the time-domain vibration data down to just a frequency and an amplitude, resulting in significant local data reduction. Temperature data varies more slowly and so it is sampled at a much lower frequency—variable-rate collection and fusion for different sensor data is another significant feature of this system. The full system then trains on the data collected by the networked IIoT gateways.
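That FFT-based data reduction is easy to sketch. The toy C++ below (my illustration, not SoC-e’s gateway code) collapses a window of vibration samples to a single dominant frequency and amplitude—the only two numbers that need to leave the gateway. A real gateway would use an FPGA FFT core rather than this naive DFT.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct SpectralPeak { double freq_hz, amplitude; };

// Find the dominant spectral line in a window sampled at fs_hz.
SpectralPeak dominant_line(const std::vector<double>& x, double fs_hz) {
    const double kPi = 3.14159265358979323846;
    const int n = (int)x.size();
    SpectralPeak best{0.0, 0.0};
    for (int k = 1; k < n / 2; ++k) {        // naive O(n^2) DFT
        double re = 0.0, im = 0.0;
        for (int i = 0; i < n; ++i) {
            const double ang = -2.0 * kPi * k * i / n;
            re += x[i] * std::cos(ang);
            im += x[i] * std::sin(ang);
        }
        const double amp = 2.0 * std::sqrt(re * re + im * im) / n;
        if (amp > best.amplitude) best = {k * fs_hz / n, amp};
    }
    return best;
}

int main() {
    std::vector<double> v(256);
    for (int i = 0; i < 256; ++i)            // 400Hz "bearing tone", fs = 25.6kHz
        v[i] = 0.8 * std::sin(2.0 * 3.14159265358979323846 * 400.0 * i / 25600.0);
    SpectralPeak p = dominant_line(v, 25600.0);
    std::printf("%.1fHz at amplitude %.2f\n", p.freq_hz, p.amplitude);
}
```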

 

This is a simple and graphic example of the sort of return that companies can expect from properly implemented IIoT systems with the performance needed to operate real-time manufacturing systems.

 

SoC-e’s CPPS-Gate40 is based on a Xilinx Zynq SoC, which implements a variety of IIoT-specific, hard-real-time functions developed by SoC-e as IP cores for the Zynq SoC's programmable logic including the HSR/PRP/Ethernet switch (HPS), IEEE 1588-2008 Precision Time Protocol (see “IEEE 1588-2008 clock synchronization IP core for Zynq SoCs has sub-μsec resolution”), and real-time sensor data acquisition and fusion. SoC-e also uses the Zynq SoC to implement a variety of network security protocols. These are the sort of functions that require the flexibility of the Zynq SoC’s integrated programmable logic. Software-based implementations of these functions are simply impractical due to performance requirements.

 

For more information about the SoC-e IIoT Gateway, see “Intelligent Gateways Make a Factory Smarter” and “Big Data Analytics + High Availability Network = IIoT: SoC-e demo at Embedded World 2016.”

 

 

 

An Industrial Ethernet (IIoT) power supply reference design for the Xilinx Zynq-7000 SoC developed by Monolithic Power Systems (MPS) combines a small footprint (0.45in² of board real estate) with good efficiency (78% from a 12V input) and tight regulation. The design consists of six MPS regulators: three MPM3630 3A buck regulators, one MPM3610 1A buck regulator, and two LDO regulators to supply the twelve power rails needed by the Zynq SoC.

 

Here’s a simple block diagram of MPS’ reference design:

 

 

 

MPS IIoT Zynq SoC Ref Design.jpg

 

 

IIoT power supply reference design for the Zynq SoC from Monolithic Power Systems

 

 

 

And here’s a close-up photo of MPS’ compact IIoT power supply design prototype:

 

 

MPS IIoT Zynq SoC Ref Design Board Photo.jpg 

 

 

For more information about the MPS power supply reference design, including a BOM and data sheets for the various regulators used in the design, click here.

 

 

Late last year at the SPS Drives show in Germany, BE.services demonstrated a vertically integrated solution stack for industrial controls running on a Zynq-based Xilinx ZC702 Eval Kit. The BE.services industrial automation stack for Industry 4.0 and IIoT applications includes:

 

 

  • Linux plus OSADL real-time extensions
  • POWERLINK real-time, industrial Ethernet
  • CODESYS SoftPLC
  • Matrikon OPC UA machine-to-machine protocol for industrial automation
  • Ethernet TSN (time-sensitive networking) with hardware IP support from Xilinx
  • Kaspersky security

 

 

This stack delivers the four critical elements you need when developing smart manufacturing controllers:

 

  • Smart control
  • Real-time operation
  • Connectivity
  • Security

 

 

Industry 4.0 elements.jpg 

 

 

Here’s a 3-minute demo of that system with explanations:

 

 

 

 

The fundamental advantage to using a Xilinx Zynq SoC with its on-chip FPGA array for this sort of application is deterministic response, explains Dimitri Philippe in the video. Philippe is the founder and CEO of BE.services. Programmable logic delivers this response with hardware-level latencies instead of software latencies that are orders of magnitude slower.

 

 

Note: You can watch a recent Xilinx Webinar on the use of OPC UA and TSN for IIoT applications by clicking here.

 

 

 

The IIC (Industrial Internet Consortium) announced its first security assessment-focused testbed for Industrial IoT (IIoT) systems, the Security Claims Evaluation Testbed, in February 2016. This testbed provides an open, highly configurable cybersecurity platform for evaluating the security capabilities of endpoints, gateways, and other networked components. Data sources to the testbed can include industrial, automotive, medical, and other related endpoints.

 

IIC member companies have developed a common security framework and an approach to assessing cybersecurity in IIoT systems: the Industrial Internet Security Framework (IISF). The IIC’s Security Claims Testbed helps manufacturers improve the security posture of their products and verify alignment to the IISF prior to product launch, which helps shorten time to market.

 

If you’d like to hear about these topics in more detail, the IIC is presenting a free 1-hour Webinar on January 26. (Available later, on demand.) Register here.

 

Here’s a graphic depicting the IIC’s Security Claims Evaluation Testbed:

 

 

IIC Security Claims Testbed.jpg 

 

 

Note: Xilinx is one of the lead member companies involved in the development of the IIC Security Claims Testbed—others include Aicas, GlobalSign, Infineon, Real-Time Innovations, and UL (Underwriters Laboratories)—and if you look at the above graphic, you’ll see the SoC-e Gateway in the middle of everything. This gateway is based on a Xilinx Zynq SoC. For more information about the SoC-e IIoT Gateway, see “Intelligent Gateways Make a Factory Smarter” and “Big Data Analytics + High Availability Network = IIoT: SoC-e demo at Embedded World 2016.”

 

 

 

 

Yesterday, National Instruments (NI) along with 15 partners announced the grand opening of the new NI Industrial IoT Lab located at NI’s headquarters in Austin, TX. The lab is a working showcase for Industrial IoT technologies, solutions and systems architectures and will address challenges including interoperability and security in the IIoT space. The partner companies working with NI on the lab include:

 

  • Analog Devices
  • Avnu Alliance
  • Cisco Systems
  • Hewlett Packard Enterprise
  • Industrial Internet Consortium
  • Intel
  • Kalypso
  • OPC Foundation
  • OSIsoft
  • PTC
  • Real-Time Innovations
  • SparkCognition
  • Semikron
  • Viewpoint Systems
  • Xilinx

 

 

NI IIoT Lab Grand Opening with Jamie Smith.jpg

 

 

NI’s Jamie Smith (on the left), Business and Technology Director, opens the new NI Industrial IoT Lab in Austin, TX

 

Three key challenges to widespread IIoT adoption according to Xilinx’s Dan Isaacs

by Xilinx Employee 12-12-2016 11:44 AM - edited 12-12-2016 11:47 AM

RTC Magazine IIoT Issue.jpg 

A recent issue of RTC magazine carried an interview with Dan Isaacs, Director of Connected Systems at Xilinx. There are many cogent observations about the Industrial Internet of Things (IIoT) in this interview including the three key challenges to widespread IIoT adoption that Dan sees:

 

 

  • Security – overcoming companies’ concerns about connecting their systems and making them accessible over the internet
  • Standardization – considering the infrastructure already in place at a given facility, and the costs of changing existing connectivity approaches
  • Data ownership – who owns the data once connected?

 

 

Where’s the path to meet these challenges?

 

Quoting Dan from the RTC Magazine article:

 

“The Industrial Internet Consortium (IIC), a highly collaborative 200+ member strong global consortium of companies, is working on several approaches through reference architectures, security frameworks, and proof of concept test beds to identify and bring innovative methodologies and solutions to address these and other IIoT challenges.”

 

For more Dan Isaacs IIoT insights like this, see the full interview on page 6 of the magazine.

 

 

 

 

Ask any expert in IIoT (Industrial Internet of Things) circles what the most pressing IIoT problem might be and you will undoubtedly hear “security.” Internet hacking stories are rampant. You generally hear about one a day. With the IoT and IIoT ramping up, they’re going to get more frequent. Over the recent Thanksgiving weekend, the ticket machines and fare-collection system of San Francisco’s Muni light-rail mass-transit system were hacked by ransomware. Agents' computer screens displayed the message "You Hacked, ALL Data Encrypted" beginning Friday night. The attackers demanded 100 Bitcoins, worth about $73,000, to undo the damage. Things were restored by Sunday without paying the ransom and Muni provided free rides until the system could be recovered.

 

You do not want to let this happen to your IIoT system design.

 

How to prevent it? Today (by sheer coincidence, honest), Avnet announced a new security module for its MicroZed IIoT (Industrial Internet of Things) Starter Platform, which is based on a Xilinx Zynq Z-7000 SoC. The new Avnet Trusted Platform Module Security PMOD places an Infineon OPTIGA TPM (Trusted Platform Module) SLB9670 on a very small plug-in board conforming to the Digilent PMOD peripheral-module format. The Infineon TPM SLB9670 is a secure microprocessor that adds hardware security to any system by conforming to the TPM security standard developed by the Trusted Computing Group, an international industry standardization group.

 

The $29.95 Avnet Trusted Platform Module Security PMOD is essentially a SPI security peripheral that provides many security services to your design based on Trusted Computing Group standards. Provided services include:

 

 

  • Strong authentication of platform and users using a unique embedded endorsement certificate
  • Secure storage and management of keys and data
  • Measured and trusted booting for embedded systems
  • Random-number generation, tick counting to trigger the generation of new random numbers, and a dictionary-attack lockout
  • RSA, ECC, and SHA-256 encryption

 

 

That’s a lot of security in a 32-pin package and, for development purposes, you can get it on a $30 plug-in PMOD along with a reference design for using the module with the Zynq-based Avnet MicroZed IIoT Starter Kit.
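If measured boot is new to you, the idea fits in a dozen lines. The conceptual C++ below is mine—a real TPM such as the SLB9670 computes SHA-256 digests in hardware and stores them in protected PCRs; std::hash merely stands in to show the order-sensitive extend operation.

```cpp
#include <cstdio>
#include <string>
#include <functional>

// Measured boot in miniature: each boot stage is hashed into the
// running register, so the final value attests to the entire boot
// sequence, in order. (Conceptual sketch only; not the TPM's API.)
using Digest = size_t;

Digest extend(Digest pcr, const std::string& stage_image) {
    // PCR_new = H(PCR_old || H(stage)) — order-sensitive by construction.
    const Digest m = std::hash<std::string>{}(stage_image);
    return std::hash<std::string>{}(std::to_string(pcr) + std::to_string(m));
}

int main() {
    Digest pcr = 0;                              // PCRs reset to a known value
    for (const char* stage : {"fsbl", "u-boot", "kernel"})
        pcr = extend(pcr, stage);
    std::printf("boot measurement: %zx\n", pcr); // compare against expected value
}
```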

 

So if you don’t want to see this in your IIoT system:

 

 

You Hacked.jpg 

 

Then think about buying this:

 

 

Avnet MicroZed IIoT TPM Module.jpg

 

Avnet MicroZed IIoT TPM PMOD

 

 

 

 

 

 

Programmable logic control of power electronics—where to start? What dev boards to use?

by Xilinx Employee 10-18-2016 10:24 AM - edited 10-18-2016 10:28 AM

 

A great new blog post on the ELMG Web site discusses three entry-level dev boards you can use to learn about controlling power electronics with FPGAs. (This post follows a Part 1 post that discusses the software you can use—namely Xilinx Vivado HLS and SDSoC—to develop power-control FPGA designs.)

 

And what are those three boards? They should be familiar to any Xcell Daily reader:

 

 

The $99 Digilent ARTY dev board (Artix-7 FPGA)

 

ARTY Board v2 White.jpg 

 

 

 

The Avnet ZedBoard (Zynq Z-7000 SoC)

 

ZedBoard V2.jpg

 

 

 

 

 

The Avnet MicroZed SOM (Zynq Z-7000 SoC)

 

 

MicroZed V2.jpg

 

 

 

 

Who is ELMG? They’ve spent the last 25 years developing digitally controlled power converters for motor drives, industrial switch-mode power supplies, reactive power compensation, medium-voltage systems, power-quality systems, motor starters, appliances, and telecom switch-mode power supplies.

 

 

For more information about the ARTY board, see: ARTY—the $99 Artix-7 FPGA Dev Board/Eval Kit with Arduino I/O and $3K worth of Vivado software. Wait, What????

 

 

For more information about the MicroZed and the ZedBoard, see the 150+ blog posts in Adam Taylor’s MicroZed Chronicles.

 

 

 

MATRIX Labs bills the $99 MATRIX Creator dev board for the Raspberry Pi, listed on Indiegogo, as a “hardware bombshell.” A more precise description would be an FPGA-accelerated sensor hub sporting a massive array of on-board sensors. It’s a one-stop shop for prototyping IoT and industrial IoT (IIoT) devices using the Raspberry Pi board as a base.

 

Here’s a top-and-bottom photo of the board:

 

 
MATRIX Creator Dev Board.jpg

 

MATRIX Creator dev board for the Raspberry Pi

 

 

Note: That square hole in the center of the board allows the Raspberry Pi’s Camera Module to peek through.

 

Here’s a detailed list of the various components on the MATRIX Creator board and its capabilities:

 

 

MATRIX Creator Dev Board Components.jpg

 

 

MATRIX Creator dev board Components and Capabilities

 

 

 

Note that one of those components is a Xilinx Spartan-6 LX4 FPGA, which makes a very fine low-cost sensor hub capable of operating in real time. No doubt you’d like to see how the FPGA fits into this board. MATRIX Labs has that covered with this block diagram:

 

 

MATRIX Creator Dev Board Block Diagram.jpg

 

 

MATRIX Creator dev board Block Diagram

 

 

 

MATRIX Labs has also developed supporting tools and software for the MATRIX Creator dev board including MATRIX OS, MATRIX CV (a computer-vision library), and MATRIX CLI (a sensor-hub application). More software is already being developed.

 

Unlike many crowd-funded projects, MATRIX Creator is already shipping so you’re assured of getting a board, according to MATRIX Labs, but you only have two days left in the funding period. So check it out now.

 

 

MATRIX Creator: IoT Computer Vision Dev Board

 

 

A novel development by Zhao Tian, Kevin Wright, and Xia Zhou at Dartmouth College encodes data streams on sparse, ultrashort pulses and transmits these pulses optically using low-cost, visible LED luminaires designed for room illumination. (See “The DarkLight Rises: Visible Light Communication in the Dark”) The optical light pulses are hundreds of nanoseconds long so they’re far too short—by four or five orders of magnitude—to be perceived as light by human vision. However, they’re long enough to be captured by an inexpensive photodiode and are therefore useful for digital communications—albeit slow communications, on the order of 1.6 to 1.8 kbits/sec. Nevertheless, that rate meets a number of low-speed communications needs including many IoT and industrial IoT requirements.

 

Signal encoding employs OPPM (overlapping pulse position modulation), implemented in a $149 Xilinx Artix-7 A35T FPGA on a Digilent Basys 3 FPGA Trainer Board.
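Pulse-position modulation itself is easy to sketch in software. The C++ below is a plain PPM encoder with invented parameters (four slots, one pulse per 2-bit symbol)—an illustration of the principle rather than the DarkLight paper’s exact overlapping-PPM scheme.

```cpp
#include <cstdio>
#include <vector>

// Each symbol selects which time slot in its frame carries the single
// ultrashort pulse; the LED stays dark in every other slot.
std::vector<int> ppm_encode(const std::vector<int>& symbols, int slots = 4) {
    std::vector<int> waveform;                       // one entry per time slot
    for (int s : symbols)
        for (int slot = 0; slot < slots; ++slot)
            waveform.push_back(slot == s ? 1 : 0);   // 1 = fire the pulse
    return waveform;
}

int main() {
    // Symbols 2, 0, 3 -> frames 0010 1000 0001
    for (int v : ppm_encode({2, 0, 3}))
        std::printf("%d", v);
    std::printf("\n");
}
```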

 

 

Digilent Basys 3 Artix-7 FPGA Trainer Board.jpg

 

Digilent Basys 3 Artix-7 FPGA Trainer Board

 

 

Here’s a short 1-minute video giving you an ultrashort overview of the project:

 

 

 

 

 

If you’re developing products in the IIoT (Industrial Internet of Things) space, you’ll likely want to sign up for a free 1-hour Webinar that Xilinx’s Director of Strategic Marketing for Industrial IoT Dan Isaacs will present on October 12 in conjunction with Automation World. Dan will be discussing the history of the IIoT, starting with the old way of doing things using islands of automation and moving to today, where everything in the factory delivers data via a network to a central repository for analysis, action, and optimization.

 

Register here.

 

 

About the Author
Steve Leibson is the Director of Strategic Marketing and Business Planning at Xilinx. He started as a system design engineer at HP in the early days of desktop computing, then switched to EDA at Cadnetix, and subsequently became a technical editor for EDN Magazine. He's served as Editor in Chief of EDN Magazine, Embedded Developers Journal, and Microprocessor Report. He has extensive experience in computing, microprocessors, microcontrollers, embedded systems design, design IP, EDA, and programmable logic.