Here are two reasons you might want to participate in this Kickstarter campaign:
The Mycroft Mark II is a hands-free, privacy-oriented, open-source smart speaker with a touch screen. It has advanced far-field voice recognition and multiple wake words for voice-based cloud services such as Amazon’s Alexa and Google Home, courtesy of Aaware’s technology. (See “Looking to turbocharge Amazon’s Alexa or Google Home? Aaware’s Zynq-based kit is the tool you need.”) The finished smart speaker requires a pledge of $129 (or $299 for three of them) but the dev kit version of the Mycroft Mark II requires a pledge of only $99, which is cheap as dev kits go. (Note: there are only 88 of these kits left, as of this writing.)
You could look at the Mycroft Mark II as a general-purpose, $99 Zynq UltraScale+ MPSoC open-source dev kit with a touch screen that’s also been enabled for voice control, which you can use as a platform for a variety of IIoT, cloud computing, or embedded projects. That in itself is a very attractive offer. As the Mycroft Mark II Kickstarter project page says: “The Mark II has special features that make hacking and customizing easy, not to mention thorough documentation and a community to lean on when building. Support for our community is central to the Mycroft mission.” That’s a lot for a sub-$100 dev kit, don’t you think?
First, the YouTube video demonstrates the IoT design interacting with an app on a mobile phone. Then the video takes you step-by-step through the creation process using the Xilinx Vivado development environment.
The YouTuber writes:
“I implemented a web server using Python and bottle framework, which works with another C++ application. The C++ application controls my custom IPs (such as PWM) implemented in PL block. A user can control LEDs, 3-color LEDs, buttons and switches mounted on ZYBO board.”
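The division of labor the YouTuber describes (a Python web server parsing requests, a separate process driving PWM IP in the PL) can be sketched in a few lines. This is an illustrative toy, not the video's actual code: the endpoint format and register layout are invented assumptions, and the real design uses the bottle framework plus a C++ application talking to custom IP.

```python
# Toy sketch of the request-to-register flow described above.
# The URL scheme and the single LED control register are assumptions
# for illustration; the real design uses bottle plus a C++ app.

def parse_led_command(path):
    """Parse a request path like '/led/2/on' into (index, state)."""
    parts = path.strip('/').split('/')
    if len(parts) != 3 or parts[0] != 'led':
        raise ValueError(f'bad command: {path}')
    index = int(parts[1])
    state = 1 if parts[2] == 'on' else 0
    return index, state

def led_register_word(index, state, current=0):
    """Set or clear one LED bit in a (simulated) PL control register."""
    mask = 1 << index
    return (current | mask) if state else (current & ~mask)

# Example: a request to turn LED 2 on, starting from a clean register.
idx, st = parse_led_command('/led/2/on')
print(led_register_word(idx, st))  # -> 4
```

In the real design, the final register write would be performed by the C++ application against the memory-mapped PWM IP rather than returned as a Python integer.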
The YouTube video’s Web page also lists the resources you need to recreate the IoT design:
TSN (time-sensitive networking) is a set of evolving IEEE standards that support a mix of deterministic, real-time and best-effort traffic over fast Ethernet connections. The TSN set of standards is becoming increasingly important in many industrial networking situations, particularly for IIoT (the Industrial Internet of Things). SoC-e has developed TSN IP that you can instantiate in Xilinx All Programmable devices. (Because the standards are still evolving, implementing the TSN hardware in reprogrammable hardware is a good idea.)
The latest hypervisor to host Wind River’s VxWorks RTOS alongside Linux is the Xen Project Hypervisor, an open-source virtualization platform from the Linux Foundation. DornerWorks has released a version of the Xen Project Hypervisor called Virtuosity (the hypervisor formerly known as the Xen Zynq Distribution) that runs on the Arm Cortex-A53 processor cores in the Xilinx Zynq UltraScale+ MPSoC. Consequently, Wind River has partnered with DornerWorks to provide a Xen Project Hypervisor solution for VxWorks and Linux on the Xilinx Zynq UltraScale+ MPSoC ZCU102 eval kit.
Having VxWorks and Linux running on the same system allows developers to create hybrid software systems that offer the combined advantages of the two operating systems, with VxWorks managing mission-critical functions and Linux managing human-interactive functions and network cloud connection functions.
Wind River has just published a blog about using VxWorks and Linux on the Arm Cortex-A53 processor, concisely titled “VxWorks on Xen on ARM Cortex A53,” written by Ka Kay Achacoso. The blog describes an example system with VxWorks running signal-processing and spectrum-analysis applications. Results are compiled into a JSON string and sent through the virtual network to Ubuntu. On Ubuntu, the Apache2 HTTP server sends results to a browser using Node.js and Chart.js to format the data display.
Here’s a block diagram of the system in the Wind River blog:
VxWorks and Linux Hybrid OS System
VxWorks runs as a guest OS on top of the unmodified Virtuosity hypervisor.
For more information about DornerWorks Xen Hypervisor (Virtuosity), see:
Today, Xilinx announced plans to invest $40M to expand research and development engineering work in Ireland on artificial intelligence and machine learning for strategic markets including cloud computing, embedded vision, IIoT (industrial IoT), and 5G wireless communications. The company already has active development programs in these categories and today’s announcement signals an acceleration of development in these fields. The development was formally announced in Dublin today by The Tánaiste (Deputy Prime Minister of Ireland) and Minister for Business, Enterprise and Innovation, Frances Fitzgerald T.D., and by Kevin Cooney, Senior Vice President, Chief Information Officer and Managing Director EMEA, Xilinx Inc. The new investment is supported by the Irish government through IDA Ireland.
Xilinx first established operations in Dublin in 1995. Today, the company employs 350 people at its EMEA headquarters in Citywest, Dublin, where it operates a research, product development, engineering, and an IT center along with centralized supply, finance, legal, and HR functions. Xilinx also has R&D operations in Cork, which the company established in 2001.
The smart factories envisioned by Industrie 4.0 are not in our future. They’re here now, and they rely heavily on networked equipment. Everything needs to be networked. In the past, industrial and factory applications relied on an entire zoo of different and incompatible networking “standards,” only some of which were actual standards. Today, the networking standard for Industrie 4.0 and IIoT applications is pretty well understood to be Ethernet—and that’s a problem because Ethernet is not deterministic. In the world of factory automation, non-deterministic networks are a “bad thing.”
These workshops start in November and run through March of next year and there are four full-day workshops in the series:
Developing Zynq Software
Developing Zynq Hardware
Integrating Sensors on MiniZed with PetaLinux
A Practical Guide to Getting Started with Xilinx SDSoC
You can mix and match the workshops to meet your educational requirements. Here’s how Avnet presents the workshop sequence:
These workshops are taking place in cities all over North America including Austin, Dallas, Chicago, Montreal, Seattle, and San Jose, CA. All cities will host the first two workshops. Montreal and San Jose will host all four workshops.
A schedule for workshops in other countries has yet to be announced. The Web page says “Coming soon” so please contact Avnet directly for more information.
Finally, here’s a 1-minute YouTube video with more information about the workshops.
For more information on and to register for the Avnet MiniZed SpeedWay Design Workshops, click here.
Today, Linaro Ltd announced that Xilinx has joined the Linaro IoT and Embedded (LITE) Segment Group, which is a collaborative working group of companies working to reduce fragmentation in operating systems, middleware, and cloud connectivity by delivering open-source device reference platforms to enable faster time to market, improved security, and lower maintenance costs for connected products.
LITE’s Director Matt Locke said: “Discussions between Linaro and Xilinx have ranged from LITE gateway and security work through networking, 96Boards, and Xilinx All Programmable SoC and MPSoC platforms. I expect initial collaboration will focus on the gateway, but I look forward to building on this relationship to bring the benefits of collaborative, open-source engineering to other areas in Xilinx’s broad range of product offerings.”
Today’s announcement focuses on LITE’s ongoing work in the IoT and embedded space. “Becoming a member of the LITE group will enable Xilinx to optimize the Linaro open source stacks with our All Programmable SoCs”, said Tomas Evensen, CTO Embedded Software at Xilinx.
Last week, Aerotenna announced its ready-to-fly Smart Drone Development Platform, based on its OcPoC with Xilinx Zynq Mini Flight Controller. Other components in the $3499 Drone Dev Platform include Aerotenna’s μLanding radar altimeter, three Aerotenna μSharp-Patch collision avoidance radar sensors, one Aerotenna CAN hub, a remote controller, and a pre-assembled quadcopter airframe:
The OcPoC flight controller uses the Zynq SoC’s additional processing power and I/O flexibility—embodied in the on-chip programmable FPGA fabric and programmable I/O—to handle the immense sensor load presented to a drone in flight through sensor fusion and on-board, real-time processing. The Aerotenna OcPoC flight controller handles more than 100 sense inputs.
How well do all of these Aerotenna drone components work together? Well, one indication of how well integrated they are is another announcement last week—made jointly by Aerotenna and UAVenture of Switzerland—to roll out the Precision Spray Autopilot—a “simple plug-and-play solution, allowing quick and easy integration into your existing multirotor spraying drones.” This piece of advanced Ag Tech is designed to create smart drones for agricultural spraying applications.
The Precision Spray Autopilot in Action
The Precision Spray Autopilot’s features include:
Fly spray missions from a tablet or with a remote control
Radar-based, high-performance terrain following
Real-time adjustment of flight height and speed
In-flight spray-rate monitoring and control
Auto-refill and return to mission
Field-tested in simple and complex fields
What’s even cooler is this 1-minute demo video of the Precision Spray Autopilot in action:
The term Industrial IoT (IIoT) refers to a multidimensional, tightly coupled chain of systems involving edge devices, cloud applications, sensors, algorithms, safety, security, vast protocol libraries, human-machine interfaces (HMI), and other elements that must interoperate. If you’re designing equipment destined for IIoT networks, you have a lot of requirements to meet. This article discusses several.
Some describe the IIoT as a convergence of information technology (IT) and operational technology (OT). The data-intensive nature of IT applications requires all these elements to come together with critical tasks performed reliably and on schedule. There’s usually a far more time-sensitive element to the OT applications. Designers generally meet these diverse IIoT requirements and challenges using embedded electronics at the IIoT edge (e.g., motion controllers, protection relays, programmable logic controllers, and similar systems) because embedded systems support deterministic communication and real-time control.
Equipment operating on IIoT networks at timescales on the order of hundreds of microseconds (or less) often needs to operate in factories and remote locations for decades without being touched—but it can be updated remotely via the networks that connect it. Relying solely on multicore embedded processors in these applications can lead to a series of difficult and costly marketing and engineering trade-offs focused on managing functional timing issues and performance bottlenecks. A more advanced approach that manages determinism, latency, and performance while eliminating interference between the IT and OT domains, and within subsystems in the OT domain, produces better results.
Sometimes, you just need hardware to meet these challenges because software is just too slow, even when running on multiple processor cores. Augmenting static microprocessor architectures with specialized hardware to create a balanced division of labor is not a new concept in the world of embedded electronics. What is new is the need to adapt both the tasks and the division of labor over time. For example, an upgraded predictive-maintenance algorithm might require more sensor inputs than before—or entirely new types of sensors with new types of interfaces. These sensors invariably require local processing as well, to offload the centralized cloud application that’s crunching the data from all of the edge nodes. Offloading the incremental sensor-processing calculations to hardware maintains overall system loading and avoids overburdening the edge processor.
TSN and Legacy Industrial Networks
The IIoT networks linking these new systems are equally dynamic. They evolve almost daily. Edge and system-wide protocols including OPC-UA (the OPC Foundation Open Platform Communications-Unified Architecture) and DDS (Data Distribution Service for Real-Time Systems) are gaining significant momentum. Both of these protocols benefit from time-sensitive networking (TSN), a deterministic Ethernet-based transport that manages mixed-criticality data streams. TSN significantly advances the vision of a unified network protocol across the edge and throughout the majority of the IIoT solution chain because it supports varying degrees of scheduled traffic alongside best-effort traffic.
The goal is to get TSN integrated into the IIoT Endpoint to enable scheduled traffic versus best-effort traffic with minimum impact on control function timing. Yet TSN is an evolving standard, so using ASICs or ASSP chipsets developed before all aspects of the TSN standard and market-specific profiles are finalized carries some risk. Similarly, attempting to add TSN support to an existing controller using a software-only approach may exhibit unpredictable timing behavior and might not meet timing requirements.
Ultimately, TSN requires a form of time-awareness not available in controllers today. A good TSN implementation requires the addition of both hardware and software—something that’s easily done using a device that integrates processors and programmable hardware like the Xilinx Zynq SoC and Zynq UltraScale+ MPSoC. These devices minimize the effects of adding TSN capabilities by implementing bandwidth-intensive, time-critical functions in hardware without significant impact to the software timing. (Xilinx offers an internally developed, fully standards-compatible, optimized TSN subsystem for the Zynq SoC and Zynq UltraScale+ MPSoC device families.)
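To make “scheduled traffic versus best-effort traffic” concrete, here is a toy Python sketch of the gate-control-list idea behind IEEE 802.1Qbv time-aware shaping, the kind of time-awareness a TSN implementation adds in hardware. The cycle length and window durations are made-up example values, and this models only the scheduling concept, not Xilinx's TSN subsystem.

```python
# Toy model of an 802.1Qbv-style gate control list: a repeating cycle
# opens transmission windows for different traffic classes.  Durations
# and class names are illustrative assumptions.

GATE_CONTROL_LIST = [
    # (duration_us, set of traffic classes whose gate is open)
    (250, {'scheduled'}),    # protected window for time-critical control traffic
    (750, {'best_effort'}),  # everything else shares the remainder of the cycle
]
CYCLE_US = sum(d for d, _ in GATE_CONTROL_LIST)  # 1000 us cycle

def open_classes(time_us):
    """Return the traffic classes allowed to transmit at a given time."""
    t = time_us % CYCLE_US
    for duration, classes in GATE_CONTROL_LIST:
        if t < duration:
            return classes
        t -= duration
    return set()

print(open_classes(100))   # inside the protected window
print(open_classes(600))   # inside the best-effort window
```

In real TSN hardware, this gating runs against a network-wide synchronized clock (IEEE 802.1AS), which is exactly why a software-only retrofit struggles to meet the timing.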
Because industrial networking is not new, IIoT systems will need to support the lengthy list of legacy industrial protocols that have been developed and used throughout the industry’s past. This need will exist for many years. Most modern SoCs don’t offer support and cannot easily be retrofitted for even a small fraction of these industrial protocols. In addition, the number of network interfaces that one controller must support can often exceed an SoC’s I/O capabilities. In contrast, the programmable hardware and I/O within Zynq SoCs and Zynq UltraScale+ MPSoCs easily support these legacy protocols without causing the unwanted timing side effects to mainstream software and firmware that a software-based networking approach might cause.
Security and the IIoT
IIoT design must follow a “defense-in-depth” approach to cybersecurity. Defense in depth is a form of multilayered security that reaches all the way from the supply chain to the end-customers’ enterprise and cloud application software. (That’s a very long chain—and one that requires its own article. This article’s scope is the chain of trust for deployed embedded electronics at the IIoT edge.)
With the network extending to the analog-digital boundary, data needs to be secured as soon as it enters the digital domain—usually at the edge. Defense-in-depth security requires a strong hardware root of trust that starts with secure and measured boot operations; run-time security through isolation of hardware, operating systems, and software; and secure communications. The entire network should employ trusted remote attestation servers for independent validation of credentials, certificate authorities, and so forth.
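The “measured boot” portion of that chain of trust can be illustrated with a short sketch: each boot stage is hashed and folded into a running measurement, which a remote attestation server can later check. This is an illustrative toy; the stage names below are assumptions, and real systems anchor the measurement in hardware (a TPM or equivalent root of trust) rather than a Python variable.

```python
# Sketch of a TPM-style measure-and-extend chain.  Stage names are
# illustrative; a real secure boot flow measures actual boot images.
import hashlib

def extend(measurement, stage_image):
    """Extend operation: new = H(old || H(stage))."""
    stage_hash = hashlib.sha256(stage_image).digest()
    return hashlib.sha256(measurement + stage_hash).digest()

measurement = b'\x00' * 32  # reset value before the first stage runs
for stage in [b'fsbl', b'bitstream', b'u-boot', b'kernel']:
    measurement = extend(measurement, stage)

print(measurement.hex())  # the value an attestation server would verify
```

Because each stage's hash is folded into the next, tampering with any single boot image changes the final measurement, which is what lets an attestation server detect a compromised chain.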
Security is not a static proposition. Five notable revisions have been made to the transport layer security (TLS) secure-messaging protocol since 1995, with more to come. Cryptographic algorithms that underpin protocols like TLS can be implemented in software, but such changes on the IT side can adversely affect time-critical OT performance. Architectural tools such as hypervisors and other isolation methods can reduce this impact, but it is also possible to pair these software concepts with the ability to support new, and even yet-to-be-defined, cryptographic functions years after equipment deployment if the design is based on devices that incorporate programmable hardware like the Zynq SoC and Zynq UltraScale+ MPSoC.
The Xilinx Technology Showcase 2017 will highlight FPGA-acceleration as used in Amazon’s cloud-based AWS EC2 F1 Instance and for high-performance, embedded-vision designs—including vision/video, autonomous driving, Industrial IoT, medical, surveillance, and aerospace/defense applications. The event takes place on Friday, October 6 at the Xilinx Summit Retreat Center in Longmont, Colorado.
Pinnacle Imaging Systems’ configurable Denali-MC HDR video and HDR still ISP (Image Signal Processor) IP can support 29 different HDR-capable CMOS image sensors including nine Aptina/ON Semi, six Omnivision, and eleven Sony sensors and twelve different pixel-level gain and frame-set HDR methods using 16-bit processing. The IP can be useful in a wide variety of applications including but certainly not limited to:
Intelligent Traffic systems
Pinnacle’s Denali-MC Image Signal Processor Core Block Diagram
Pinnacle has implemented its Denali-MC IP on a Xilinx Zynq Z-7045 SoC (from the photo on the Denali-MC product page, it appears that Pinnacle used a Xilinx Zynq ZC706 Eval Kit as the implementation vehicle) and it has produced this impressive 3-minute video of the IP in real-time action:
Please contact Pinnacle directly for more information about the Denali-MC ISP IP. The data sheet for the Denali-MC ISP core is here.
Avnet’s Zynq-based MiniZed is one of the most interesting dev boards we have looked at in this series. Thanks to its small form factor and its WiFi and Bluetooth capabilities, it is ideal for demonstrating Internet of Things (IoT) applications. We are now going to combine the FLIR Lepton camera module with the MiniZed and use them both to create a simple IoT application.
The approach I am going to follow for this demonstration is to update the MiniZed PetaLinux hardware design to do the following:
Interface with the FLIR Lepton camera module
Implement a video-processing pipeline that supports a 7-inch touch display connected to the MiniZed’s Pmod ports
The use of the local 7-inch touch display has two purposes. First, it demonstrates that the FLIR Lepton camera and the MiniZed are correctly working before I invest too much time in getting WiFi image transmission working. Second, the touch display could be used for local control and display, if required in an industrial (IIoT) application for example.
Opening the existing MiniZed Vivado project, you will notice that it contains the Zynq (for the first time in this series, a single-core Zynq) and an RTL block that interfaces with the WiFi and Bluetooth radio modules. This interface uses the processing system’s (PS’s) SDIO0 for the WiFi interface and UART0 for Bluetooth. When we develop software, we must therefore remember to define STDIN/STDOUT as PS UART1 if we need a UART for debugging.
To this diagram we will add the following IP Blocks:
Quad SPI Core – Configured for single-mode operation. Receives the VoSPI from the Lepton.
Video Timing Controller – Generates the video timing signals for display output.
VDMA – Reads an image from the PS DDR and converts it into a PL (programmable logic) AXI Stream.
AXI Stream to Video Out – Converts the AXI Streamed video data to parallel video with timing synchronization provided by the Video Timing Core.
Zed_ALI3_Controller – Display controller for the 7-inch touch-screen display.
The Zed_ALI3_Controller IP block can be downloaded from the AVNET GitHub. Once downloaded, running the TCL script within the Vivado project will create an IP block we can include in our design.
The clocking architecture is now a little more complicated and includes the new Zed_ALI3_Controller block. This module generates the pixel clock, which is supplied to the VTC and the AXIS to Video blocks. Zynq-generated clocks provide the reference clock to the Zed_ALI3_Controller (33.33MHz) and the AXI Networks.
This demonstration uses two AXI networks. The first is the general-purpose (GP) network. The software uses this GP AXI network to configure IP blocks within the PL, including the VDMA and VTC.
The second AXI network uses the High Performance AXI interface to transfer images from the PS DDR memory into the image-processing stream in the PL.
The complete block diagram
To connect the FLIR Lepton camera module, we will connect it as we did previously (p1 & p2) to the MiniZed shield connector, making use of the shield’s I2C and SPI connections.
The I2C pins are mapped into the constraints file already used for the temperature and motion sensors. Therefore, all we need to do is add the SPI I/O pin locations and standards.
The FLIR Lepton camera’s AREF supply pin is not enabled. To power the camera on the shield connector as in the previous example, we run a flying lead from the 5V supply on the opposite shield connector to the back of the FLIR Lepton camera.
FLIR Lepton Connected to the MiniZed in the Shield Header
We’ll need both Pmod connectors to output the image to the 7-inch display. The pin-out required appears below. The differential pins on the Pmod connector are used for the video output lines, with the I/O standard set to TMDS_33.
With the basic hardware design in place, all that remains now is to generate the software builds. Initially, I will build a bare-metal application to verify that this design functions as intended. This step-by-step process stems from my strong belief in incremental verification as a project progresses.
You need to install the MiniZed board definition files into your Vivado /data/boards/board_files directory to work with the MiniZed dev board. If you have not already done so, they are available here.
Time-sensitive networking (TSN) makes the IIoT (industrial Internet of things) run on time and if you are developing any IIoT or Industrie 4.0 equipment, you’ll need to know about and then use the deterministic TSN protocol. Xilinx has two time-sensitive TSN announcements you need to know about sooner rather than later.
First, Xilinx’s Product Manager for Industrial Applications Michael Zapke will present a free TSN Webinar on September 7 at 7am PDT. Register for Michael’s TSN Webinar here.
Second, Xilinx has just put its IEEE-compliant 100M/1G TSN Subsystem IP core (with one year of maintenance) on sale for a “significantly reduced price,” now through September 29, through a TSN Headstart program. (Sorry, that’s all I’m allowed to say about the sale.) At this price, you will definitely want to check into the offer if you’re developing IIoT equipment based on Xilinx’s Zynq SoC or Zynq UltraScale+ MPSoC.
If you want to learn more about this sale and wish to request access to additional information about the TSN Subsystem IP core, click here.
Note: The number of TSN Headstart program participants is limited, so act sooner rather than later—like now—if this offer interests you.
For more TSN coverage in Xilinx’s Xcell Daily blog, see:
The first part of the article gives a detailed description of the HSR and PRP protocols. PRP is implemented in the network nodes rather than in the network. PRP nodes have two Ethernet ports and are called Dual Attached Nodes (DANs). Each DAN Ethernet port connects to one of two independent Ethernet networks (LAN A and LAN B), implementing a dual-redundant network topology. DANs send the same frames over both networks. HSR redundancy relies on sending packets in both directions through a ring network.
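The duplicate-discard behavior at a PRP receiver can be sketched in a few lines: because a DAN sends the same frame over both LANs, the receiver accepts the first copy and drops the second. This is an illustrative toy, not an IEC 62439-3 implementation; the real protocol uses a bounded sequence-number window rather than an unbounded set.

```python
# Toy model of PRP duplicate discard at a receiving node.  Each frame
# carries the sending node's identity and a sequence number; both LAN
# copies carry the same pair, so the second copy is recognized and dropped.

class DuplicateDiscard:
    def __init__(self):
        self.seen = set()  # a real implementation bounds this window

    def accept(self, source, seq):
        """Return True for the first copy of (source, seq), False after."""
        key = (source, seq)
        if key in self.seen:
            return False
        self.seen.add(key)
        return True

rx = DuplicateDiscard()
print(rx.accept('dan1', 7))   # first copy (say, via LAN A): accepted
print(rx.accept('dan1', 7))   # same frame via LAN B: discarded
print(rx.accept('dan1', 8))   # next frame: accepted
```

The payoff is that either LAN can fail completely and the application still sees every frame exactly once, with zero switchover time.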
Here’s a diagram from the article showing an example of an HSR ring-based network topology:
As with PRP, each HSR network node again has two Ethernet ports and connects to the network as a Doubly Attached Node with HSR (DANH). Packets travel through the nodes in both directions in the HSR ring so a single break anywhere in the network can be detected while packet traffic continues to reach all destinations. The Red Box in the diagram is a DANH adapter for conventional Ethernet equipment that lacks DANH network connectivity. (The PRP protocol also supports the Red Box concept for equipment with only one Ethernet port.)
IIoT systems that implement both HSR and PRP protocols increase network system reliability and provide greater safety. Both of these characteristics are highly desirable in IIoT network systems.
The rest of the article describes SoC-e’s HSR/PRP Switch IP, which is implemented in the PL (programmable Logic) of a Zynq SoC contained on the Avnet MicroZed SOM that’s part of the Avnet MicroZed I4EK.
For more information about SoC-e’s HSR/PRP reference design and IP, click here.
Although humans once served as the final inspectors for PCBs, today’s component dimensions and manufacturing volumes mandate the use of camera-based automated optical inspection (AOI) systems. Amfax has developed a 3D AOI system—the a3Di—that uses two lasers to make millions of 3D measurements with better than 3μm accuracy. One of the company’s customers uses an a3Di system to inspect 18,000 assembled PCBs per day.
Up and downstream SMEMA (Surface Mount Equipment Manufacturers Association) conveyor control
Operator manual controls for PCB width control
System emergency stop
The system provides height-graded images like this:
3D Image of a3Di’s Measurement Data: Colors represent height, with Z resolution down to less than a micron. The blue section at the top indicates signs of board warp. Laser etched component information appears on some of the ICs.
The a3Di system then compares this image against a stored golden reference image to detect manufacturing defects.
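At its core, that comparison step amounts to differencing the measured height map against the golden reference and flagging out-of-tolerance points. Here is a minimal sketch of the idea with toy one-dimensional arrays; the 3μm tolerance echoes the system's stated accuracy, but everything else is an illustrative assumption about how such a comparison might work, not Amfax's algorithm.

```python
# Toy golden-image comparison: flag measurement points whose height
# deviates from the golden reference by more than a tolerance.
# Real AOI systems work on full 2D height maps with far richer rules.

def defect_pixels(measured, golden, tol_um=3.0):
    """Return indices where |measured - golden| exceeds the tolerance (um)."""
    flagged = []
    for i, (m, g) in enumerate(zip(measured, golden)):
        if abs(m - g) > tol_um:
            flagged.append(i)
    return flagged

golden   = [100.0, 100.0, 250.0, 250.0]   # reference heights in um
measured = [101.0,  99.5, 240.0, 251.0]   # third point: missing solder?
print(defect_pixels(measured, golden))    # -> [2]
```

A production system would then classify each flagged region (missing component, insufficient solder, warp) rather than just reporting indices.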
Amfax reports that the “CompactRIO system has proven to be dependable, reliable, and cost-effective.” In addition, the company found it could get far better timing resolution with the CompactRIO system than the 1msec resolution usually provided by PLC controllers.
This project was a 2017 NI Engineering Impact Award Finalist in the Electronics and Semiconductor category last month at NI Week. It is documented in this NI case study.
Hyundai Heavy Industries (HHI) is the world’s foremost shipbuilding company and the company’s Engine and Machinery Division (HHI-EMD) is the world’s largest marine diesel engine builder. HHI’s HiMSEN medium-sized engines are four-stroke diesels with output power ranging from 960kW to 25MW. These engines power electric generators on large ships and serve as the propulsion engine on medium and small ships. HHI-EMD is always developing newer, more fuel-efficient engines because the fuel costs for these large diesels run about $2000/hour. Better fuel efficiency will significantly reduce operating costs and emissions.
For that research, HHI-EMD developed monitoring and diagnostic equipment to better understand engine combustion performance and an HIL system to test new engine controller designs. The test and HIL systems are based on equipment from National Instruments (NI).
Engine instrumentation must be able to monitor 10-cylinder engines running at thousands of RPM while measuring crankshaft angle to 0.1-degree resolution. From that information, the engine test and monitoring system calculates in-cylinder peak pressure, mean effective pressure, and cycle-to-cycle pressure variation. All this must happen every 10μsec for each cylinder.
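A quick back-of-the-envelope check shows how tight those numbers are. The 1200 RPM figure below is an assumed example speed for a medium-speed marine diesel, not a number from the case study:

```python
# How long does the crankshaft take to sweep 0.1 degree at a given RPM?
# This bounds the time available to timestamp each crank-angle increment.

def time_per_tenth_degree_us(rpm):
    degrees_per_second = rpm / 60.0 * 360.0
    return 0.1 / degrees_per_second * 1e6

# At an assumed 1200 RPM, the 0.1-degree window is roughly 13.9 us,
# so a 10 us per-cylinder processing budget leaves very little slack;
# at higher speeds the window shrinks proportionally.
print(round(time_per_tenth_degree_us(1200), 1))
```

Margins this thin are a large part of why the acquisition and analysis run in the controller's FPGA fabric rather than purely in software.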
HHI-EMD elected to use an NI cRIO-9035 Controller, which incorporates a Xilinx Kintex-7 70T FPGA, to serve as the platform for developing its HiCAS test and data-acquisition system. The HiCAS system monitors all aspects of the engine under test including engine speed, in-cylinder pressure, and pressures in the intake and exhaust systems. This data helped HHI-EMD engineers analyze the engine’s overall performance and the performance of key parts using thermodynamic analysis. HiCAS provides real-time analysis of dynamic data including:
In-cylinder peak pressure
Indicated mean effective pressure and cycle-to-cycle variation
Cyclic moving parts fault diagnosis
Using the collected data, the engineering team then developed a model of the diesel engine, resulting in the development of an HIL system used to exercise the engine controllers. This engine model runs in real time on an NI PXI system synchronized with the high-speed signal-sensor simulation software running on the PXI system’s multifunction FPGA-based FlexRIO module. The HIL system transmits signals to the engine controllers, simulating an operating engine and eliminating the operating costs of a large diesel engine during these tests. HHI-EMD credits the FPGAs in these systems for making the calculations run fast enough for real-time simulation. The simulated engine also permits fault testing without the risk of damaging an actual engine. Of course, all of this is programmed using NI’s LabVIEW systems engineering software and LabVIEW FPGA.
HHI-EMD HIL Simulator for Marine Diesel Engines
According to HHI-EMD, development of the HiCAS engine-monitoring system and virtual verification based on the HIL system shortened development time from more than three years to one, significantly accelerating the time-to-market for HHI-EMD’s more eco-friendly marine diesel engines.
This project was a 2017 NI Engineering Impact Award Finalist in the Transportation and Heavy Equipment category last month at NI Week and won the 2017 HPE Edgeline Big Analog Data Award. It is documented in this NI case study.
Many engineers in Canada wear the Iron Ring on their finger, presented to engineering graduates as a symbolic, daily reminder that they have an obligation not to design structures or other artifacts that fail catastrophically. (Legend has it that the iron in the ring comes from the first Quebec Bridge—which collapsed during its construction in 1907—but the legend appears to be untrue.) All engineers, whether wearing the Canadian Iron Ring or not, feel an obligation to develop products that do not fail dangerously. For buildings and other civil engineering works, that usually means designing structures with healthy design margins even for worst-case projected loading. However, many structures encounter worst-case loads infrequently or never. For example, a sports stadium experiences maximum loading for perhaps 20 or 30 days per year, for only a few hours at a time when it fills with sports fans. The rest of the time, the building is empty and the materials used to ensure that the structure can handle those loads are not needed to maintain structural integrity.
The total energy consumed by a structure over its lifetime is a combination of the energy needed to mine and fabricate the building materials and to build the structure (embodied energy) and the energy needed to operate the building (operational energy). The resulting energy curve looks something like this:
For completely passive structures, which describes most structures built over the past several thousand years, embodied energy dominates the total consumed energy because structural members must be designed to bear the full design load at all times. Alternatively, a smart structure with actuators that stiffen the structure only when needed will require more operational energy but the total required embodied energy will be smaller. Looking at the above conceptual graph, a well-designed active-passive system minimizes the total required energy for the structure.
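The trade-off can be made concrete with a toy model: embodied energy grows with the fraction of load carried passively by material, while operational (actuation) energy grows as that fraction shrinks. The cost functions below are invented purely for illustration and are not from the research described here.

```python
# Toy active-passive energy trade-off.  Both functions are made-up
# illustrative curves; only their opposing trends reflect the argument
# in the text.

def embodied(passive_fraction):
    # More structural material to carry load passively -> more embodied energy.
    return 100.0 * passive_fraction

def operational(passive_fraction):
    # Less passive material -> actuators must work harder over the lifetime.
    return 60.0 / (passive_fraction + 0.2)

# Sweep the design space and pick the mix with the lowest total energy.
candidates = [i / 100.0 for i in range(10, 100)]
best = min(candidates, key=lambda f: embodied(f) + operational(f))
print(best)  # neither fully passive nor fully active wins
```

The point of the sketch is only that the minimum of the total-energy curve sits at an intermediate mix, which is exactly the "well-designed active-passive system" the conceptual graph describes.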
Active control has already been used in structure design, most widely for vibration control. During his doctoral work, Gennaro Senatore formulated a new methodology for designing adaptive structures. His research project was a collaboration between University College London and Expedition Engineering. As part of that project, Senatore built a large-scale prototype of an active-passive structure at the University College London structures laboratory. The resulting prototype is a 6m cantilever spatial truss with a 37.5:1 span-to-depth ratio. Here’s a photo of the large-scale prototype truss:
You can see the actuators just beneath the top surface of the truss. When the actuators are not energized, the cantilever truss flexes quite a lot with a load placed at the extreme end. However, this active system detects the load-induced flexion and compensates by energizing the actuators and stiffening the cantilever.
Here’s a photo showing the amount of flex induced by a 100kg load at the end of the cantilever without and with energized actuators:
The top half of the image shows that the truss flexes 170mm under load when the actuators are not energized, but only 2mm when the system senses the load and energizes the linear actuators.
The truss incorporates ten linear electric actuators that stiffen the truss when sensors detect a load-induced deflection. The control system for this active-passive truss consists of a National Instruments (NI) CompactRIO cRIO-9024 controller, 45 strain-gage sensors, 10 actuators, and five driver boards (one for each actuator pair). The NI cRIO-9024 controller pairs with a card cage that accepts I/O modules and incorporates a Virtex-5 FPGA for reconfigurable I/O. (That’s what the “RIO” in cRIO stands for.) In this application, the integral Virtex-5 FPGA also provides in-line processing for acquired and generated signals.
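The sense-and-compensate loop can be sketched in a few lines. This is a hypothetical illustration, not the actual cRIO/LabVIEW implementation: the function name, gains, and the simple proportional law are all invented, and a real controller would use a gain matrix relating each of the 45 gauges to each of the 10 actuators.

```python
# Hypothetical sketch of the sense-and-compensate loop described above.
# The real system runs on an NI cRIO controller with FPGA-based I/O;
# all names and gains here are invented for illustration.

def control_step(strain_readings, setpoint=0.0, gain=0.8):
    """Proportional controller: map averaged strain error to actuator
    commands that stiffen the truss against the measured deflection."""
    error = setpoint - sum(strain_readings) / len(strain_readings)
    # One command per actuator; a real controller would use a gain
    # matrix relating each gauge to each actuator.
    return [gain * error] * 10

commands = control_step([0.12, 0.15, 0.11, 0.14])  # load-induced strain
print(commands[0])  # negative: actuators pull against the deflection
```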
A large structure would require many such subsystems, all communicating through a network. This is clearly one very useful way to employ the IIoT in structures.
This project was a 2017 NI Engineering Impact Award Finalist in the Industrial Machinery and Control category last month at NI Week. It is documented in this NI case study, which includes many more technical details and a short video showing the truss in action as a load is applied.
When someone asks where Xilinx All Programmable devices are used, I find it a hard question to answer because there’s such a wide range of applications—as demonstrated by the thousands of Xcell Daily blog posts I’ve written over the past several years.
Now, there’s a 5-minute “Powered by Xilinx” video with clips from several companies using Xilinx devices for applications including:
Machine learning for manufacturing
Autonomous cars, drones, and robots
Real-time 4K, UHD, and 8K video and image processing
VR and AR
High-speed networking by RF, LED-based free-air optics, and fiber
Last week in Austin on the NI Week exhibit floor, you could see a pair of slot cars racing around a moderately sized track while avoiding obstacles, with real-time position sensing and control managed by TSN-enabled National Instruments (NI) cRIO-9035 8-slot CompactRIO controllers communicating through a Cisco IE-4000 Series Managed Industrial Ethernet Switch. (TSN is “Time-Sensitive Networking,” a set of IEEE Ethernet standards for deterministic packet transmission and handling.) NI’s cRIO-9035 CompactRIO controllers pair an Intel Atom 32-bit processor with a Xilinx Kintex-7 FPGA to implement highly responsive, real-time control obtainable only through the speed of an FPGA’s programmable hardware.
Here’s a video of the TSN slot car system in action with a clear (and graphic) explanation of TSN’s advantages in the real world:
Can we talk? About security? You know that it’s a dangerous world out there. For a variety of reasons, bad actors want to steal your data, or steal your customers’ data, or disrupt operations. Your job is not only to design something that works; these days, you also need to design equipment that resists hacking and tampering. PFP Cybersecurity provides IP that helps you create systems that have robust defenses against such exploits.
“PFP” stands for “power fingerprinting,” which combines AI and analog power analysis to create high-speed, next-generation cyber protection that can detect tampering in milliseconds instead of days, weeks, or months. It does this by observing the tiny changes to a system’s power consumption during normal operation, learning what’s normal, and then monitoring power consumption to detect an abnormal situation that might signal tampering.
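The "learn normal, flag abnormal" idea can be illustrated with a toy statistical detector. To be clear, this is not PFP Cybersecurity's algorithm—their product combines analog power capture with machine learning—but a minimal sketch of the underlying concept using a mean and standard deviation over a baseline power trace.

```python
# Illustrative only: a toy version of the "learn normal, flag abnormal"
# idea behind power fingerprinting. PFP's actual product uses analog
# power capture plus machine learning; this sketch just characterizes
# a baseline power trace statistically.
import statistics

def learn_baseline(samples):
    """Characterize normal power draw from a training trace (watts)."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(sample, mean, stdev, threshold=4.0):
    """Flag a reading more than `threshold` standard deviations from normal."""
    return abs(sample - mean) > threshold * stdev

mean, stdev = learn_baseline([2.01, 1.98, 2.03, 2.00, 1.99, 2.02])
print(is_anomalous(2.01, mean, stdev))  # normal operation
print(is_anomalous(2.60, mean, stdev))  # possible tampering
```

The practical point from the video is speed: because the monitoring runs continuously against live power measurements, a deviation shows up in milliseconds rather than after weeks of forensic analysis.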
The 3-minute video below discusses these aspects of PFP Cybersecurity’s IP and also discusses why the Xilinx Zynq SoC and Zynq UltraScale+ MPSoC are a perfect fit for this security IP. The Zynq device families can all perform high-speed signal processing, have built-in analog conversion circuitry for measuring voltage and current, and can implement high-performance machine-learning algorithms for analyzing power usage.
Originally, PFP Cybersecurity designed its Zynq-based system to monitor other systems but, as the video discusses, if the target system is already based on a Zynq device, it can monitor itself and return itself to a known good state if tampering is suspected.
Plethora IIoT develops cutting‑edge solutions to Industry 4.0 challenges using machine learning, machine vision, and sensor fusion. In the video below, a Plethora IIoT Oberon system monitors power consumption, temperature, and the angular speed of three positioning servomotors in real time on a large ETXE-TAR Machining Center for predictive maintenance—to spot anomalies with the machine tool and to schedule maintenance before these anomalies become full-blown faults that shut down the production line. (It’s really expensive when that happens.) The ETXE-TAR Machining Center is center-boring engine crankshafts. This bore is the critical link between a car’s engine and the rest of the drive train including the transmission.
Plethora uses Xilinx Zynq SoCs and Zynq UltraScale+ MPSoCs as the heart of its Oberon system because these devices’ unique combination of software-programmable processors, hardware-programmable FPGA fabric, and programmable I/O allow the company to develop real-time systems that implement sensor fusion, machine vision, and machine learning in one device.
Initially, Plethora IIoT’s engineers used the Xilinx Vivado Design Suite to develop their Zynq-based designs. Then they discovered Vivado HLS, which allows you to take algorithms in C, C++, or SystemC directly to the FPGA fabric using hardware compilation. The engineers’ first reaction to Vivado HLS: “Is this real or what?” They discovered that it was real. Then they tried the SDSoC Development Environment with its system-level profiling, automated software acceleration using programmable logic, automated system connectivity generation, and libraries to speed programming. As they say in the video, “You just have to program it and there you go.”
Here’s the video:
Plethora IIoT is showcasing its Oberon system in the Industrial Internet Consortium (IIC) Pavilion during the Hannover Messe Show being held this week. Several other demos in the IIC Pavilion are also based on Zynq All Programmable devices.
What do you do if you want to build a low-cost state-of-the-art, experimental SDR (software-defined radio) that’s compatible with GNURadio—the open-source development toolkit and ecosystem of choice for serious SDR research? You might want to do what Lukas Lao Beyer did. Start with the incredibly flexible, full-duplex Analog Devices AD9364 1x1 Agile RF Transceiver IC and then give it all the processing power it might need with an Artix-7 A50T FPGA. Connect these two devices on a meticulously laid out circuit board taking all RF-design rules into account and then write the appropriate drivers to fit into the GNURadio ecosystem.
Sounds like a lot of work, doesn’t it? It’s taken Lukas two years and four major design revisions to get to this point.
On April 11, the third free Webinar in Xilinx's "Precise, Predictive, and Connected Industrial IoT" series will provide insight into the role of Zynq All Programmable SoCs in the breadth of applications across the IIoT Edge and the connectivity between them. A brief summary of IIoT trends will be presented, followed by an overview of the Data Distribution Service (DDS) IIoT databus standard presented by RTI, the IIoT Connectivity Company, and an explanation of how DDS and OPC-UA target different connectivity challenges in IIoT systems.
Webinar attendees will also learn:
The main benefits of data-centric communications using DDS, including reliability, security, real-time response, and ease of integration.
When and how to connect DDS systems to OPC-UA systems for the most intelligent and innovative distributed Industrial IoT and Industry 4.0 systems.
How Xilinx’s All Programmable Industrial Control System (APICS) integrates DDS and OPC-UA.
Adam Taylor and Xilinx’s Sr. Product Manager for SDSoC and Embedded Vision Nick Ni have just published an article on the EE News Europe Web site titled “Machine learning in embedded vision applications.” That title’s pretty self-explanatory, but there are a few points I’d like to highlight. Then you can go read the full article yourself.
As the article states, “Machine learning spans several industry mega trends, playing a very prominent role within not only Embedded Vision (EV), but also Industrial Internet of Things (IIoT) and Cloud Computing.” In other words, if you’re designing products for any embedded market, you might well find yourself at a competitive disadvantage if you’re not adding machine-learning features to your road map.
This article closely ties machine learning with neural networks, including Feed-forward Neural Networks (FNNs), Recurrent Neural Networks (RNNs), Deep Neural Networks (DNNs), and Convolutional Neural Networks (CNNs). Neural networks are not programmed; they’re trained. Then, if they’re part of an embedded design, they’re deployed. Training is usually done using floating-point neural-network implementations but, for efficiency (power and cost), deployed neural networks can use fixed-point representations with very little or no loss of accuracy. (See “Counter-Intuitive: Fixed-Point Deep-Learning Inference Delivers 2x to 6x Better CNN Performance with Great Accuracy.”)
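The float-to-fixed-point reduction can be sketched with a simple symmetric quantizer. This is a generic textbook scheme, not the specific method used by Xilinx tools; the helper names and weight values are invented for illustration.

```python
# Illustrative sketch of symmetric 8-bit fixed-point quantization, the
# kind of reduction described above for deployed networks. Generic
# scheme; not the specific method used by Xilinx tools.

def quantize(weights, bits=8):
    """Map float weights to signed fixed-point integers plus a scale."""
    qmax = 2 ** (bits - 1) - 1            # 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer representation."""
    return [v * scale for v in q]

weights = [0.52, -0.13, 0.98, -0.77]
q, scale = quantize(weights)
restored = dequantize(q, scale)
print(max(abs(a - b) for a, b in zip(weights, restored)))  # small error
```

Each weight shrinks from 32 bits to 8, and the round-trip error stays below half a quantization step—which is why, as the article notes, inference accuracy suffers very little.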
The programmable logic inside of Xilinx FPGAs, Zynq SoCs, and Zynq UltraScale+ MPSoCs is especially good at implementing fixed-point neural networks, as described in this article by Nick Ni and Adam Taylor. (Go read the article!)
Meanwhile, this is a good time to remind you of the recent Xilinx introduction of the reVISION stack for neural network development using Xilinx All Programmable devices. For more information about the Xilinx reVISION stack, see:
In yesterday’s EETimes article titled “How will Ethernet go real-time for industrial networks?,” author Richard Wilson interviews National Instruments’ Global Technology and Marketing Director Rahman Jamal about using OPC-UA (the OPC Foundation’s Unified Architecture) and TSN (time-sensitive networking) to build industrial Ethernet networks (IIoT/Industrie 4.0) that deliver real-time response. (Yes, yes, yes, “real-time” is a loosely defined term where “real” depends on your system’s temporal reality.) As Jamal states in the interview, some constrained industrial Ethernet network topologies need no help to achieve real-time operation. In other cases and for other topologies, you need Ethernet implementations that are “heavily modified at the hardware level to achieve performance.”
One of the hardware additions that can really help is the hardware implementation of the IEEE 1588v2 PTP (Precision Time Protocol) clock-synchronization standard. PTP permits each piece of network-connected equipment to be synchronized using a 64-bit timer, which can be used for time-stamping, synchronization, control and as a common time reference to implement TSN.
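The arithmetic at the heart of IEEE 1588 is a two-way timestamp exchange: the master timestamps a Sync message at t1, the slave receives it at t2, the slave sends a Delay_Req at t3, and the master timestamps its arrival at t4. Assuming a symmetric path, the slave's clock offset and the one-way path delay fall out of the four timestamps. Here is a minimal sketch of that calculation (the timestamp values are invented):

```python
# The basic IEEE 1588 delay request-response exchange: master timestamps
# Sync at t1, slave receives at t2, slave sends Delay_Req at t3, master
# timestamps arrival at t4. Assumes a symmetric path. Values in ns.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """Return (slave clock offset, mean path delay) from one exchange."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Example: slave clock runs 1500 ns ahead; wire delay is 500 ns each way.
offset, delay = ptp_offset_and_delay(t1=0, t2=2000, t3=10000, t4=9000)
print(offset, delay)  # 1500.0 500.0
```

The reason hardware implementation matters is that t1 through t4 must be captured at the PHY/MAC boundary; timestamping in software adds jitter from interrupt latency and scheduling, which is exactly why software PTP stacks fall short for demanding IIoT applications.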
PTP implementation is an ideal task for an IP block instantiated in programmable logic (see last year’s Xcell Daily blog post “Intelligent Gateways Make a Factory Smarter,” written by SoC-e (System on Chip engineering) founder and CEO Armando Astarloa). SoC-e has implemented just such an IEEE 1588v2 PTP IP core in a Xilinx Zynq SoC, which is the core logic device inside of the company’s CPPS-Gate40 Sensor intelligent IIoT gateway. (Note: Software PTP implementations are neither fast nor deterministic enough for many IIoT applications.)
SoC-e CPPS-Gate40 Sensor intelligent IIoT gateway
You can see the SoC-e PTP IP core in the very center of this CPPS-Gate40 block diagram:
According to the SoC-e Web page, the company’s IEEE 1588v2 IP core in the CPPS-Gate40 Sensor gateway can deliver sub-microsecond network synchronization. How is such a small number possible? As Jamal says in his EETimes interview, “bit times (time on the wire) for a 64-byte frame at GigE rates is 512ns.” That’s how.
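Jamal's 512ns figure is easy to verify: a minimum-size 64-byte Ethernet frame is 512 bits, and at gigabit rates one bit time is exactly 1ns.

```python
# Sanity-checking the quoted figure: a minimum 64-byte Ethernet frame
# occupies 64 * 8 = 512 bit times, and at 1 Gbps one bit time is 1 ns.
frame_bytes = 64
bits = frame_bytes * 8          # 512 bits on the wire
ns_per_bit = 1                  # at 1 Gbps, one bit time is 1 ns
wire_time_ns = bits * ns_per_bit
print(wire_time_ns)  # 512
```

Since whole frames transit the wire in about half a microsecond, a synchronization protocol with hardware timestamping can plausibly resolve events well below the microsecond level.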
I did not go to Embedded World in Nuremberg this week but apparently SemiWiki’s Bernard Murphy was there and he’s published his observations about three Zynq-based reference designs that he saw running in Aldec’s booth on the company’s Zynq-based TySOM embedded dev and prototyping boards.
“At the show, Aldec provided insight into using the solution to model the ARM core running in QEMU, together with a MIPI CSI-2 solution running in the FPGA. But Aldec didn’t stop there. They also showed off three reference designs designed using this flow and built on their TySOM boards.
“The first reference design targets multi-camera surround view for ADAS (automotive – advanced driver assistance systems). Camera inputs come from four First Sensor Blue Eagle systems, which must be processed simultaneously in real-time. A lot of this is handled in software running on the Zynq ARM cores but the computationally-intensive work, including edge detection, colorspace conversion and frame-merging, is handled in the FPGA. ADAS is one of the hottest areas in the market and likely to get hotter since Intel just acquired Mobileye.
“The next reference design targets IoT gateways – also hot. Cloud interface, through protocols like MQTT, is handled by the processors. The gateway supports connection to edge devices using wireless and wired protocols including Bluetooth, ZigBee, Wi-Fi and USB.
“Face detection for building security, device access and identifying evil-doers is also growing fast. The third reference design is targeted at this application, using similar capabilities to those on the ADAS board, but here managing real-time streaming video as 1280x720 at 30 frames per second, from an HDR-CMOS image sensor.”
“So yes, Aldec put together a solution combining their simulator with QEMU emulation and perhaps that wouldn’t justify a technical paper in DVCon. But business-wise they look like they are starting on a much bigger path. They’re enabling FPGA-based system prototype and build in some of the hottest areas in systems today and they make these solutions affordable for design teams with much more constrained budgets than are available to the leaders in these fields.”
This week, EETimes’ Junko Yoshida published an article titled “Xilinx AI Engine Steers New Course” that gathers some comments from industry experts and from Xilinx with respect to Monday’s reVISION stack announcement. To recap, the Xilinx reVISION stack is a comprehensive suite of industry-standard resources for developing advanced embedded-vision systems based on machine learning and machine inference.
As Xilinx Senior Vice President of Corporate Strategy Steve Glaser tells Yoshida, “Xilinx designed the stack to ‘enable a much broader set of software and systems engineers, with little or no hardware design expertise, to develop intelligent vision-guided systems easier and faster.’”
“While talking to customers who have already begun developing machine-learning technologies, Xilinx identified ‘8 bit and below fixed point precision’ as the key to significantly improve efficiency in machine-learning inference systems.”
Yoshida also interviewed Karl Freund, Senior Analyst for HPC and Deep Learning at Moor Insights & Strategy, who said:
“Artificial Intelligence remains in its infancy, and rapid change is the only constant.” In this circumstance, Xilinx seeks “to ease the programming burden to enable designers to accelerate their applications as they experiment and deploy the best solutions as rapidly as possible in a highly competitive industry.”
She also quotes Loring Wirbel, a Senior Analyst at The Linley group, who said:
“What’s interesting in Xilinx's software offering, [is that] this builds upon the original stack for cloud-based unsupervised inference, Reconfigurable Acceleration Stack, and expands inference capabilities to the network edge and embedded applications. One might say they took a backward approach versus the rest of the industry. But I see machine-learning product developers going a variety of directions in trained and inference subsystems. At this point, there's no right way or wrong way.”
There’s a lot more information in the EETimes article, so you might want to take a look for yourself.