Today, Xilinx announced a comprehensive suite of industry-standard resources for developing advanced embedded-vision systems based on machine learning and machine inference. It’s called the reVISION stack and it allows design teams without deep hardware expertise to use a software-defined development flow to combine efficient machine-learning and computer-vision algorithms with Xilinx All Programmable devices to create highly responsive systems. (Details here.)
The Xilinx reVISION stack includes a broad range of development resources for platform, algorithm, and application development including support for the most popular neural networks: AlexNet, GoogLeNet, SqueezeNet, SSD, and FCN. Additionally, the stack provides library elements such as pre-defined and optimized implementations for CNN network layers, which are required to build custom neural networks (DNNs and CNNs). The machine-learning elements are complemented by a broad set of acceleration-ready OpenCV functions for computer-vision processing.
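To give a feel for what a CNN layer primitive computes, here's a minimal pure-Python sketch of a 2D convolution layer followed by a ReLU activation. This is an illustration of the operation only, not the reVISION library code; the optimized reVISION implementations run in programmable logic and are far more elaborate.

```python
def conv2d_relu(img, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in most CNN
    frameworks) over a 2D list, followed by a ReLU activation."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(img) - kh + 1, len(img[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            acc = 0.0
            for u in range(kh):
                for v in range(kw):
                    acc += img[i + u][j + v] * kernel[u][v]
            out[i][j] = max(acc, 0.0)  # ReLU: clamp negatives to zero
    return out
```

A real CNN stacks many such layers, each with many kernels; the inner multiply-accumulate loops are exactly the kind of regular, parallelizable work that maps well onto programmable logic.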
For application-level development, Xilinx supports industry-standard frameworks including Caffe for machine learning and OpenVX for computer vision. The reVISION stack also includes development platforms from Xilinx and third parties, which support various sensor types.
The reVISION development flow starts with a familiar, Eclipse-based development environment; the C, C++, and/or OpenCL programming languages; and associated compilers all incorporated into the Xilinx SDSoC development environment. You can now target reVISION hardware platforms within the SDSoC environment, drawing from a pool of acceleration-ready, computer-vision libraries to quickly build your application. Soon, you’ll also be able to use the Khronos Group’s OpenVX framework as well.
For machine learning, you can use popular frameworks including Caffe to train neural networks. Within one Xilinx Zynq SoC or Zynq UltraScale+ MPSoC, you can use Caffe-generated .prototxt files to configure a software scheduler running on one of the device’s ARM processors to drive CNN inference accelerators—pre-optimized for and instantiated in programmable logic. For computer vision and other algorithms, you can profile your code, identify bottlenecks, and then designate specific functions that need to be hardware-accelerated. The Xilinx system-optimizing compiler then creates an accelerated implementation of your code, automatically including the required processor/accelerator interfaces (data movers) and software drivers.
The Xilinx reVISION stack is the latest in an evolutionary line of development tools for creating embedded-vision systems. Xilinx All Programmable devices have long been used to develop such vision-based systems because these devices can interface to any image sensor and connect to any network—which Xilinx calls any-to-any connectivity—and they provide the large amounts of high-performance processing horsepower that vision systems require.
Initially, embedded-vision developers used the existing Xilinx Verilog and VHDL tools to develop these systems. Xilinx introduced the SDSoC development environment for HLL-based design two years ago and, since then, SDSoC has dramatically and successfully shortened development cycles for thousands of design teams. Xilinx’s new reVISION stack now enables an even broader set of software and systems engineers to develop intelligent, highly responsive embedded-vision systems faster and more easily using Xilinx All Programmable devices.
And what about the performance of the resulting embedded-vision systems? How do their performance metrics compare against systems based on embedded GPUs or the typical SoCs used in these applications? Xilinx-based systems significantly outperform the best of this group, which employ Nvidia devices. Benchmarks of the reVISION flow using Zynq SoC targets against the Nvidia Tegra X1 have shown as much as:
6x better images/sec/watt in machine learning
42x higher frames/sec/watt for computer-vision processing
1/5th the latency, which is critical for real-time applications
There is huge value to having a very rapid and deterministic system-response time and, for many systems, the faster response time of a design that's been accelerated using programmable logic can mean the difference between success and catastrophic failure. For example, the figure below shows the difference in response time between a car’s vision-guided braking system created with the Xilinx reVISION stack running on a Zynq UltraScale+ MPSoC relative to a similar system based on an Nvidia Tegra device. At 65mph, the Xilinx embedded-vision system’s response time stops the vehicle 5 to 33 feet faster depending on how the Nvidia-based system is implemented. Five to 33 feet could easily mean the difference between a safe stop and a collision.
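The 5-to-33-foot figure is easy to sanity-check with back-of-envelope arithmetic if we assume (my assumption, not stated in the source) that the gap comes entirely from response latency, with equal braking performance once the brakes engage. At 65 mph the car covers about 95.3 feet per second, so 5 to 33 feet corresponds to roughly 50 to 350 milliseconds of extra latency:

```python
MPH_TO_FPS = 5280.0 / 3600.0  # 1 mph = ~1.4667 ft/s

def extra_stopping_distance_ft(speed_mph, latency_saved_s):
    """Extra distance covered while a slower system is still reacting."""
    return speed_mph * MPH_TO_FPS * latency_saved_s

def latency_for_distance_s(speed_mph, distance_ft):
    """Response-latency difference implied by a stopping-distance gap."""
    return distance_ft / (speed_mph * MPH_TO_FPS)
```

At highway speeds, every tenth of a second of processing latency translates into nearly ten feet of travel, which is why the 1/5th-latency benchmark above matters so much for vision-guided braking.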
The last two years have generated more machine-learning technology than all of the advancements over the previous 45 years and that pace isn't slowing down. Many new types of neural networks for vision-guided systems have emerged along with new techniques that make deployment of these neural networks much more efficient. No matter what you develop today or implement tomorrow, the hardware and I/O reconfigurability and software programmability of Xilinx All Programmable devices can “future-proof” your designs whether it’s to permit the implementation of new algorithms in existing hardware; to interface to new, improved sensing technology; or to add an all-new sensor type (like LIDAR or Time-of-Flight sensors, for example) to improve a vision-based system’s safety and reliability through advanced sensor fusion.
It’s amazing what you can do with a few low-cost video cameras and FPGA-based, high-speed video processing. One example: the Virtual Flying Camera that Xylon has implemented with just four video cameras and a Xilinx Zynq-7000 SoC. This setup gives the driver a flying, 360-degree view of a car and its surroundings. It’s also known as a bird’s-eye view, but in this case the bird can fly around the car.
Many such implementations of this sort of video technology use GPUs for the video processing, but Xylon instead performs the processing in the Zynq SoC’s programmable logic using custom hardware designed with Xylon logicBRICKS IP cores. The custom hardware implemented in the Zynq SoC’s programmable logic enables very fast execution of complex video operations including camera lens-distortion corrections, video frame grabbing, video rotation, perspective changes, as well as the seamless stitching of four processed video streams into a single display output—and all this occurs in real time. This design approach assures the lowest possible video-processing delay at significantly lower power consumption when compared to GPU-based implementations.
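One of those operations, lens-distortion correction, is conceptually simple even though doing it on four megapixel streams in real time is not. Here's a minimal sketch of the single-coefficient radial distortion model (a simplified Brown–Conrady model) and its inverse by fixed-point iteration. This is a generic textbook model, not Xylon's implementation, and the coefficient value in the test is purely illustrative.

```python
def distort(x, y, k1):
    """Apply single-coefficient radial distortion to normalized
    (unit-focal-length) image coordinates."""
    r2 = x * x + y * y
    f = 1.0 + k1 * r2
    return x * f, y * f

def undistort(xd, yd, k1, iters=20):
    """Invert the radial model by fixed-point iteration: repeatedly
    divide the distorted coords by the factor implied by the current
    estimate of the undistorted radius."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        f = 1.0 + k1 * r2
        x, y = xd / f, yd / f
    return x, y
```

A hardware implementation typically precomputes this mapping into a per-pixel lookup/warp table so the correction becomes a memory-access pattern rather than per-frame arithmetic.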
A Xylon logi3D Scalable 3D Graphics Controller soft-IP core—also implemented in the Zynq SoC’s programmable logic—renders a 3D vehicle and the surrounding view on the driver’s information display. The Xylon Surround View system permits real-time 3D image generation even in programmable SoCs without an on-chip GPU, as long as there’s programmable logic available to implement the graphics controller. The current version of the Xylon ADAS Surround View Virtual Flying Camera system runs on the Xylon logiADAK Automotive Driver Assistance Kit that is based on the Xilinx Zynq-7000 All Programmable SoC.
Here’s a 2-minute video of the Xylon Surround View system in action:
If you’re attending the CAR-ELE JAPAN show in Tokyo next week, you can see the Xylon Surround View system operating live in the Xilinx booth.
A high-end surround-view system employing sensor fusion by Xylon
A deep-learning system based on a CNN (Convolutional Neural Network) running on a Zynq UltraScale+ MPSoC
The Zynq UltraScale+ MPSoC and original Zynq SoC offer a unique mix of ARM 32- and 64-bit processors with the heavy-duty processing you get from programmable logic, needed to process and manipulate video and to fuse data from a variety of sensors such as video and still cameras, radar, lidar, and sonar to create maps of the local environment.
If you are developing any sort of sensor-based electronic systems for future automotive products, you might want to come by the Xilinx booth (E35-38) to see what’s already been explored. We’re ready to help you get a jump on your design.
The four video-camera feeds appear choppy in the demo until the FPGA-based acceleration is turned on. At that point, the four video feeds appear on screen in real time with corner-detection annotation added at the full frame rate, thanks to the FPGA-based video processing.
The Xilinx table in Maker’s Alley at Sparkfun AVC 2016
In case you are not familiar with the Sparkfun AVC, it’s an autonomous vehicle competition and this year, there were two classes of autonomous vehicle: Classic and Power Racing. The Classic class vehicle was about the size of an R/C car and raced on an appropriately sized track with hazards including the Discombobulator (a gasoline-powered turntable), a ball pit, hairpin turns, and an optional dirt-track shortcut. The Power Racing class is based on kids’ Power Wheels vehicles, which are sized for young children but in this race were required to carry adults. There were races for both autonomous and human-driven Power Racers.
Here’s a video of one of the Sparkfun AVC Classic races getting off to a particularly rocky start:
Here’s a short video of an Autonomous Power Racing race, getting off to an equally disastrous start:
And here’s a long video of an entire, 30-lap, human-driven Power Racing race:
On Saturday, September 17, you’ll be able to get one of 50 free license vouchers for the Xilinx SDSoC Development Environment, which we’re pre-loading along with Vivado HL on a USB drive so you won’t even need to download the software. (Worth $995!)
At the Xilinx Tent in Maker Alley, part of Sparkfun’s 8th annual Autonomous Vehicle Competition (AVC) in Niwot, Colorado. (That’s between Boulder and Longmont if you don’t know about Google Maps.)
There’s one tiny catch. You need an admission ticket to get in.
How much? Early bird AVC tickets are on sale here for $6. Admission at the door on the day of the AVC is $8. That’s a tiny, tiny price for a full day of entertainment watching autonomous vehicles race against time while fighting robots maul or burn each other to a cinder.
However, there’s a way to knock another buck off the already low, low early bird admission price; there’s the secret discount code: SFEFRIENDS.
See you in Niwot. Wear your asbestos underpants.
For more information about the Sparkfun AVC and the Xilinx SDSoC giveaway, see:
Xilinx will be attending this year’s Sparkfun AVC (Autonomous Vehicle Competition) in Colorado on September 17. Haven’t heard about the Sparkfun AVC? Incredibly, this is its eighth year and there are four different competitions this year:
Classic—A land-based speed course for autonomous vehicles weighing less than 25 pounds. Beware the discombobulator and avoid the ball pit of despair!
PRS—Power racing series based on the battery-powered kiddie ride-‘em toys
A+PRS—The autonomous version of the PRS competition
Robot Combat Arena—Ant-weight and beetle-weight robots fight to the death. Note: Fire is allowed. "We like fire."
Sparkfun’s AVC is taking place in the Sparkfun parking lot. Sparkfun is located in beautiful Niwot, Colorado. Where’s that? On the Diagonal halfway between Boulder and Longmont, of course.
Haven’t heard of Sparkfun? They’re an online electronics retailer at the epicenter of the maker movement. Sparkfun’s Web site is chock full of tutorials and just-plain-weird videos for all experience levels from beginner to engineer. I’m a regular viewer of the company’s Friday new-product videos. Also a long-time customer.
Xilinx will be exhibiting an embedded-vision demo in Maker’s Alley tent at AVC this year because Xilinx All Programmable devices like the Zynq-7000 SoC and Zynq UltraScale+ MPSoC give you a real competitive advantage when developing a quick, responsive autonomous vehicle.
If you are entering this year’s AVC and are using Xilinx All Programmable devices in your vehicle, please let me know in the comments below or come to see us in the tent at the event. We want to help make you famous for your effort!
Here’s an AVC video from Sparkfun to give you a preview of the AVC:
Xylon has introduced logiADAK 3.2, the latest version of the company’s ADAS toolset for the Xilinx Zynq-7000 SoC. The new release adds a toolset for driver-drowsiness detection based on facial movements monitored through a camera placed in the vehicle cabin, along with significantly expanded and improved forward-camera collision-avoidance ADAS based on detection and recognition of vehicles, pedestrians, and bikes. The current logiADAK kit includes around ten different ADAS applications, ranging from design frameworks to complete, production-ready solutions that help you create highly differentiated driver-assistance applications.
If you’re not yet familiar with Xylon’s logiADAK toolkit, here’s Xilinx’s Aaron Behman with a quick, 90-second video demo shot at the recent Embedded Vision Summit:
If you look at what’s happening with Moore’s Law (just read any article about the topic during the last two years), you see that systems design is being forced to make use of All Programmable devices at an increasing rate because of the enormous NRE costs associated with roll-your-own ASICs at 16nm, 10nm, and below. Companies still need the differentiation afforded by custom hardware to boost product margins in their competitive, global marketplaces, but they need to get it in a different way.
Nowhere is that more true than in the six Megatrends that Xilinx has identified:
Truthfully, I didn’t write that headline. It’s the title of yesterday’s Frost & Sullivan press release awarding Xilinx the 2016 North American Frost & Sullivan Award for Product Leadership, based on the consulting firm’s recent analysis of the automotive programmable logic devices market for advanced driver assistance systems (ADAS). The press release continues: “Xilinx is uniquely positioned to cater to current and future market needs.”
To date, you’ve seen very little in the Xcell Daily blog about Xilinx and ADAS systems, not because Xilinx isn’t working closely with automotive Tier 1 suppliers and OEMs on ADAS systems but because those companies really have not wanted any publicity about that highly competitive work and so I could not write about the many, many design wins. In reality, more than 20 of these automotive suppliers and OEMs have been working with Xilinx on ADAS designs over the last few years.
The subhead of the Frost & Sullivan press release captures the reality of this effort:
”Superior product value has made Xilinx’s devices the preferred choice for current and evolving ADAS modules among global OEMs.”
And, since I’m already quoting from this Frost & Sullivan press release, let me add this quote:
“The company has strong technical capabilities and a successful track record in multiple sensor applications that include radar, light detection and ranging (LIDAR), and camera systems, all of which give it an edge over competing system on chip (SoC) suppliers,” said Frost & Sullivan Industry Analyst Arunprasad Nandakumar. “Xilinx’s Zynq UltraScale+ multiprocessor SoC (MPSoC) scores high on scalability, modularity, reliability, and quality.”
“Xilinx adheres to self-defined standards that exceed industry requirements. Its FPGAs and PLDs are far ahead of the baseline defined by AEC-Q100, which is the standard stress test qualification requirement for electronic components used in automotive applications. In fact, Xilinx has introduced its own Beyond AEC-Q100 testing that characterizes its robust XA family of products.”
And this final quote sums it up:
“In recognition of its strong product portfolio, which is aligned perfectly with the vision of automated driving, Xilinx receives the 2016 North American Frost & Sullivan Product Leadership Award. Each year, this award is presented to the company that has developed a product with innovative features and functionality, gaining rapid acceptance in the market. The award recognizes the quality of the solution and the customer value enhancements it enables.
“Frost & Sullivan’s Best Practices Awards recognize companies in a variety of regional and global markets for outstanding achievement in areas such as leadership, technological innovation, customer service, and product development. Industry analysts compare market participants and measure performance through in-depth interviews, analysis, and extensive secondary research.”
Would you like to see the results of those in-depth interviews, analysis, and extensive secondary research? Thought you might.
There’s a companion 12-page Frost & Sullivan research paper attached to this blog. Just click below.
I’ve written previously about Apertus, the Belgian company behind the AXIOM open-source 4K cinema camera effort. (See below.) I met with two of the Apertus principals, Sebastian Pichelhofer and Herbert Pötzl, at last month’s Embedded World 2016 in Nuremberg. They carry the coolest business cards I’ve seen in a long, long time:
Pichelhofer and Pötzl were making the rounds at the Embedded World show to talk about their 3rd-generation AXIOM camera, the Gamma. This is the big, modular, pro-level 4K cinema camera that leverages the knowledge gained in the design of the AXIOM Alpha and Beta cameras. Like the earlier cameras, the AXIOM Gamma is based on a CMOSIS imager and a Xilinx Zynq-7000 SoC (a Z-7030). The AXIOM Beta is based on an Avnet MicroZed SOM with a Zynq Z-7020 SoC.
Here’s a closeup photo of the AXIOM Beta’s Image Sensor Module:
AXIOM Beta 4K Cinema Camera Image Sensor Module
And here’s a photo of the back of the AXIOM Beta Image Sensor Module showing the Zynq-based Avnet MicroZed board that’s currently being used:
Back Side of AXIOM Beta 4K Cinema Camera Image Sensor Module showing Avnet MicroZed SOM
The AXIOM Beta is currently operational and the gents from Apertus directed me to the Antmicro booth at the show to see a working model. Here’s a photo from the Antmicro booth:
A working AXIOM Beta 4K camera in the Antmicro booth
Antmicro, located in Poland, is a partner working with Apertus on the AXIOM camera. Although I didn’t see it at Embedded World, here’s a photo of the AXIOM Gamma Image Sensor Module prototype from the Antmicro Web site:
AXIOM Gamma 4K Cinema Camera Image Sensor Module
While at the Antmicro booth, I met team leader Karol Gugala, who impressed me with his knowledge of the Zynq-7000 SoC. He’s already developed several Zynq-based projects including a distance-measuring system for an autonomous mining vehicle based on stereo video imagers. Here’s a photo of that project taken at the Antmicro booth:
Although we spoke for only 10 minutes or so, I was really impressed with Gugala’s knowledge and his considerable experience with the Zynq-7000 SoC. I immediately dubbed him “King of Zynq,” in my mind at least. Antmicro is currently working with Apertus on the AXIOM Gamma design and I can hardly wait to see what this international team produces.
Earlier Xcell Daily blog posts about the AXIOM 4K cinema cameras:
Xylon’s logiADAK Automotive Driver Assistance Kit and logiRECORDER Multi-Channel Video Recording ADAS Kit provide you with a number of essential building blocks needed to develop your own vision-based ADAS (advanced driver assistance systems) based on the Xilinx Zynq SoC for a wide range of vehicle designs. The logiADAK kit comes with a full set of DA demo applications, customizable reference SoC designs, software drivers, libraries, and documentation. The logiRECORDER kit includes hardware and software necessary for synchronous video recording of up to six uncompressed video streams from Xylon video cameras.
Xylon has just published a short video showing these kits in action:
The CAR-ELE show for automotive OEMs and Tier 1 suppliers kicked off at Tokyo Big Sight in Japan today and I received this image of an RC car equipped with five video cameras and a Zynq SoC from Naohiro Jinbo at the Xilinx booth:
The image shows a transparent-bodied RC car equipped with the five video cameras facing off against four pedestrians and two other vehicles towards the bottom of the image. You can also see two screen pairs at the top of the booth. The left screen in the rightmost screen pair shows a bird’s-eye view around the RC car. That image is a real-time fusion of the five video streams from the cameras on the RC car. The other screen in the rightmost pair shows real-time object detection in action. Pedestrians are highlighted in bounding boxes. Both screens are generated live by the car’s on-board Zynq SoC and both of these demos rely on the programmable logic in the Zynq SoC to perform the heavy lifting required by the real-time video processing.
This 5-Camera ADAS Development Platform demo is being presented by Xylon, eVS (embedded Vision Systems), and DDC (Digital Design Corp). The demo is based on Xylon’s logiADAK Driver Assistance Kit version 3.1, which extends the functionality of the company’s logiADAK platform to include efficient multi-object classification, encompassing vehicle and cyclist detection in addition to pedestrian detection.
"Tokyo Big Sight at Night" by Masato Ohta from Tokyo, Japan. - Flickr. Licensed under CC BY 2.0 via Commons
If news of last week’s ADAS-fest at CES in Las Vegas has piqued your interest in self-driving and assisted-driving technology, you can get up close and personal with that technology by attending this week’s CAR-ELE in Tokyo. Xilinx and its partners will be demonstrating several operational ADAS technologies based on the new Xilinx Zynq UltraScale+ MPSoC and the battle-tested Zynq-7000 SoC in the Xilinx booth (W8-54).
Among the demos: a 5-Camera ADAS Development Platform presented by Xylon, eVS (embedded Vision Systems), and DDC (Digital Design Corp). The 5-camera demo is based on Xylon’s logiADAK Driver Assistance Kit version 3.1 for the Xilinx Zynq-7000 SoC. Xylon’s logiADAK 3.1 extends the functionality of the company’s logiADAK platform to include efficient multi-object classification, encompassing vehicle and cyclist detection in addition to pedestrian detection. The logiADAK kit includes everything you need to install a system on your own vehicle including five sealed megapixel cameras.
In the Xilinx booth at the CAR-ELE show, you’ll see a logiADAK 3.1 platform mounted on a remote-control car that you can drive in a “parking lot” installed in the booth.
In the race to develop self-driving cars, ADAS (Advanced Driver Assistance Systems) designs need to account for the human driver’s condition for situations when the human might ask or be required to take over the driving. Xylon has just introduced a new ADAS IP core designed to detect drowsiness and distraction from drivers’ facial movements. The logiDROWSINE Driver Drowsiness Detector IP can be integrated into the Xilinx Zynq SoC to monitor facial movements as imaged by a video camera in the vehicle’s cabin. The logiDROWSINE IP core monitors the driver’s eyes, gaze, eyebrows, lips, and head, and it continuously tracks facial features that can indicate microsleep. It also looks for yawns and other indications of sleepiness. In all, the logiDROWSINE IP core recognizes seven levels of drowsiness. When the IP determines that the driver appears drowsy, it alerts the associated ADAS system so that proper steps can be taken. Such steps might include an audible alert or a vibrating seat.
The logiDROWSINE IP is split between the Zynq SoC’s programmable hardware and software that runs on one of the Zynq SoC’s two ARM Cortex-A9 MPCore processors. The complete driver drowsiness SoC design includes the logiDROWSINE IP core, the logiFDT face-detection and -tracking IP core, and other IP cores. All of this fits into the smallest Xilinx Zynq SoC—the Z-7010. It is prepackaged for the Xilinx Vivado Design Suite and IP deliverables include the software driver, documentation and technical support.
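Xylon doesn't disclose logiDROWSINE's internal algorithm, but a common published technique for camera-based microsleep detection is the eye aspect ratio (EAR): the ratio of the eye's vertical landmark distances to its horizontal width, which collapses toward zero when the eyelid closes. Here's a minimal sketch of that general idea; the landmark ordering, threshold, and frame count below are illustrative assumptions, not Xylon's implementation.

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(p):
    """EAR from six eye landmarks [p1..p6], where p1/p4 are the
    horizontal eye corners and p2,p3 / p6,p5 are upper/lower lid points:
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2.0 * dist(p[0], p[3]))

def microsleep(ear_series, threshold=0.2, min_frames=15):
    """Flag a microsleep if the EAR stays below threshold for at least
    min_frames consecutive video frames (a blink is much shorter)."""
    run = 0
    for ear in ear_series:
        run = run + 1 if ear < threshold else 0
        if run >= min_frames:
            return True
    return False
```

The distinction between a blink and a microsleep is duration, which is why the sketch counts consecutive below-threshold frames rather than reacting to a single closed-eye frame.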
Here’s a short video demo of the logiDROWSINE IP core in action:
There are a lot of awards in our industry and I do not normally blog about them. However, I do make exceptions and the annual Thomson Reuters Top 100 Global Innovators award is one of those exceptions. For the fourth year in a row, Thomson Reuters has named Xilinx in its Top 100 Global Innovators report. Xilinx innovations are directly aimed at helping customers integrate the highest levels of software-based intelligence with hardware optimization and any-to-any connectivity in all applications including those associated with six key Megatrends (5G Wireless, SDN/NFV, Video/Vision, ADAS, Industrial IoT, and Cloud Computing) shaping the world’s industries today.
According to SVP David Brown, Thomson Reuters uses a scientific approach to analyzing metrics including patent volume, application-to-grant success, globalization and citation influence. Consequently, this award is based on objective criteria and is not a popularity contest, which is why I consider it bloggable. That, and Xilinx’s presence on the Top 100 list this year, and in 2012, 2013, and 2014. (Note: The top 100 innovators are not ranked. You’re either on the list—or you’re not. Xilinx is.)
Brown writes in a preface to the report:
“…we’ve developed an objective formula that identifies the companies around the world that are discovering new inventions, protecting them from infringers and commercializing them. This is what we call the “Lifecycle of Innovation:” discovery, protection and commercialization. Our philosophy is that a great idea absent patent protection and commercialization is nothing more than a great idea.”
“…for five consecutive years the Thomson Reuters Top 100 companies have consistently outperformed other indices in terms of revenue and R&D spend. This year, our Top 100 innovators outperform the MSCI World Index in revenue by 6.01 percentage points and in employment by 4.09 percentage points. We also outperform the MSCI World Index in market-cap-weighted R&D spend by 1.86 percentage points. The conclusion: investment in R&D and innovation results in higher revenue and company success.”
Here’s a video showing Thomson Reuters Senior IP Analyst Bob Stembridge describing the methodology for determining the world’s most innovative companies for this report:
For more information about this fascinating study and report, use the link above and download the report PDF.
FPGA usage has evolved from its early use as glue logic, as reflected in the six Megatrends now making significant use of Xilinx All Programmable devices: 5G Wireless, SDN/NFV, Video/Vision, ADAS, Industrial IoT, and Cloud Computing. Today, you’re just as likely to use one Xilinx All Programmable device to implement a single-chip system because that’s the fastest way to get from concept to working, production systems. Consequently, system-level testing of Xilinx devices has similarly evolved to track these more advanced uses for the company’s products.
If you’d like more information about this new level of testing, a good place to look is page 11 of the just-published 2015 Annual Quality Report from Xilinx. (You just might want to take a look at all of the report’s pages while you’re at it.)
Normally, I would never steer you towards a press-announcement video but I’ve got one that you’re going to want to watch. At the end of this blog you’ll find a 38-minute video of last week’s press announcement, made in conjunction with newly announced partners Xilinx and Mellanox, unveiling Qualcomm’s 64-bit, ARM-based, many-core Server SoC. (See last week’s “Qualcomm and Xilinx Collaborate to Deliver Industry-Leading Heterogeneous Computing Solutions for Data Centers” for details.) The video includes a demo of Qualcomm’s working Server Development Platform.
Six researchers at ETH Zurich have developed a 1kg, autonomous hex-copter they’ve named the AscTec Firefly that uses four stereo-pair cameras to create 3D disparity maps of its surroundings to sense and avoid obstacles in real time. That’s a very useful skill for an autonomous vehicle designed to navigate around people or through a forest, for example. Rather than rely on ultrasound ranging systems or time-of-flight imagers, the AscTec Firefly relies on four stereo camera pairs equipped with ultra-wide-angle lenses. The stereo vision permits the creation of a 3D map of the copter’s surroundings.
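The core of any such stereo pipeline is disparity estimation: for each pixel in one camera's image, find how far the matching pixel has shifted in the other camera's image, then convert that shift into depth via the camera geometry. Here's a minimal pure-Python sum-of-absolute-differences (SAD) block-matching sketch of the idea; the ETH team's actual implementation is not shown in the source, and the window size, disparity range, and camera parameters below are illustrative assumptions.

```python
def disparity_sad(left, right, x, y, win=2, max_d=5):
    """Disparity at (x, y): the horizontal shift d that minimizes the
    sum of absolute differences between a (2*win+1)^2 window in the
    left image and the same window shifted left by d in the right image."""
    best_d, best_cost = 0, float("inf")
    for d in range(max_d):
        cost = 0
        for dy in range(-win, win + 1):
            for dx in range(-win, win + 1):
                cost += abs(left[y + dy][x + dx] - right[y + dy][x + dx - d])
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo geometry: depth z = f * B / d."""
    return focal_px * baseline_m / disparity_px
```

Doing this for every pixel, at frame rate, across four camera pairs is exactly the brute-force, embarrassingly parallel workload that programmable logic handles well.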
By Mike Santarini, Publisher, Xcell Journal, Xilinx
Xilinx has a rich history in the automotive market and, over the last four years since the 2011 commercial launch of the Zynq-7000 All Programmable SoC, the company has quickly become the platform provider of choice in the burgeoning market for advanced driver assistance systems (ADAS). Companies such as Mercedes-Benz, BMW, Nissan, VW, Honda, Ford, Chrysler, Toyota, Mazda, Acura, Subaru and Audi are among the many OEMs that have placed Xilinx FPGAs and Zynq SoCs at the heart of their most advanced ADAS systems. And with the new Zynq UltraScale+ MPSoCs, Xilinx is sure to play a leadership role in the next phases of automotive electronic innovation: autonomous driving, vehicle-to-vehicle communication and vehicle-to-infrastructure communication.
By Thomas Gage and Jonathan Morris, Marconi Pacific
ADAS makes safety and marketing sense. Whether it is Daimler, Toyota, Ford, Nissan, GM, another vehicle OEM or even Google, none are going to put vehicles on the road that can steer, brake or accelerate autonomously without having confidence that the technology will work. ADAS promises to first reduce accidents and assist drivers as a “copilot” before eventually taking over for them on some and eventually their entire journey as an “autopilot.”
As for how quickly the impacts of this technology will be felt, the adoption curves for any new technology look very similar to one another. For example, the first commercial mobile-phone network went live in the United States in 1983 in the Baltimore-Washington metropolitan area. At the time, phones cost about $3,000 and subscribers were scarce. Even several years later, coverage was unavailable in most of the country outside of dense urban areas. Today there are more mobile-phone subscriptions than there are people in the United States, and more than 300,000 mobile-phone towers connect the entire country. Low-end smartphones cost about $150. Vehicle technology is moving forward at a similar pace.
Six important emerging markets—video/vision, ADAS/autonomous vehicles, Industrial Internet of Things, 5G wireless, SDN/NFV and cloud computing—will soon merge into an omni-interconnected network of networks that will have a far-reaching impact on the world we live in. This convergence of intelligent systems will enrich our lives with smart products that are manufactured in smart factories and driven to us safely in smart vehicles on the streets of smart cities—all interconnected by smart wired and wireless networks deploying services from the cloud.
Xilinx Inc.’s varied and brilliant customer base is leveraging Xilinx All Programmable devices and software-defined solutions to make these new markets and their convergence a reality. Let’s examine each of these emerging markets and take a look at how they are coming together to enrich our world. Then we’ll take a closer look at how customers are leveraging Xilinx devices and software-defined solutions to create smarter, connected and differentiated systems in these emerging markets to shape a brilliant future for us all.
IT STARTS WITH VISION: Vision systems are everywhere in today’s society. You can find cameras with video capabilities in an ever-growing number of electronic systems, from the cheapest mobile phones to the most advanced surgical robots to military and commercial drones and unmanned spacecraft exploring the universe. In concert, the supporting communications and storage infrastructure is quickly shifting gears from a focus on moving voice and data to an obsession with fast video transfer.
ADAS’ DRIVE TO AUTONOMOUS VEHICLES: If you own or have ridden in an automobile built in the last decade, chances are you have already experienced the value of ADAS technology. Indeed, perhaps some of you wouldn’t be here to read this article if ADAS hadn’t advanced so rapidly. The aim of ADAS is to make drivers more aware of their surroundings and thus better, safer drivers.
IIOT’S EVOLUTION TO THE FOURTH INDUSTRIAL REVOLUTION: The term Internet of Things has received much hype and sensationalism over the last 20 years—so much so that to many, “IoT” conjures up images of a smart refrigerator that notifies you when your milk supply is getting low and the wearable device that receives the “low-milk” notification from your fridge while also fielding texts, tracking your heart rate and telling time. These are all nice-to-have, convenience technologies. But to a growing number of people, IoT means a great deal more. In the last couple of years, the industry has divided IoT into two segments: consumer IoT for convenience technologies (such as nifty wearables and smart refrigerators), and Industrial IoT (IIoT), a burgeoning market opportunity addressing and enabling some truly major, substantive advances in society.
INTERCONNECTING EVERYTHING TO EVERYTHING ELSE: In response to the need for better, more economical network topologies that can efficiently and affordably address the explosion of data-based services required for online commerce and entertainment as well as the many emerging IIoT applications, the communications industry is rallying behind two related network topologies: software-defined networks and network function virtualization.
SECURITY EVERYWHERE: As systems from all of these emerging smart markets converge and become massively interconnected and their functionality becomes intertwined, there will be more entry points for nefarious individuals to do a greater amount of harm, affecting more infrastructure and more people. The many companies actively participating in bringing these converging smart technologies to market realize the seriousness of ensuring that all access points in their products are secure. A smart nuclear reactor that can be accessed by a backdoor hack of a $100 consumer IoT device is a major concern. Thus, security at all points in the converging network will become a top priority, even for systems that seemingly didn’t require security in the past.
XILINX PRIMED TO ENABLE CUSTOMER INNOVATION: Over the course of the last 30 years, Xilinx’s customers have become the leaders and key innovators in all of these markets. While Xilinx has played a growing role in each generation of the vision/video, ADAS, industrial, and wired and wireless communications segments, today its customers are placing Xilinx All Programmable FPGAs, SoCs and 3D ICs at the core of the smarter technologies they are developing in these emerging segments.
Note: This blog post has been excerpted from Mike Santarini’s far more detailed article in the special Megatrends issue of Xcell Journal (Issue 92) that has just been published. To read the full article, click here or download a PDF of the entire issue by clicking here.
The new special issue of Xcell Journal celebrates the ways in which Xilinx customers are enabling a new era of innovation in six key emerging markets: vision/video, ADAS/autonomous vehicles, Industrial IoT, 5G, SDN/NFV and cloud computing. Each of these segments is bringing truly radical new products to our society. And as the technologies advance over the next few years, the six sectors will converge into a network of networks that will bring about substantive changes in how we live our lives daily.
Vision systems are quickly becoming ubiquitous, having long since evolved beyond their initial niches in security, digital cameras and mobile devices. Likewise undergoing rapid and remarkable growth are advanced driver assistance systems (ADAS), which are getting smarter and expanding to enable vehicle-to-vehicle communications (V2V) for autonomous driving and vehicle-to-infrastructure (V2I) communications that will sync vehicles with smart transportation infrastructure to coordinate traffic for an optimal flow through freeways and cities.
These smart vision systems, ADAS and infrastructure technologies form the fundamental building blocks for emerging Industrial Internet of Things (IIoT) markets like smart factories, smart grids and smart cities—all of which will require an enormous amount of wired and wireless network horsepower to function. Cloud computing, 5G wireless and the twin technologies of software-defined networking (SDN) and network function virtualization (NFV) will supply much of this horsepower.
Converged, these emerging technologies will be much greater than the sum of their individual parts. Their merger will ultimately enable smart cities and smart grids, more productive and more profitable smart factories, and safer travel with autonomous driving.
Note: This blog post has been excerpted from the full article in the new Xcell Journal, Issue 92. To read the full article, click here or download a PDF of the entire issue by clicking here.
If you visited Xilinx.com today, you will have noticed a very different representation of Xilinx. The Web site change represents Xilinx’s latest step forward in an ongoing corporate transformation into a new era of offerings. The change also brings focus on six key “Megatrends” that are changing the world we live in: vision/video, ADAS/autonomous vehicles, the Industrial IoT, 5G wireless, SDN/NFV, and cloud computing.
Xilinx participates in all of these Megatrends, and you’ll find a substantial amount of new material about them on the redesigned Xilinx.com Web site. You’ll also discover a significant amount of new information about the design and development solutions that are uniquely Xilinx: the company’s All Programmable (hardware, software, and I/O programmability) device technology (FPGAs, SoCs, and MPSoCs) combined with industry-standard and unique software tools in the growing SDx family of development environments, which support rapid, high-level development using Xilinx devices.
You will also discover extensive and intensely interesting coverage of these Megatrends in the latest, just-published edition of Xcell Journal. Click here to read the new edition of Xcell Journal online or here to download the PDF.
Note: If you usually access the Xcell Daily blog using the link on the Xilinx.com home page, it has moved. You’ll now find it under the “About” drop-down tab at the top of every Web page on Xilinx.com. So no matter where you are on the site, Xcell Daily is just a couple of clicks away.
The Apical Spirit engine can create virtualized digital representations of important features in video frames at 30fps from 1080p60 HD video using as many as sixteen classifier models with an unlimited number of objects detected per classifier model. Minimum object size within the video frame is a relatively small 60x60 pixels. The only way to achieve this incredible detection rate is to use multipliers—a lot of multipliers. According to Apical’s VP of Product Applications Judd Heape, the Spirit engine uses 600 of the 900 multipliers in the programmable logic section of a Xilinx Zynq Z-7045 SoC running at 300MHz to operate in real time at the above video and detection frame rates. The design can scale to use more multipliers if more performance is required.
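Heape’s numbers imply substantial raw arithmetic throughput. Here’s a quick back-of-envelope estimate; the one-multiply-accumulate-per-multiplier-per-cycle figure is an illustrative assumption, not a published Apical specification:

```python
# Rough peak-throughput estimate for the Spirit engine's multiplier array.
# Assumes one multiply-accumulate (MAC) per multiplier per clock cycle,
# which is an illustrative assumption, not an Apical spec.
multipliers = 600      # DSP multipliers used, per Apical's Judd Heape
clock_hz = 300e6       # 300MHz programmable-logic clock

peak_gmacs = multipliers * clock_hz / 1e9  # giga-MACs per second

print(f"Peak throughput: {peak_gmacs:.0f} GMAC/s")  # prints 180 GMAC/s
```

That kind of sustained parallel arithmetic is exactly what the DSP-rich programmable logic in a Zynq Z-7045 SoC is built for.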
By comparison, a GPU is 30x slower and consumes 10x the power according to Heape. “This is only possible in an FPGA,” he says. No other off-the-shelf part can handle the computation load.
The typical Embedded Vision system must process video frames, extract features from those processed frames, and then make decisions based on the extracted features. Pixel-level tasks can require hundreds of operations per pixel, which translates to hundreds of GOPS (giga operations/sec) when you’re talking about HD or 4K2K video. Contrast that with frame-based tasks, which “only” require millions of operations per second but involve more complex algorithms. You need a hardware implementation for the pixel-level tasks, while fast processors can handle the more complex frame-based tasks. That is how Mario Bergeron, a Technical Marketing Engineer from Avnet, launched into his presentation at last week’s Embedded Vision Summit 2015 in Santa Clara, California.
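The pixel-rate arithmetic behind Bergeron’s point is easy to check. A short sketch (the 300 ops/pixel figure is illustrative, not taken from the presentation):

```python
# Back-of-envelope estimate of the pixel-level compute load for video.
# The 300 operations/pixel figure is an illustrative assumption.
def pixel_gops(width, height, fps, ops_per_pixel):
    """Giga-operations per second for a per-pixel processing stage."""
    return width * height * fps * ops_per_pixel / 1e9

hd  = pixel_gops(1920, 1080, 60, 300)   # 1080p60 HD video
uhd = pixel_gops(3840, 2160, 60, 300)   # 4K2K video at 60 fps

print(f"1080p60: {hd:.1f} GOPS, 4K2K60: {uhd:.1f} GOPS")
```

At 4K2K the per-pixel stage alone lands in the hundreds-of-GOPS range, far beyond what a general-purpose processor can sustain, which is why the pixel-level tasks go into hardware.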
Among the many demos at this week’s Embedded Vision Summit held at the Santa Clara Convention Center was a demonstration of a Zynq-based development workflow using MathWorks’ Simulink and HDL Coder to create a fully operational, real-time pedestrian detector based on the HOG (Histogram of Oriented Gradients) algorithm. The model for this application was developed entirely in MathWorks’ Simulink and the company’s HDL Coder generated the HDL code for implementing the HOG algorithm’s SVM (support vector machine) classifier in the programmable logic section of a Xilinx Zynq SoC. The Xilinx Vivado Design Suite converted the HDL into a hardware implementation for the Zynq SoC.
This design takes real-time HD video, processes the video in the Zynq SoC’s programmable-logic implementation of the SVM classifier, and passes the results back to the Zynq SoC’s dual-core ARM Cortex-A9 MPCore processor, which annotates the video stream and then outputs the result.
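For readers unfamiliar with HOG, here is a minimal, pure-Python sketch of the descriptor’s core step: binning gradient orientations into a histogram for one cell. It is a deliberate simplification of the full algorithm in the demo (no block normalization, no vote interpolation) and is not MathWorks’ generated code:

```python
import math

def cell_histogram(img, x0, y0, cell=8, bins=9):
    """9-bin orientation histogram for one HOG cell (unsigned gradients).

    img is a 2D list of grayscale values; (x0, y0) is the cell's top-left
    corner. Simplified sketch: real HOG implementations also apply block
    normalization and bilinear vote interpolation.
    """
    hist = [0.0] * bins
    for y in range(y0, y0 + cell):
        for x in range(x0, x0 + cell):
            # Central-difference gradients, clamped at the image border.
            gx = img[y][min(x + 1, len(img[0]) - 1)] - img[y][max(x - 1, 0)]
            gy = img[min(y + 1, len(img) - 1)][x] - img[max(y - 1, 0)][x]
            mag = math.hypot(gx, gy)
            # Unsigned orientation folded into [0, 180) degrees.
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang // (180.0 / bins)) % bins] += mag
    return hist

# Toy 8x8 image with a vertical edge: all gradient energy lands in bin 0.
img = [[0] * 4 + [255] * 4 for _ in range(8)]
print(cell_histogram(img, 0, 0))
```

The resulting per-cell histograms are concatenated into a feature vector that the SVM classifier scores, and it’s that classifier stage that the demo accelerates in the Zynq SoC’s programmable logic.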
Here’s a video of the demo, presented by MathWorks’ Principal Development Engineer Steve Kuznicki at the Embedded Vision Summit:
This week at Embedded World in Nuremberg, Xylon is showing the latest version of its Zynq-based Automotive Driver Assistance Kit, the logiADAK Version 3.0, which can combine live video streams from four video cameras to produce a smooth, 360-degree, real-time surround view with 3D and bird’s-eye viewing modes. The system stitches video streams from four cameras to create the surround view and can accommodate a fifth camera dedicated to pedestrian detection or in-cabin face detection and tracking.
The surround view can be used for a variety of automotive applications including lane-departure and blind-spot detection. These applications require intensive real-time video processing, parallel execution of multiple complex algorithms, and flexible interfacing with sensors and the vehicle’s communication backbones. The Xilinx Zynq SoC provides the needed processing, programmable hardware, and I/O capabilities.
Stitching a seamless 360-degree, surround view from four independent video streams requires system calibration and the logiADAK Version 3.0 kit includes a logiOWL Vehicle Self Calibration application that accomplishes multi-camera calibration in as little as 10 seconds using four calibration targets set on the pavement near the vehicle’s four corners:
The resulting bird’s-eye, surround-view image looks like this:
IP for face tracking would seem to have broader application than you might think at – ahem – face value. Indeed, Xylon’s logiREF-FACE-TRACK-EVK Face Detection and Tracking IP block has been the most popular IP on the Design & Reuse Web site for the last three weeks. Here’s a demo of the IP optimized for and running on a Zynq SoC:
What can you use this IP for? Xylon’s logiREF-FACE-TRACK-EVK Face Detection and Tracking Web page lists:
Driver drowsiness detection in automotive safety systems
Speaker detection in video conferencing systems
Hands-free interfaces helping disabled people to improve their daily lives
Character animations in virtual reality entertainment and gaming
There’s a logiREF-FACE-TRACK-EVK reference design that allows you to quickly evaluate and experiment with Xylon's face detection and tracking solution on the MicroZed Embedded Vision Development Kit from Avnet Electronics Marketing. This free and pre-verified design includes evaluation logicBRICKS IP cores and hardware design files prepared for the Xilinx Vivado Design Suite.
As it happens, face tracking popped into the news today in an unrelated story about the New Nintendo 3DS XL handheld gaming system that launches early next month. The VentureBeat article “New Nintendo 3DS XL impressions: Face-tracking fixes the handheld’s biggest problem” says “It may have taken four years, but Nintendo looks to have finally fixed the biggest problem with the 3DS: the 3D… One of the biggest changes to the 3DS system is almost invisible, but its inclusion makes a notable difference when it comes to gameplay. A secondary camera was added to the system to enable face-tracking. This enables the New 3DS to adjust the top screen’s image to the player’s viewing angle by tracking the player’s eyes, dynamically adjusting the 3D sweet spot on the fly. Internal hardware does the tracking, which means that even legacy games benefit from this.”
Note: I do not mean to imply here that the face tracking built into the New Nintendo 3DS XL handheld gaming system is based on Xylon IP or on Xilinx All Programmable devices. I’m merely illustrating yet another interesting application for face-tracking technology in an end product that’s going to cost about $200 in the hope that this example might give you some ideas for using face tracking in your own system designs.