
 

I did not go to Embedded World in Nuremberg this week, but SemiWiki’s Bernard Murphy was there and he has published his observations about three reference designs he saw running in Aldec’s booth on the company’s Zynq-based TySOM embedded development and prototyping boards.

 

 

Aldec TySOM-2 Prototyping Board.jpg

 

Aldec TySOM-2 Embedded Prototyping Board

 

 

 

Murphy published this article titled “Aldec Swings for the Fences” on SemiWiki and wrote:

 

 

“At the show, Aldec provided insight into using the solution to model the ARM core running in QEMU, together with a MIPI CSI-2 solution running in the FPGA. But Aldec didn’t stop there. They also showed off three reference designs designed using this flow and built on their TySOM boards.

 

“The first reference design targets multi-camera surround view for ADAS (automotive – advanced driver assistance systems). Camera inputs come from four First Sensor Blue Eagle systems, which must be processed simultaneously in real-time. A lot of this is handled in software running on the Zynq ARM cores but the computationally-intensive work, including edge detection, colorspace conversion and frame-merging, is handled in the FPGA. ADAS is one of the hottest areas in the market and likely to get hotter since Intel just acquired Mobileye.

 

“The next reference design targets IoT gateways – also hot. Cloud interface, through protocols like MQTT, is handled by the processors. The gateway supports connection to edge devices using wireless and wired protocols including Bluetooth, ZigBee, Wi-Fi and USB.

 

“Face detection for building security, device access and identifying evil-doers is also growing fast. The third reference design is targeted at this application, using similar capabilities to those on the ADAS board, but here managing real-time streaming video as 1280x720 at 30 frames per second, from an HDR-CMOS image sensor.”

 

The article contains a photo of the Aldec TySOM-2 Embedded Prototyping Board, which is based on a Xilinx Zynq Z-7045 SoC. According to Murphy, Aldec developed the reference designs using its own and other design tools including the Aldec Riviera-PRO simulator and QEMU. (For more information about the Zynq-specific QEMU processor emulator, see “The Xilinx version of QEMU handles ARM Cortex-A53, Cortex-R5, Cortex-A9, and MicroBlaze.”)

 

Then Murphy wrote this:

 

“So yes, Aldec put together a solution combining their simulator with QEMU emulation and perhaps that wouldn’t justify a technical paper in DVCon. But business-wise they look like they are starting on a much bigger path. They’re enabling FPGA-based system prototype and build in some of the hottest areas in systems today and they make these solutions affordable for design teams with much more constrained budgets than are available to the leaders in these fields.”

 

 

EETimes’ Junko Yoshida with some expert help analyzes this week’s Xilinx reVISION announcement

by Xilinx Employee ‎03-15-2017 01:25 PM - edited ‎03-22-2017 07:20 AM (585 Views)

 

This week, EETimes’ Junko Yoshida published an article titled “Xilinx AI Engine Steers New Course” that gathers some comments from industry experts and from Xilinx with respect to Monday’s reVISION stack announcement. To recap, the Xilinx reVISION stack is a comprehensive suite of industry-standard resources for developing advanced embedded-vision systems based on machine learning and machine inference.

 

(See “Xilinx reVISION stack pushes machine learning for vision-guided applications all the way to the edge.”)

 

As Xilinx Senior Vice President of Corporate Strategy Steve Glaser tells Yoshida, Xilinx designed the stack to “enable a much broader set of software and systems engineers, with little or no hardware design expertise, to develop intelligent vision-guided systems easier and faster.”

 

Yoshida continues:

 

While talking to customers who have already begun developing machine-learning technologies, Xilinx identified ‘8 bit and below fixed point precision’ as the key to significantly improve efficiency in machine-learning inference systems.
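To make that “8 bits and below” point a bit more concrete, here’s a minimal sketch of the symmetric linear quantization commonly used to convert trained floating-point weights to int8 for inference. This is a generic illustration of the technique, not code from the reVISION tools; the function name and the scale choice are mine.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Symmetric linear quantization of trained float weights to int8.
// The scale maps the largest-magnitude weight onto the int8 range [-127, 127].
std::vector<int8_t> quantize_int8(const std::vector<float>& w, float& scale) {
    float max_abs = 0.0f;
    for (float v : w) max_abs = std::max(max_abs, std::fabs(v));
    scale = (max_abs > 0.0f) ? max_abs / 127.0f : 1.0f;

    std::vector<int8_t> q(w.size());
    for (std::size_t i = 0; i < w.size(); ++i) {
        int v = static_cast<int>(std::lround(w[i] / scale));
        q[i] = static_cast<int8_t>(std::min(127, std::max(-127, v)));
    }
    return q;   // approximate the original weight later as q[i] * scale
}
```

The efficiency argument is simple: dropping from 32-bit floating point to 8-bit fixed point cuts weight storage and memory bandwidth by roughly 4x, and it lets programmable logic pack many more multiply-accumulate operations into the same silicon and power budget.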

 

 

Yoshida also interviewed Karl Freund, Senior Analyst for HPC and Deep Learning at Moor Insights & Strategy, who said:

 

“Artificial Intelligence remains in its infancy, and rapid change is the only constant.” In this circumstance, Xilinx seeks “to ease the programming burden to enable designers to accelerate their applications as they experiment and deploy the best solutions as rapidly as possible in a highly competitive industry.”

 

 

She also quotes Loring Wirbel, a Senior Analyst at The Linley Group, who said:

 

What’s interesting in Xilinx's software offering, [is that] this builds upon the original stack for cloud-based unsupervised inference, Reconfigurable Acceleration Stack, and expands inference capabilities to the network edge and embedded applications. One might say they took a backward approach versus the rest of the industry. But I see machine-learning product developers going a variety of directions in trained and inference subsystems. At this point, there's no right way or wrong way.

 

 

There’s a lot more information in the EETimes article, so you might want to take a look for yourself.

 

 

 

 

Today, EEJournal’s Kevin Morris has published a review article of the announcement titled “Teaching Machines to See: Xilinx Launches reVISION” following Monday’s announcement of the Xilinx reVISION stack for developing vision-guided applications. (See “Xilinx reVISION stack pushes machine learning for vision-guided applications all the way to the edge.”)

 

Morris writes:

 

But vision is one of the most challenging computational problems of our era. High-resolution cameras generate massive amounts of data, and processing that information in real time requires enormous computing power. Even the fastest conventional processors are not up to the task, and some kind of hardware acceleration is mandatory at the edge. Hardware acceleration options are limited, however. GPUs require too much power for most edge applications, and custom ASICs or dedicated ASSPs are horrifically expensive to create and don’t have the flexibility to keep up with changing requirements and algorithms.

 

“That makes hardware acceleration via FPGA fabric just about the only viable option. And it makes SoC devices with embedded FPGA fabric - such as Xilinx Zynq and Altera SoC FPGAs - absolutely the solutions of choice. These devices bring the benefits of single-chip integration, ultra-low latency and high bandwidth between the conventional processors and the FPGA fabric, and low power consumption to the embedded vision space.

 

Later on, Morris gets to the fly in the ointment:

 

“Oh, yeah. There’s still that ‘almost impossible to program’ issue.”

 

And then he gets to the solution:

 

reVISION, announced this week, is a stack - a set of tools, interfaces, and IP - designed to let embedded vision application developers start in their own familiar sandbox (OpenVX for vision acceleration and Caffe for machine learning), smoothly navigate down through algorithm development (OpenCV and NN frameworks such as AlexNet, GoogLeNet, SqueezeNet, SSD, and FCN), targeting Zynq devices without the need to bring in a team of FPGA experts. reVISION takes advantage of Xilinx’s previously-announced SDSoC stack to facilitate the algorithm development part. Xilinx claims enormous gains in productivity for embedded vision development - with customers predicting cuts of as much as 12 months from current schedules for new product and update development.

 

In many systems employing embedded vision, it’s not just the vision that counts. Increasingly, information from the vision system must be processed in concert with information from other types of sensors such as LiDAR, SONAR, RADAR, and others. FPGA-based SoCs are uniquely agile at handling this sensor fusion problem, with the flexibility to adapt to the particular configuration of sensor systems required by each application. This diversity in application requirements is a significant barrier for typical “cost optimization” strategies such as the creation of specialized ASIC and ASSP solutions.

 

The performance rewards for system developers who successfully harness the power of these devices are substantial. Xilinx is touting benchmarks showing their devices delivering an advantage of 6x images/sec/watt in machine learning inference with GoogLeNet @batch = 1, 42x frames/sec/watt in computer vision with OpenCV, and ⅕ the latency on real-time applications with GoogLeNet @batch = 1 versus “NVidia Tegra and typical SoCs.” These kinds of advantages in latency, performance, and particularly in energy-efficiency can easily be make-or-break for many embedded vision applications.

 

 

But don’t take my word for it, read Morris’ article yourself.

 

 

 

 

 

As part of today’s reVISION announcement of a new, comprehensive development stack for embedded-vision applications, Xilinx has produced a 3-minute video showing you just some of the things made possible by this announcement.

 

Here it is:

 

 

Adam Taylor’s MicroZed Chronicles, Part 177: Introducing the reVISION stack

by Xilinx Employee ‎03-13-2017 10:39 AM - edited ‎03-22-2017 07:19 AM (1,272 Views)

 

By Adam Taylor

 

Several times in this series, we have looked at image processing using the Avnet EVK and the ZedBoard. Along with the basics, we have examined object tracking using OpenCV running on the Zynq SoC’s or Zynq UltraScale+ MPSoC’s PS (processing system) and using HLS with its video library to generate image-processing algorithms for the Zynq SoC’s or Zynq UltraScale+ MPSoC’s PL (programmable logic, see blogs 140 to 148 here).

 

Xilinx’s reVISION is an embedded-vision development stack that provides support for a wide range of frameworks and libraries often used for embedded-vision applications. Most exciting, from my point of view, is that the stack includes acceleration-ready OpenCV functions.

 

Image1.jpg 

 

 

The stack itself is split into three layers. Once we select or define our platform, we will be mostly working at the application and algorithm layers. Let’s take a quick look at the layers of the stack:

 

  1. Platform layer: This is the lowest level of the stack and is the one on which the remaining stack layers are built. This layer includes platform definitions of the hardware and the software environment. Should we choose not to use a predefined platform, we can generate a custom platform using Vivado.

 

  2. Algorithm layer: Here we create our application using SDSoC and the platform definition for the target hardware. It is within this layer that we can use the acceleration-ready OpenCV functions along with predefined and optimized implementations for Convolutional Neural Network (CNN) developments such as inference accelerators within the PL.

 

  3. Application development layer: The highest layer of the stack. This is where high-level frameworks such as Caffe and OpenVX are used to complete the application.

 

As I mentioned above, one of the most exciting aspects of the reVISION stack is the ability to accelerate a wide range of OpenCV functions using the Zynq SoC’s or Zynq UltraScale+ MPSoC’s PL. We can group the OpenCV functions that can be hardware-accelerated using the PL into four categories:

 

  1. Computation – Includes functions such as absolute difference between two frames, pixel-wise operations (addition, subtraction and multiplication), gradient, and integral operations
  2. Input Processing – Supports bit-depth conversions, channel operations, histogram equalization, remapping, and resizing.
  3. Filtering – Supports a wide range of filters including Sobel, Custom Convolution, and Gaussian filters.
  4. Other – Provides a wide range of functions including Canny/Fast/Harris edge detection, thresholding, SVM, HoG, LK Optical Flow, Histogram Computation, etc.

 

What is very interesting about these function calls is that we can optimize them for resource usage or performance within the PL. The main optimization method is specifying the number of pixels to be processed during each clock cycle. For most accelerated functions, we can choose to process either one or eight pixels per clock. Processing more pixels per clock cycle increases throughput and reduces latency but increases resource utilization; processing one pixel per clock minimizes the resource requirements at the cost of increased latency. We control the number of pixels processed per clock cycle via the function call.
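To see what that pixels-per-clock choice looks like in code, here’s a hypothetical HLS-style sketch of a streaming function parameterized by pixels per clock (PPC). It is illustrative only: the names and interface are mine, not the actual reVISION library API, and it assumes a Vivado HLS environment for the ap_uint and hls::stream types.

```cpp
#include <ap_int.h>       // Vivado HLS arbitrary-precision integer types
#include <hls_stream.h>   // Vivado HLS streaming class

// Hypothetical sketch (not the reVISION API): a streaming threshold operation
// parameterized by pixels-per-clock (PPC). Each stream word carries PPC 8-bit
// pixels, so PPC = 8 moves eight pixels per cycle at the cost of more logic.
template <int PPC>
void threshold_stream(hls::stream<ap_uint<8 * PPC> >& in,
                      hls::stream<ap_uint<8 * PPC> >& out,
                      int n_pixels, ap_uint<8> level) {
    for (int i = 0; i < n_pixels; i += PPC) {
#pragma HLS PIPELINE II=1
        ap_uint<8 * PPC> word = in.read();
        ap_uint<8 * PPC> result = 0;
        for (int p = 0; p < PPC; ++p) {
#pragma HLS UNROLL
            ap_uint<8> pix = word.range(8 * p + 7, 8 * p);
            result.range(8 * p + 7, 8 * p) = (pix > level) ? 255 : 0;
        }
        out.write(result);
    }
}

// threshold_stream<1>: smallest PL footprint, one pixel per clock.
// threshold_stream<8>: roughly 8x the per-cycle throughput, with
// proportionally more PL resources consumed by the unrolled inner loop.
```

Instantiating the template with PPC = 1 or PPC = 8 mirrors the one-pixel/eight-pixel choice described above; the unrolled inner loop is what consumes the extra PL resources in exchange for throughput.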

 

Over the next few blogs, we will look more at the reVISION stack and how we can use it. However, in the best Blue Peter tradition, the image below shows the result of running a reVISION Harris corner-detection OpenCV function accelerated within the PL.

 

 

Image2.jpg

 

 

Accelerated Harris Corner Detection in the PL

 

 

 

 

Code is available on Github as always.

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E-Book here
  • First Year Hardback here

 

 

MicroZed Chronicles hardcopy.jpg 

 

 

 

  • Second Year E-Book here
  • Second Year Hardback here

 

 

MicroZed Chronicles Second Year.jpg

 

Xilinx reVISION stack pushes machine learning for vision-guided applications all the way to the edge

by Xilinx Employee ‎03-13-2017 07:37 AM - edited ‎03-22-2017 07:19 AM (2,702 Views)

 

Today, Xilinx announced a comprehensive suite of industry-standard resources for developing advanced embedded-vision systems based on machine learning and machine inference. It’s called the reVISION stack and it allows design teams without deep hardware expertise to use a software-defined development flow to combine efficient machine-learning and computer-vision algorithms with Xilinx All Programmable devices to create highly responsive systems. (Details here.)

 

The Xilinx reVISION stack includes a broad range of development resources for platform, algorithm, and application development including support for the most popular neural networks: AlexNet, GoogLeNet, SqueezeNet, SSD, and FCN. Additionally, the stack provides library elements such as pre-defined and optimized implementations for CNN network layers, which are required to build custom neural networks (DNNs and CNNs). The machine-learning elements are complemented by a broad set of acceleration-ready OpenCV functions for computer-vision processing.

 

For application-level development, Xilinx supports industry-standard frameworks including Caffe for machine learning and OpenVX for computer vision. The reVISION stack also includes development platforms from Xilinx and third parties, which support various sensor types.

 

The reVISION development flow starts with a familiar, Eclipse-based development environment; the C, C++, and/or OpenCL programming languages; and associated compilers all incorporated into the Xilinx SDSoC development environment. You can now target reVISION hardware platforms within the SDSoC environment, drawing from a pool of acceleration-ready, computer-vision libraries to quickly build your application. Soon, you’ll also be able to use the Khronos Group’s OpenVX framework as well.

 

For machine learning, you can use popular frameworks including Caffe to train neural networks. Within one Xilinx Zynq SoC or Zynq UltraScale+ MPSoC, you can use Caffe-generated .prototxt files to configure a software scheduler running on one of the device’s ARM processors to drive CNN inference accelerators—pre-optimized for and instantiated in programmable logic. For computer vision and other algorithms, you can profile your code, identify bottlenecks, and then designate specific functions that need to be hardware-accelerated. The Xilinx system-optimizing compiler then creates an accelerated implementation of your code, automatically including the required processor/accelerator interfaces (data movers) and software drivers.
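As a rough illustration of what designating a function for hardware acceleration can look like at the source level, here’s a sketch of a simple colorspace-conversion function annotated with SDSoC data-movement pragmas. The function, its name, and the specific pragma arguments are my own example rather than part of the reVISION libraries, exact pragma usage varies by platform and tool version, and the actual selection of the function for acceleration happens in the SDSoC IDE or on the sds++ command line.

```cpp
#include <stdint.h>

// Example candidate for hardware acceleration in an SDSoC project.
// The SDS pragmas hint to the system-optimizing compiler that both buffers
// are streamed sequentially and how many bytes to move per call, so it can
// generate appropriate data movers between the PS and the PL accelerator.
#pragma SDS data access_pattern(src:SEQUENTIAL, dst:SEQUENTIAL)
#pragma SDS data copy(src[0:n_src], dst[0:n_dst])
void rgb_to_gray(uint8_t* src, uint8_t* dst, int n_src, int n_dst) {
    // n_src = 3 * n_dst: packed RGB in, one grey byte out per pixel.
    for (int i = 0; i < n_dst; ++i) {
        int r = src[3 * i], g = src[3 * i + 1], b = src[3 * i + 2];
        dst[i] = static_cast<uint8_t>((77 * r + 150 * g + 29 * b) >> 8);
    }
}
```

On the software side the function is called exactly as before; the compiler replaces the call with a launch of the PL accelerator plus the generated data movers and drivers described above.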

 

The Xilinx reVISION stack is the latest in an evolutionary line of development tools for creating embedded-vision systems. Xilinx All Programmable devices have long been used to develop such vision-based systems because these devices can interface to any image sensor and connect to any network—which Xilinx calls any-to-any connectivity—and they provide the large amounts of high-performance processing horsepower that vision systems require.

 

Initially, embedded-vision developers used the existing Xilinx Verilog and VHDL tools to develop these systems. Xilinx introduced the SDSoC development environment for HLL-based design two years ago and, since then, SDSoC has dramatically and successfully shortened development cycles for thousands of design teams. Xilinx’s new reVISION stack now enables an even broader set of software and systems engineers to develop intelligent, highly responsive embedded-vision systems faster and more easily using Xilinx All Programmable devices.

 

And what about the performance of the resulting embedded-vision systems? How do their performance metrics compare against systems based on embedded GPUs or the typical SoCs used in these applications? Xilinx-based systems significantly outperform the best of this group, which employ Nvidia devices. Benchmarks of the reVISION flow using Zynq SoC targets against the Nvidia Tegra X1 have shown as much as:

 

  • 6x better images/sec/watt in machine learning
  • 42x higher frames/sec/watt for computer-vision processing
  • 1/5th the latency, which is critical for real-time applications

 

Image1.jpg 

 

There is huge value in having a very rapid and deterministic system-response time and, for many systems, the faster response time of a design that's been accelerated using programmable logic can mean the difference between success and catastrophic failure. For example, the figure below shows the difference in response time between a car’s vision-guided braking system created with the Xilinx reVISION stack running on a Zynq UltraScale+ MPSoC and a similar system based on an Nvidia Tegra device. At 65mph, the Xilinx embedded-vision system’s faster response stops the vehicle 5 to 33 feet sooner, depending on how the Nvidia-based system is implemented. Five to 33 feet could easily mean the difference between a safe stop and a collision.
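The arithmetic behind that 5-to-33-foot spread is nothing more than speed multiplied by the extra response latency. Here’s a quick back-of-the-envelope check; the latency figures are my own illustrative values chosen to bracket the range above, not numbers from the Xilinx backgrounder.

```cpp
#include <cstdio>
#include <initializer_list>

int main() {
    const double mph = 65.0;
    const double ft_per_s = mph * 5280.0 / 3600.0;   // ~95.3 ft/s at 65 mph

    // Extra distance traveled before braking even begins, for a given
    // extra response latency of the vision system.
    for (double extra_latency_s : {0.05, 0.15, 0.35}) {
        std::printf("%.2f s slower response -> ~%.1f ft farther before braking\n",
                    extra_latency_s, ft_per_s * extra_latency_s);
    }
    // Prints roughly 4.8, 14.3, and 33.4 feet: latency differences of a few
    // hundred milliseconds account for the 5-to-33-foot range quoted above.
    return 0;
}
```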

 

 

Image2.jpg 

 

(Note: This example appears in the new Xilinx reVISION backgrounder.)

 

 

The last two years have generated more machine-learning technology than all of the advancements over the previous 45 years and that pace isn't slowing down. Many new types of neural networks for vision-guided systems have emerged along with new techniques that make deployment of these neural networks much more efficient. No matter what you develop today or implement tomorrow, the hardware and I/O reconfigurability and software programmability of Xilinx All Programmable devices can “future-proof” your designs whether it’s to permit the implementation of new algorithms in existing hardware; to interface to new, improved sensing technology; or to add an all-new sensor type (like LIDAR or Time-of-Flight sensors, for example) to improve a vision-based system’s safety and reliability through advanced sensor fusion.

 

Xilinx is pushing even further into vision-guided, machine-learning applications with the new Xilinx reVISION Stack and this announcement complements the recently announced Reconfigurable Acceleration Stack for cloud-based systems. (See “Xilinx Reconfigurable Acceleration Stack speeds programming of machine learning, data analytics, video-streaming apps.”) Together, these new development resources significantly broaden your ability to deploy machine-learning applications using Xilinx technology—from inside the cloud to the very edge.

 

 

You might also want to read “Xilinx AI Engine Steers New Course” by Junko Yoshida on the EETimes.com site.

 

 

 

It’s amazing what you can do with a few low-cost video cameras and FPGA-based, high-speed video processing. One example: the Virtual Flying Camera that Xylon has implemented with just four video cameras and a Xilinx Zynq-7000 SoC. This setup gives the driver a flying, 360-degree view of a car and its surroundings. It’s also known as a bird’s-eye view, but in this case the bird can fly around the car.

 

Many implementations of this sort of video technology use GPUs for the video processing, but Xylon instead uses the Zynq SoC’s programmable logic with custom hardware designed using Xylon logicBRICKS IP cores. The custom hardware implemented in the Zynq SoC’s programmable logic enables very fast execution of complex video operations including camera lens-distortion correction, video frame grabbing, video rotation, perspective changes, and the seamless stitching of four processed video streams into a single display output, and all of this occurs in real time. This design approach assures the lowest possible video-processing delay at significantly lower power consumption than GPU-based implementations.
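For readers who want a feel for what those per-camera operations involve, here’s a minimal CPU-side OpenCV sketch of two of the stages: lens-distortion correction followed by a top-down perspective warp. It only illustrates the math; Xylon’s design runs these stages in the Zynq SoC’s programmable logic, and the camera matrix, distortion coefficients, and homography shown here as parameters would come from per-camera calibration.

```cpp
#include <opencv2/opencv.hpp>

// CPU illustration of two per-camera stages of a surround-view pipeline:
// lens-distortion correction followed by a top-down ("bird's-eye") warp.
// Real systems supply per-camera calibration data for K, dist, and H.
cv::Mat birds_eye_view(const cv::Mat& frame,
                       const cv::Mat& K,        // 3x3 camera matrix
                       const cv::Mat& dist,     // lens-distortion coefficients
                       const cv::Mat& H,        // 3x3 ground-plane homography
                       const cv::Size& out_size) {
    cv::Mat undistorted, top_down;
    cv::undistort(frame, undistorted, K, dist);               // lens correction
    cv::warpPerspective(undistorted, top_down, H, out_size);  // perspective change
    return top_down;  // four such views are then blended into one stitched image
}
```

Doing this for four cameras on every frame, plus blending, is exactly the kind of regular, highly parallel pixel work that maps well onto FPGA fabric and poorly onto a power-constrained CPU or GPU.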

 

A Xylon logi3D Scalable 3D Graphics Controller soft-IP core—also implemented in the Zynq SoC’s programmable logic—renders a 3D vehicle and the surrounding view on the driver’s information display. The Xylon Surround View system permits real-time 3D image generation even in programmable SoCs without an on-chip GPU, as long as there’s programmable logic available to implement the graphics controller. The current version of the Xylon ADAS Surround View Virtual Flying Camera system runs on the Xylon logiADAK Automotive Driver Assistance Kit that is based on the Xilinx Zynq-7000 All Programmable SoC.

 

Here’s a 2-minute video of the Xylon Surround View system in action:

 

 

 

 

If you’re attending the CAR-ELE JAPAN show in Tokyo next week, you can see the Xylon Surround View system operating live in the Xilinx booth.

 

 

 

Next week, the Xilinx booth at the CAR-ELE JAPAN show at Tokyo Big Sight will hold a variety of ADAS (Advanced Driver Assistance Systems) demos based on Xilinx Zynq SoC and Zynq UltraScale+ MPSoC devices from several companies including:

 

 

  • A camera-based driver monitoring system by Fovio, a pioneer in the emerging market segment of Driver Monitoring Systems
  • A multi-camera system with Ethernet-based Audio Video Bridging (AVB) by Regulus, NEC Communication Systems, and Linear Technology
  • An advanced camera-and-display E-Mirror system by Toyota Tsusho Electronics Corporation
  • A high-end surround-view system employing sensor fusion by Xylon
  • A deep-learning system based on a CNN (Convolutional Neural Network) running on a Zynq UltraScale+ MPSoC

 

 

The Zynq UltraScale+ MPSoC and the original Zynq SoC offer a unique mix of 32- and 64-bit ARM processors along with the heavy-duty processing you get from programmable logic, which is needed to process and manipulate video and to fuse data from sensors such as video and still cameras, radar, lidar, and sonar into maps of the local environment.

 

If you are developing any sort of sensor-based electronic systems for future automotive products, you might want to come by the Xilinx booth (E35-38) to see what’s already been explored. We’re ready to help you get a jump on your design.

 

 

 

Aldec has posted a new 4-minute video with a demonstration of its TySOM-2 Embedded Development Kit generating a 360° view from four Blue Eagle DC3K-1-LVD video cameras plugged into an FMC-ADAS card that is in turn plugged into the TySOM-2 board. The TySOM-2 board is based on a Xilinx Zynq Z-7045 SoC. The demo uses Aldec’s Multi-Camera Surround View technology.

 

 

Aldec TySOM-2 Embedded Development Kit.jpg

 

 

The four video-camera feeds appear choppy in the demo until the FPGA-based acceleration is turned on. At that point, the four video feeds appear on screen in real time with corner-detection annotation added at the full frame rate, thanks to the FPGA-based video processing.
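If you’re curious what the corner-detection annotation amounts to algorithmically, here’s a plain software OpenCV equivalent of the per-frame step: Harris-based corner detection with the detected corners drawn onto the frame. In the Aldec demo the equivalent processing runs in the Zynq SoC’s programmable logic, which is how it keeps up with four camera streams; the parameter values below are just reasonable defaults, not Aldec’s.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Software reference for per-frame corner annotation: find strong corners
// using the Harris response and draw them onto the color frame.
void annotate_corners(cv::Mat& frame_bgr) {
    cv::Mat gray;
    cv::cvtColor(frame_bgr, gray, cv::COLOR_BGR2GRAY);

    std::vector<cv::Point2f> corners;
    cv::goodFeaturesToTrack(gray, corners,
                            200,             // max corners per frame
                            0.01,            // quality level
                            10,              // min distance between corners (px)
                            cv::noArray(),   // no mask
                            3,               // block size
                            true,            // use the Harris detector
                            0.04);           // Harris k
    for (const auto& c : corners)
        cv::circle(frame_bgr, c, 3, cv::Scalar(0, 255, 0), -1);
}
```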

 

Here’s Aldec’s new video:

 

 

 

 

 

 

 

This week, National Instruments (NI) announced a technology demonstration of a test system for 76-81GHz automotive radar, targeting ADAS (Advanced Driver Assistance Systems) applications. The system is based on the company’s mmWave front-end technology and its PXIe-5840 2nd-generation vector signal transceiver (VST), introduced earlier this year, which combines a 6.5GHz RF vector signal generator and a 6.5GHz vector signal analyzer in a 2-slot PXIe module. (See “NI launches 2nd-Gen 6.5GHz Vector Signal Transceiver with 5x the instantaneous bandwidth, FPGA programmability.”) The ADAS Test Solution combines NI’s banded, frequency-specific upconverters and downconverters for the 76–81GHz radar band with the 2nd-generation VST’s 1GHz of real-time bandwidth.

 

The PXIe-5840 VST gets its real-time signal-analysis capabilities from a Xilinx Virtex-7 690T FPGA.

 

 

NI PXIe-5840 2nd-generation VST.jpg

 

 

National Instruments PXIe-5840 2nd-generation vector signal transceiver (VST)

 

 

Sparkfun Autonomous Vehicle Competition (AVC): Xilinx spends a day at the races

by Xilinx Employee ‎09-17-2016 06:48 PM - edited ‎09-18-2016 08:08 AM (3,631 Views)

 

Xilinx had a table in Maker’s Alley at the 8th Annual Sparkfun Autonomous Vehicle Competition (AVC), held today in Niwot, Colorado near Boulder. AEs and software engineers from the nearby Xilinx Longmont facility staffed the table along with Aaron Behman and myself. We answered many questions and demonstrated an optical-flow algorithm running on a Zynq-based ZC706 Eval Kit. The demo accepted HDMI video from a camcorder, converted the live HD video stream to greyscale, extracted motion information on a frame-by-frame basis, and displayed the motion on a video monitor using color-coding to express the direction and magnitude of the motion, all in real time. We also gave out 50 Xilinx SDSoC licenses and awarded five Zynq-based ZYBO kits to lucky winners. Digilent supplied the kits. (See “About those Zynq-based Zybo boards we're giving away at Sparkfun’s Autonomous Vehicle Competition: They’re kits now!”)
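The color-coded motion display in that demo is the classic dense-optical-flow visualization: hue encodes motion direction and brightness encodes motion magnitude. Here’s roughly what the per-frame-pair computation looks like in plain OpenCV on a CPU; the algorithm and parameters below are a generic stand-in rather than the demo’s actual implementation, which does the equivalent work in the Zynq SoC’s programmable logic to sustain real-time HD rates.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Dense optical flow between two consecutive greyscale frames, visualized
// with hue = motion direction and brightness = motion magnitude.
cv::Mat flow_to_color(const cv::Mat& prev_gray, const cv::Mat& next_gray) {
    cv::Mat flow;
    cv::calcOpticalFlowFarneback(prev_gray, next_gray, flow,
                                 0.5, 3, 15, 3, 5, 1.2, 0);

    std::vector<cv::Mat> xy(2);
    cv::split(flow, xy);                              // per-pixel (dx, dy)
    cv::Mat mag, ang;
    cv::cartToPolar(xy[0], xy[1], mag, ang, true);    // angle in degrees

    cv::normalize(mag, mag, 0, 255, cv::NORM_MINMAX); // magnitude -> brightness
    std::vector<cv::Mat> hsv = {
        ang * 0.5,                                    // hue: 0..180 for 8-bit HSV
        cv::Mat(ang.size(), CV_32F, 255.0f),          // full saturation
        mag                                           // value: motion magnitude
    };
    cv::Mat hsv_img, bgr;
    cv::merge(hsv, hsv_img);
    hsv_img.convertTo(hsv_img, CV_8UC3);
    cv::cvtColor(hsv_img, bgr, cv::COLOR_HSV2BGR);
    return bgr;
}
```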

 

 

Xilinx at Sparkfun AVC 2016.jpg 

 

The Xilinx table in Maker’s Alley at Sparkfun AVC 2016

 

 

In case you are not familiar with the Sparkfun AVC, it’s an autonomous vehicle competition, and this year there were two classes of autonomous vehicle: Classic and Power Racing. The Classic class vehicle was about the size of an R/C car and raced on an appropriately sized track with hazards including the Discombobulator (a gasoline-powered turntable), a ball pit, hairpin turns, and an optional dirt-track shortcut. The Power Racing class is based on kids’ Power Wheels vehicles, which are sized to be driven by young children but in this race were required to carry adults. There were races for both autonomous and human-driven Power Racers.

 

Here’s a video of one of the Sparkfun AVC Classic races getting off to a particularly rocky start:

 

 

 

 

 

Here’s a short video of an Autonomous Power Racing race, getting off to an equally disastrous start:

 

 

 

 

And here’s a long video of an entire, 30-lap, human-driven Power Racing race:

 

 

 

 

 

Sparkfun AVC 2016 Program.jpg

 

 

On Saturday, September 17, you’ll be able to get one of 50 free license vouchers for the Xilinx SDSoC Development Environment, which we’re pre-loading along with Vivado HL on a USB drive so you won’t even need to download the software. (Worth $995!)

 

Oh yes, we're also giving away five Digilent Zybo Trainer Boards based on the Xilinx Zynq-7000 All Programmable SoC worth $189 each.

 

Where and how?

 

At the Xilinx Tent in Maker Alley, part of Sparkfun’s 8th annual Autonomous Vehicle Competition (AVC) in Niwot, Colorado. (That’s between Boulder and Longmont if you don’t know about Google Maps.)

 

There’s one tiny catch. You need an admission ticket to get in.

 

How much? Early bird AVC tickets are on sale here for $6. Admission at the door on the day of the AVC is $8. That’s a tiny, tiny price for a full day of entertainment watching autonomous vehicles race against time while fighting robots maul or burn each other to a cinder.

 

 

Sparkfun AVC Fire tank.jpg

 

 

However, there’s a way to knock another buck off the already low, low early-bird admission price: use the secret discount code SFEFRIENDS.

 

See you in Niwot. Wear your asbestos underpants.

 

For more information about the Sparkfun AVC and the Xilinx SDSoC giveaway, see:

 

 

 

 

 

 

 

Xilinx will be at the Sparkfun Autonomous Vehicle Competition, September 17. Will you? Let us know!

by Xilinx Employee ‎08-17-2016 12:20 PM - edited ‎08-17-2016 12:30 PM (4,496 Views)

 

Xilinx will be attending this year’s Sparkfun AVC (Autonomous Vehicle Competition) in Colorado on September 17. Haven’t heard about the Sparkfun AVC? Incredibly, this is its eighth year and there are four different competitions this year:

 

  • Classic—A land-based speed course for autonomous vehicles weighing less than 25 pounds. Beware the discombobulator and avoid the ball pit of despair!
  • PRS—Power racing series based on the battery-powered kiddie ride-‘em toys
  • A+PRS—The autonomous version of the PRS competition
  • Robot Combat Arena—Ant-weight and beetle-weight robots fight to the death. Note: Fire is allowed. "We like fire."

 

Sparkfun’s AVC is taking place in the Sparkfun parking lot. Sparkfun is located in beautiful Niwot, Colorado. Where’s that? On the Diagonal halfway between Boulder and Longmont, of course.

 

Haven’t heard of Sparkfun? They’re an online electronics retailer at the epicenter of the maker movement. Sparkfun’s Web site is chock full of tutorials and just-plain-weird videos for all experience levels from beginner to engineer. I’m a regular viewer of the company’s Friday new-product videos. Also a long-time customer.

 

Xilinx will be exhibiting an embedded-vision demo in the Maker’s Alley tent at the AVC this year because Xilinx All Programmable devices like the Zynq-7000 SoC and Zynq UltraScale+ MPSoC give you a real competitive advantage when developing a quick, responsive autonomous vehicle.

 

If you are entering this year’s AVC and are using Xilinx All Programmable devices in your vehicle, please let me know in the comments below or come to see us in the tent at the event. We want to help make you famous for your effort!

 

Here’s an AVC video from Sparkfun to give you a preview of the AVC:

 

 

 

Here's the PRS video:

 

 

And here's the Robot Combat video:

 

 

Xylon has introduced logiADAK 3.2, the latest version of the company’s ADAS toolset for the Xilinx Zynq-7000 SoC. This new release adds driver-drowsiness detection based on facial movements monitored through a camera placed in the vehicle cabin, along with significantly expanded and improved forward-camera collision-avoidance capabilities based on detection and recognition of vehicles, pedestrians, and bikes. The current logiADAK kit includes around ten different ADAS applications, ranging from design frameworks to complete, production-ready solutions that help you create highly differentiated driver-assistance applications.

 

 

If you’re not yet familiar with Xylon’s logiADAK toolkit, here’s Xilinx’s Aaron Behman with a quick, 90-second video demo shot at the recent Embedded Vision Summit:

 

 

 

 

 

Megatrends and the Xilinx Corporate Transformation—The Corporate Video

by Xilinx Employee ‎04-07-2016 12:20 PM - edited ‎04-07-2016 03:39 PM (7,455 Views)

 

If you look at what’s happening with Moore’s Law (just read any article about the topic during the last two years), you see that systems design is being forced to make use of All Programmable devices at an increasing rate because of the enormous NRE costs associated with roll-your-own ASICs at 16nm, 10nm, and below. Companies still need the differentiation afforded by custom hardware to boost product margins in their competitive, global marketplaces, but they need to get it in a different way.

 

Nowhere is that more true than in the six Megatrends that Xilinx has identified:

 

 

 

These Megatrends drive the future of the electronics industry—and they drive Xilinx’s future as well. Xilinx has made a slick, 4-minute video discussing these trends:

 

 

 

 

 

 

 

 

Truthfully, I didn’t write that headline. It’s the title of yesterday’s Frost & Sullivan press release awarding Xilinx the 2016 North American Frost & Sullivan Award for Product Leadership, based on the consulting firm’s recent analysis of the automotive programmable logic devices market for advanced driver assistance systems (ADAS). The press release continues: “Xilinx is uniquely positioned to cater to current and future market needs.”

 

To date, you’ve seen very little in the Xcell Daily blog about Xilinx and ADAS systems, not because Xilinx isn’t working closely with automotive Tier 1 suppliers and OEMs on ADAS systems but because those companies really have not wanted any publicity about that highly competitive work and so I could not write about the many, many design wins. In reality, more than 20 of these automotive suppliers and OEMs have been working with Xilinx on ADAS designs over the last few years.

 

The subhead of the Frost & Sullivan press release captures the reality of this effort:

 

Superior product value has made Xilinx’s devices the preferred choice for current and evolving ADAS modules among global OEMs.

 

And, since I’m already quoting from this Frost & Sullivan press release, let me add this quote:

 

“The company has strong technical capabilities and a successful track record in multiple sensor applications that include radar, light detection and ranging (LIDAR), and camera systems, all of which give it an edge over competing system on chip (SoC) suppliers,” said Frost & Sullivan Industry Analyst Arunprasad Nandakumar. “Xilinx’s Zynq UltraScale+ multiprocessor SoC (MPSoC) scores high on scalability, modularity, reliability, and quality.”

 

And this:

 

“Xilinx adheres to self-defined standards that exceed industry requirements. Its FPGAs and PLDs are far ahead of the baseline defined by AEC-Q100, which is the standard stress test qualification requirement for electronic components used in automotive applications. In fact, Xilinx has introduced its own Beyond AEC-Q100 testing that characterizes its robust XA family of products.”

 

And this final quote sums it up:

 

“In recognition of its strong product portfolio, which is aligned perfectly with the vision of automated driving, Xilinx receives the 2016 North American Frost & Sullivan Product Leadership Award. Each year, this award is presented to the company that has developed a product with innovative features and functionality, gaining rapid acceptance in the market. The award recognizes the quality of the solution and the customer value enhancements it enables.

 

“Frost & Sullivan’s Best Practices Awards recognize companies in a variety of regional and global markets for outstanding achievement in areas such as leadership, technological innovation, customer service, and product development. Industry analysts compare market participants and measure performance through in-depth interviews, analysis, and extensive secondary research.”

 

 

Would you like to see the results of those in-depth interviews, analysis, and extensive secondary research? Thought you might.

 

There’s a companion 12-page Frost & Sullivan research paper attached to this blog. Just click below.

 

 

I’ve written previously about Apertus, the Belgian company behind the AXIOM open-source 4K cinema camera effort. (See below.) I met with two of the Apertus principals, Sebastian Pichelhofer and Herbert Pötzl, at last month’s Embedded World 2016 in Nuremberg. They carry the coolest business cards I’ve seen in a long, long time:

 

 

Apertus Business Cards.jpg

 

 

Pichelhofer and Pötzl were making the rounds at the Embedded World show to talk about their 3rd-generation AXIOM camera, the Gamma. This is the big, modular, pro-level 4K cinema camera that leverages the knowledge gained in the design of the AXIOM Alpha and Beta cameras. Like the earlier cameras, the AXIOM Gamma is based on a CMOSIS imager and a Xilinx Zynq-7000 SoC (a Z-7030). The AXIOM Beta is based on an Avnet MicroZed SOM with a Zynq Z-7020 SoC.

 

Here’s a closeup photo of the AXIOM Beta’s Image Sensor Module:

 

 

 

AXIOM Gamma Imaging Module.jpg

 

 

AXIOM Beta 4K Cinema Camera Image Sensor Module

 

 

And here’s a photo of the back of the AXIOM Beta Image Sensor Module showing the Zynq-based Avnet MicroZed board that’s currently being used:

 

 

 

AXIOM Gamma Imaging Module Back with MicroZed.jpg

 

 

Back Side of the AXIOM Beta 4K Cinema Camera Image Sensor Module showing the Avnet MicroZed SOM

 

 

 

The AXIOM Beta is currently operational and the gents from Apertus directed me to the Antmicro booth at the show to see a working model. Here’s a photo from the Antmicro booth:

 

 

 

AXIOM Beta in Antmicro Booth.jpg

 

 

A working AXIOM Beta 4K camera in the Antmicro booth

 

 

 

Antmicro, located in Poland, is a partner working with Apertus on the AXIOM camera. Although I didn’t see it at Embedded World, here’s a photo of the AXIOM Gamma Image Sensor Module prototype from the Antmicro Web site:

 

 

 

Antmicro AXIOM Gamma Image Sensor Module Prototype.jpg

 

 

AXIOM Gamma 4K Cinema Camera Image Sensor Module

 

 

 

While at the Antmicro booth, I met team leader Karol Gugala, who impressed me with his knowledge of the Zynq-7000 SoC. He’s already developed several Zynq-based projects including a distance-measuring system for an autonomous mining vehicle based on stereo video imagers. Here’s a photo of that project taken at the Antmicro booth:

 

 

Antmicro Zynq-based Stereo Distance Measuring Board.jpg

 

 

Antmicro Zynq-based Stereo Distance Measuring Board

 

 

Although we spoke for only 10 minutes or so, I was really impressed with Gugala’s knowledge and his considerable experience with the Zynq-7000 SoC. I immediately dubbed him “King of Zynq,” in my mind at least. Antmicro is currently working with Apertus on the AXIOM Gamma design and I can hardly wait to see what this international team produces.

 

Earlier Xcell Daily blog posts about the AXIOM 4K cinema cameras:

 

 

 

 

 

 

Xylon’s logiADAK Automotive Driver Assistance Kit and logiRECORDER Multi-Channel Video Recording ADAS Kit provide you with a number of the essential building blocks needed to develop your own vision-based ADAS (advanced driver assistance system) designs based on the Xilinx Zynq SoC for a wide range of vehicles. The logiADAK kit comes with a full set of driver-assistance demo applications, customizable reference SoC designs, software drivers, libraries, and documentation. The logiRECORDER kit includes the hardware and software necessary for synchronous video recording of up to six uncompressed video streams from Xylon video cameras.

 

Xylon has just published a short video showing these kits in action:

 

 

 

 

Baby you can drive my car: Zynq runs a 5-camera ADAS demo at CAR-ELE in Japan

by Xilinx Employee ‎01-13-2016 08:59 AM - edited ‎01-13-2016 09:00 AM (12,429 Views)

 

The CAR-ELE show for automotive OEMs and Tier 1 suppliers kicked off at Tokyo Big Sight in Japan today and I received this image of an RC car equipped with five video cameras and a Zynq SoC from Naohiro Jinbo at the Xilinx booth:

 

 

CAR-ELE 5-camera Zynq demo.jpg 

 

The image shows a transparent-bodied RC car equipped with the five video cameras facing off against four pedestrians and two other vehicles towards the bottom of the image. You can also see two screen pairs at the top of the booth. The left screen in the rightmost screen pair shows a bird’s-eye view around the RC car. That image is a real-time fusion of the five video streams from the cameras on the RC car. The other screen in the rightmost pair shows real-time object detection in action. Pedestrians are highlighted in bounding boxes. Both screens are generated live by the car’s on-board Zynq SoC and both of these demos rely on the programmable logic in the Zynq SoC to perform the heavy lifting required by the real-time video processing.
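Xylon hasn’t published the internals of logiADAK’s classifiers, but the pedestrian bounding boxes in that demo correspond to a detection-and-classification task you can reproduce in software with OpenCV’s stock HOG-plus-SVM people detector. Here’s a minimal sketch, purely as a point of reference; a CPU implementation like this runs far below real time at these resolutions, which is exactly the gap the Zynq SoC’s programmable logic closes.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// CPU reference for pedestrian detection with bounding boxes, using OpenCV's
// built-in HOG descriptor and pre-trained people-detection SVM.
void draw_pedestrians(cv::Mat& frame_bgr) {
    static cv::HOGDescriptor hog;
    static bool initialized = false;
    if (!initialized) {
        hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());
        initialized = true;
    }

    std::vector<cv::Rect> people;
    hog.detectMultiScale(frame_bgr, people,
                         0.0,                 // SVM hit threshold
                         cv::Size(8, 8),      // window stride
                         cv::Size(32, 32),    // padding
                         1.05,                // scale step
                         2.0);                // grouping threshold
    for (const auto& r : people)
        cv::rectangle(frame_bgr, r, cv::Scalar(0, 0, 255), 2);
}
```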

 

This 5-Camera ADAS Development Platform demo is being presented by Xylon, eVS (embedded Vision Systems), and DDC (Digital Design Corp). The demo is based on Xylon’s logiADAK Driver Assistance Kit version 3.1, which extends the functionality of the company’s logiADAK platform to include efficient multi-object classification, encompassing vehicle and cyclist detection in addition to pedestrian detection.

 

 

Tokyo Big Sight.jpg

 

 

"Tokyo Big Sight at Night" by Masato Ohta from Tokyo, Japan. - Flickr. Licensed under CC BY 2.0 via Commons

 

 

If news of last week’s ADAS-fest at CES in Las Vegas has piqued your interest in self-driving and assisted-driving technology, you can get up close and personal with that technology by attending this week’s CAR-ELE in Tokyo. Xilinx and its partners will be demonstrating several operational ADAS technologies based on the new Xilinx Zynq UltraScale+ MPSoC and the battle-tested Zynq-7000 SoC in the Xilinx booth (W8-54).

 

Among the demos: a 5-Camera ADAS Development Platform presented by Xylon, eVS (embedded Vision Systems), and DDC (Digital Design Corp). The 5-camera demo is based on Xylon’s logiADAK Driver Assistance Kit version 3.1 for the Xilinx Zynq-7000 SoC. Xylon’s logiADAK 3.1 extends the functionality of the company’s logiADAK platform to include efficient multi-object classification, encompassing vehicle and cyclist detection in addition to pedestrian detection. The logiADAK kit includes everything you need to install a system on your own vehicle, including five sealed megapixel cameras.

 

In the Xilinx booth at the CAR-ELE show, you’ll see a logiADAK 3.1 platform mounted on a remote-control car that you can drive in a “parking lot” installed in the booth.

 

logiADAK 5-camera Demo .jpg

 

 

In the race to develop self-driving cars, ADAS (Advanced Driver Assistance Systems) designs need to account for the human driver’s condition for situations when the human might ask, or be required, to take over the driving. Xylon has just introduced a new ADAS IP core designed to detect drowsiness and distraction from the facial movements of drivers. The logiDROWSINE Driver Drowsiness Detector IP can be integrated into the Xilinx Zynq SoC to monitor facial movements as imaged by a video camera in the vehicle’s cabin. The logiDROWSINE IP core monitors the driver’s eyes, gaze, eyebrows, lips, and head, and it continuously tracks facial features that can indicate microsleep. It also looks for yawns and other indications of sleepiness. In all, the logiDROWSINE IP core recognizes seven levels of drowsiness. When the IP determines that the driver appears drowsy, it alerts the associated ADAS system so that proper steps can be taken. Such steps might include an audible alert or a vibrating seat.
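Xylon doesn’t disclose how logiDROWSINE scores drowsiness, but one widely used software measure of eye closure and microsleep is the eye aspect ratio (EAR) computed from a handful of eye landmarks. The sketch below illustrates that generic technique only and makes no claim about Xylon’s implementation; the landmark ordering and the 0.2 threshold are conventional values from the EAR literature.

```cpp
#include <opencv2/core.hpp>
#include <array>
#include <cmath>

// Illustrative only: eye aspect ratio (EAR) over six eye landmarks p[0]..p[5]
// ordered around the eye. EAR falls toward zero as the eye closes; a
// sustained low EAR across many frames is a common software cue for
// microsleep. This is a generic technique, not Xylon's logiDROWSINE method.
static double dist(const cv::Point2f& a, const cv::Point2f& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

double eye_aspect_ratio(const std::array<cv::Point2f, 6>& p) {
    const double v1 = dist(p[1], p[5]);   // upper/lower lid, first vertical pair
    const double v2 = dist(p[2], p[4]);   // upper/lower lid, second vertical pair
    const double h  = dist(p[0], p[3]);   // horizontal eye width
    return (v1 + v2) / (2.0 * h);
}
// For example, flag a potential microsleep when EAR stays below ~0.2
// for a few dozen consecutive frames.
```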

 

The logiDROWSINE IP is split between the Zynq SoC’s programmable hardware and software that runs on one of the Zynq SoC’s two ARM Cortex-A9 MPCore processors. The complete driver-drowsiness SoC design includes the logiDROWSINE IP core, the logiFDT face-detection and -tracking IP core, and other IP cores. All of this fits into the smallest Xilinx Zynq SoC—the Z-7010. It is prepackaged for the Xilinx Vivado Design Suite and IP deliverables include the software driver, documentation, and technical support.

 

Here’s a short video demo of the logiDROWSINE IP core in action:

 

 

 

Xilinx is a Thomson Reuters Top 100 Global Innovator—again—and innovation helps drive the Megatrends

by Xilinx Employee ‎11-13-2015 11:32 AM - edited ‎01-06-2016 10:58 AM (13,063 Views)

Thomson Reuters 2015 Top 100 Global Innovators.jpg 

 

 

There are a lot of awards in our industry and I do not normally blog about them. However, I do make exceptions and the annual Thomson Reuters Top 100 Global Innovators award is one of those exceptions. For the fourth year in a row, Thomson Reuters has named Xilinx in its Top 100 Global Innovators report. Xilinx innovations are directly aimed at helping customers integrate the highest levels of software-based intelligence with hardware optimization and any-to-any connectivity in all applications including those associated with six key Megatrends (5G Wireless, SDN/NFV, Video/Vision, ADAS, Industrial IoT, and Cloud Computing) shaping the world’s industries today.

 

According to SVP David Brown, Thomson Reuters uses a scientific approach to analyzing metrics including patent volume, application-to-grant success, globalization and citation influence. Consequently, this award is based on objective criteria and is not a popularity contest, which is why I consider it bloggable. That, and Xilinx’s presence on the Top 100 list this year, and in 2012, 2013, and 2014. (Note: The top 100 innovators are not ranked. You’re either on the list—or you’re not. Xilinx is.)

 

Brown writes in a preface to the report:

 

“…we’ve developed an objective formula that identifies the companies around the world that are discovering new inventions, protecting them from infringers and commercializing them. This is what we call the ‘Lifecycle of Innovation’: discovery, protection and commercialization. Our philosophy is that a great idea absent patent protection and commercialization is nothing more than a great idea.”

 

He continues:

 

“…for five consecutive years the Thomson Reuters Top 100 companies have consistently outperformed other indices in terms of revenue and R&D spend. This year, our Top 100 innovators outperform the MSCI World Index in revenue by 6.01 percentage points and in employment by 4.09 percentage points. We also outperform the MSCI World Index in market-cap-weighted R&D spend by 1.86 percentage points. The conclusion: investment in R&D and innovation results in higher revenue and company success.”

 

Here’s a video showing Thomson Reuters Senior IP Analyst Bob Stembridge describing the methodology for determining the world’s most innovative companies for this report:

 

 

 

 

 

For more information about this fascinating study and report, use the link above and download the report PDF.

 

Shaping the Future of System Design: FPGAs, Evolution, Megatrends, and Quality

by Xilinx Employee ‎11-10-2015 01:00 PM - edited ‎11-10-2015 01:04 PM (9,066 Views)

 

FPGA usage has evolved from its early use as glue logic, as reflected in the six Megatrends now making significant use of Xilinx All Programmable devices: 5G Wireless, SDN/NFV, Video/Vision, ADAS, Industrial IoT, and Cloud Computing. Today, you’re just as likely to use one Xilinx All Programmable device to implement a single-chip system because that’s the fastest way to get from concept to working, production systems. Consequently, system-level testing of Xilinx devices has similarly evolved to track these more advanced uses for the company’s products.

 

If you’d like more information about this new level of testing, a good place to look is page 11 of the just-published 2015 Annual Quality Report from Xilinx. (You just might want to take a look at all of the report’s pages while you’re at it.)

 

 

Normally, I would never steer you towards a press-announcement video but I’ve got one that you’re going to want to watch. At the end of this blog you’ll find a 38-minute video of last week’s press announcement, made in conjunction with newly announced partners Xilinx and Mellanox, unveiling Qualcomm’s 64-bit, ARM-based, many-core Server SoC. (See last week’s “Qualcomm and Xilinx Collaborate to Deliver Industry-Leading Heterogeneous Computing Solutions for Data Centers” for details.) The video includes a demo of Qualcomm’s working Server Development Platform.

 

Read more...

Autonomous hex-copter from ETH Zurich relies on Zynq SoC to build 3D maps and avoid obstacles in real time

by Xilinx Employee ‎09-03-2015 01:37 PM - edited ‎01-06-2016 12:55 PM (16,588 Views)

Six researchers at ETH Zurich have developed a 1kg, autonomous hex-copter they’ve named the AscTec Firefly that uses four stereo-pair cameras to create 3D disparity maps of its surroundings to sense and avoid obstacles in real time. That’s a very useful skill for an autonomous vehicle designed to navigate around people or through a forest, for example. Rather than rely on ultrasound ranging systems or time-of-flight imagers, the AscTec Firefly relies on four stereo camera pairs equipped with ultra-wide-angle lenses. The stereo vision permits the creation of a 3D map of the copter’s surroundings.
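For reference, computing a disparity map from one rectified stereo pair is a standard computer-vision operation; the trick in the AscTec Firefly is doing it for four ultra-wide-angle pairs simultaneously in real time on a 1kg airframe, which is where the Zynq SoC’s programmable logic earns its keep. Here’s a minimal CPU sketch of the single-pair computation using generic OpenCV block matching, not the ETH Zurich team’s actual pipeline.

```cpp
#include <opencv2/opencv.hpp>

// Block-matching disparity from a rectified stereo pair (8-bit greyscale).
// Nearer obstacles produce larger disparities, so thresholding the map gives
// a simple "something is too close in this direction" obstacle cue.
cv::Mat disparity_map(const cv::Mat& left_gray, const cv::Mat& right_gray) {
    auto matcher = cv::StereoBM::create(64,    // number of disparities (multiple of 16)
                                        21);   // matching block size (odd)
    cv::Mat disp16, disp;
    matcher->compute(left_gray, right_gray, disp16);   // fixed-point, scaled by 16
    disp16.convertTo(disp, CV_32F, 1.0 / 16.0);        // disparity in pixels
    return disp;
}
```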

 

 

AscTec Firefly Hex Copter.jpg

 

 

AscTec Firefly Autonomous Hex-Copter

 

 

 

 

Read more...

De Facto Standard Platforms for ADAS and Beyond

by Xilinx Employee on ‎07-29-2015 10:25 AM (13,434 Views)

 

By Mike Santarini, Publisher, Xcell Journal, Xilinx

 

Xilinx has a rich history in the automotive market, but over the last four years and with the commercial launch of the Zynq-7000 All Programmable SoC in 2011, the company has quickly become the platform provider of choice in the burgeoning market for advanced driver assistance systems (ADAS). Companies such as Mercedes-Benz, BMW, Nissan, VW, Honda, Ford, Chrysler, Toyota, Mazda, Acura, Subaru and Audi are among the many OEMs that have placed Xilinx FPGAs and Zynq SoCs at the heart of their most advanced ADAS systems. And with the new Zynq UltraScale+ MPSoCs, Xilinx is sure to play a leadership role in the next phases of automotive electronic innovation: autonomous driving, vehicle-to-vehicle communication and vehicle-to-infrastructure communication.

 

 

Read more...

The Coming Revolution in Vehicle Technology and its BIG Implications

by Xilinx Employee ‎07-27-2015 04:27 PM - edited ‎01-06-2016 01:40 PM (14,374 Views)

 

By Thomas Gage and Jonathan Morris, Marconi Pacific

 

ADAS makes safety and marketing sense. Whether it is Daimler, Toyota, Ford, Nissan, GM, another vehicle OEM, or even Google, none are going to put vehicles on the road that can steer, brake, or accelerate autonomously without having confidence that the technology will work. ADAS promises first to reduce accidents and assist drivers as a “copilot” before eventually taking over for them on some, and eventually all, of their journey as an “autopilot.”

 

As for how quickly the impacts of this technology will be felt, the adoption curves for any new technology look very similar to one another. For example, the first commercial mobile-phone network went live in the United States in 1983 in the Baltimore-Washington metropolitan area. At the time, phones cost about $3,000 and subscribers were scarce. Even several years later, coverage was unavailable in most of the country outside of dense urban areas. Today there are more mobile-phone subscriptions than there are people in the United States, and more than 300,000 mobile-phone towers connect the entire country. Low-end smartphones cost about $150. Vehicle technology is moving forward at a similar pace.

 

 

Read more...

Xilinx Customers Shape a Brilliant Future

by Xilinx Employee ‎07-15-2015 12:07 PM - edited ‎01-06-2016 01:49 PM (14,353 Views)

 

By Mike Santarini, Publisher, Xcell Journal

 

Six important emerging markets—video/vision, ADAS/autonomous vehicles, Industrial Internet of Things, 5G wireless, SDN/NFV and cloud computing—will soon merge into an omni-interconnected network of networks that will have a far-reaching impact on the world we live in. This convergence of intelligent systems will enrich our lives with smart products that are manufactured in smart factories and driven to us safely in smart vehicles on the streets of smart cities—all interconnected by smart wired and wireless networks deploying services from the cloud.

 

Xilinx Inc.’s varied and brilliant customer base is leveraging Xilinx All Programmable devices and software-defined solutions to make these new markets and their convergence a reality. Let’s examine each of these emerging markets and take a look at how they are coming together to enrich our world. Then we’ll take a closer look at how customers are leveraging Xilinx devices and software-defined solutions to create smarter, connected, and differentiated systems in these emerging markets to shape a brilliant future for us all.

 

 

Megatrends.jpg

 

IT STARTS WITH VISION: Vision systems are everywhere in today’s society. You can find cameras with video capabilities in an ever-growing number of electronic systems, from the cheapest mobile phones to the most advanced surgical robots to military and commercial drones and unmanned spacecraft exploring the universe. In concert, the supporting communications and storage infrastructure is quickly shifting gears from a focus on moving voice and data to an obsession with fast video transfer.

 

ADAS’ DRIVE TO AUTONOMOUS VEHICLES: If you own or have ridden in an automobile built in the last decade, chances are you have already experienced the value of ADAS technology. Indeed, perhaps some of you wouldn’t be here to read this article if ADAS hadn’t advanced so rapidly. The aim of ADAS is to make drivers more aware of their surroundings and thus better, safer drivers.

 

IIOT’S EVOLUTION TO THE FOURTH INDUSTRIAL REVOLUTION: The term Internet of Things has received much hype and sensationalism over the last 20 years—so much so that to many, “IoT” conjures up images of a smart refrigerator that notifies you when your milk supply is getting low and the wearable device that receives the “low-milk” notification from your fridge while also fielding texts, tracking your heart rate and telling time. These are all nice-to-have, convenience technologies. But to a growing number of people, IoT means a great deal more. In the last couple of years, the industry has divided IoT into two segments: consumer IoT for convenience technologies (such as nifty wearables and smart refrigerators), and Industrial IoT (IIoT), a burgeoning market opportunity addressing and enabling some truly major, substantive advances in society.

 

INTERCONNECTING EVERYTHING TO EVERYTHING ELSE: In response to the need for better, more economical network topologies that can efficiently and affordably address the explosion of data-based services required for online commerce and entertainment as well as the many emerging IIoT applications, the communications industry is rallying behind two related network topologies: software-defined networks and network function virtualization.

 

SECURITY EVERYWHERE: As systems from all of these emerging smart markets converge and become massively interconnected and their functionality becomes intertwined, there will be more entry points for nefarious individuals to do a greater amount of harm, affecting a greater amount of infrastructure and a greater number of people. The many companies actively participating in bringing these converging smart technologies to market realize the seriousness of ensuring that all access points in their products are secure. A smart nuclear reactor that can be accessed by a backdoor hack of a $100 consumer IoT device is a major concern. Thus, security at all points in the converging network will become a top priority, even for systems that seemingly didn’t require security in the past.

 

XILINX PRIMED TO ENABLE CUSTOMER INNOVATION: Over the course of the last 30 years, Xilinx’s customers have become the leaders and key innovators in all of these markets. Where Xilinx has played a growing role in each generation of the vision/video, ADAS, industrial, and wired and wireless communications segments, today its customers are placing Xilinx All Programmable FPGAs, SoCs and 3D ICs at the core of the smarter technologies they are developing in these emerging segments.

 

 

Note: This blog post has been excerpted from Mike Santarini’s far more detailed article in the special Megatrends issue of Xcell Journal (Issue 92) that has just been published. To read the full article, click here or download a PDF of the entire issue by clicking here.

 

 

 

Kudos to Customers: Xcell Journal Issue 92 now online

by Xilinx Employee on ‎07-13-2015 02:01 PM (11,845 Views)

 

By Mike Santarini, Publisher, Xcell Journal

 

The new special issue of Xcell Journal celebrates the ways in which Xilinx customers are enabling a new era of innovation in six key emerging markets: vision/video, ADAS/autonomous vehicles, Industrial IoT, 5G, SDN/NFV and cloud computing. Each of these segments is bringing truly radical new products to our society. And as the technologies advance over the next few years, the six sectors will converge into a network of networks that will bring about substantive changes in how we live our lives daily.

 

Vision systems are quickly becoming ubiquitous, having long since evolved beyond their initial niches in security, digital cameras and mobile devices. Likewise undergoing rapid and remarkable growth are advanced driver assistance systems (ADAS), which are getting smarter and expanding to enable vehicle-to-vehicle communications (V2V) for autonomous driving and vehicle-to-infrastructure (V2I) communications that will sync vehicles with smart transportation infrastructure to coordinate traffic for an optimal flow through freeways and cities.

 

These smart vision systems, ADAS and infrastructure technologies form the fundamental building blocks for emerging Industrial Internet of Things (IIoT) markets like smart factories, smart grids and smart cities—all of which will require an enormous amount of wired and wireless network horsepower to function. Cloud computing, 5G wireless and the twin technologies of software-defined networking (SDN) and network function virtualization (NFV) will supply much of this horsepower.

 

Converged, these emerging technologies will be much greater than the sum of their individual parts. Their merger will ultimately enable smart cities and smart grids, more productive and more profitable smart factories, and safer travel with autonomous driving.

 

 

Note: This blog post has been excerpted from the full article in the new Xcell Journal, Issue 92. To read the full article, click here or download a PDF of the entire issue by clicking here.

If you visited Xilinx.com today, you will have noticed a very different representation of Xilinx. The Web site change represents Xilinx’s latest step forward in an ongoing corporate transformation into a new era of offerings. The change also brings focus on six key “Megatrends” that are changing the world we live in:

 

 

 

 

Xilinx participates in all of these Megatrends and you’ll find a substantial amount of new material about them in the redesigned Xilinx.com Web site. You’ll also discover a significant amount of new information about the design and development solutions that are uniquely Xilinx, based on the company’s All Programmable (hardware, software, I/O programmability) device technology (FPGAs, SoCs, and MPSoCs) and a combination of industry-standard and unique software tools in the growing SDx family of development environments that support rapid, high-level development using Xilinx devices.

 

You will also discover extensive and intensely interesting coverage of these Megatrends in the latest, just-published edition of Xcell Journal. Click here to read the new edition of Xcell Journal online or here to download the PDF.

 

Note: If you usually access the Xcell Daily blog using the link on the Xilinx.com home page, it has moved. You’ll now find it under the “About” drop-down tab at the top of every Web page on Xilinx.com. So no matter where you are on the site, Xcell Daily is just a couple of clicks away.

 

About the Author
  • Be sure to join the Xilinx LinkedIn group to get an update for every new Xcell Daily post!

Steve Leibson is the Director of Strategic Marketing and Business Planning at Xilinx. He started as a system design engineer at HP in the early days of desktop computing, then switched to EDA at Cadnetix, and subsequently became a technical editor for EDN Magazine. He’s served as Editor in Chief of EDN Magazine, Embedded Developers Journal, and Microprocessor Report. He has extensive experience in computing, microprocessors, microcontrollers, embedded systems design, design IP, EDA, and programmable logic.