
Xilinx Brings Hot Stuff to Hot Chips

by Xilinx Employee on 08-16-2018 09:56 AM

 


 

 

Visit the upcoming Hot Chips 2018 industry conference in Silicon Valley, and you’ll see how dominant Xilinx is becoming in the market for high-performance chips and platforms.

 

The conference, which runs Aug. 19-21 at the Flint Center for the Performing Arts in Cupertino, Calif., will share details around a key element of our new adaptive compute acceleration platform (ACAP), deliver three talks on artificial intelligence – including one from the newly acquired DeePhi Tech – and feature an important keynote address by our CEO, Victor Peng. Peng’s talk, Adaptable Intelligence: the Next Computing Era, will reveal the next big thing in tech and Xilinx’s key role in delivering it. He’ll present this important message at 11:45 a.m. PT on Tuesday, Aug. 21.

 

Earlier that same day, Juanjo Noguera, engineering director for the Xilinx Architecture Group, will introduce the audience to a key component of the company’s new ACAP, which was unveiled only in March. The ACAP delivers a highly integrated multi-core compute system that can be programmed at both the hardware and software levels to adapt to the needs of a wide range of applications and workloads.

 

Noguera’s talk is titled HW/SW Programmable Engine: Increased Compute Density Architecture for Project Everest. It offers a first peek at one of the novel heterogeneous components of the forthcoming Everest product family built on the new ACAP platform, which promises orders-of-magnitude performance improvements over what’s available now. His presentation will be delivered at 9:45 a.m. PT on Tuesday, Aug. 21.

 

In another Xilinx presentation, Rahul Nimaiyar, director of Data Center and IP Solutions, will describe the deep neural network (DNN) processor for Xilinx FPGAs that is currently available in the Amazon Web Services (AWS) F1 instance. His talk, Xilinx Tensor Processor: An Inference Engine, Network Compiler + Runtime for Xilinx FPGAs, will be presented at 4 p.m. PT on Tuesday, Aug. 21.

 

Attendees can also expect a talk from the newest member of the Xilinx family, Beijing-based DeePhi Tech, which Xilinx acquired on July 18. Xilinx was impressed with DeePhi’s industry-leading capabilities in deep compression for machine learning and system-level network optimization. DeePhi will give a presentation titled The Evolution of Deep Learning Accelerators Upon the Evolution of Deep Learning Algorithms at 3:30 p.m. PT, also on Tuesday, Aug. 21.

 

Also at Hot Chips, Michaela Blott, principal engineer for Xilinx Research, will share her insights from the forefront of Xilinx research in a tutorial on architectures for deep neural nets called Deep Learning and Computer Architectures. The tutorial takes place at 2 p.m. PT on Sunday, Aug. 19.

 

By Willard Tu, Xilinx Senior Director, Automotive

 

While we haven’t always honked our own car horn as loudly as we should, Xilinx has a strong pedigree in automotive. Over more than 12 years, we’ve shipped over 40 million cumulative units to automakers and Tier 1 automotive suppliers. In the majority of recent deployments, Tier 1s use Xilinx devices to provide processing power for the camera and sensor systems they are developing for advanced driver-assistance systems (ADAS) and autonomous vehicles.

 

But why FPGA for these systems? We get that question often, so I’ve pulled together a list of five key reasons why automakers and Tier 1s are choosing Xilinx FPGA-based technology for their camera- and sensor-based driving systems.

 

1 - Ability to easily customize and differentiate – ASICs and GPUs are one-size-fits-all solutions. Because of the programmable nature of FPGAs, automakers can customize their chips to run proprietary image-processing algorithms, for example, enabling features that differentiate their models from the competition. Do you want a driver-monitoring camera that tracks both the driver’s eyes and head position? No problem. And what happens if your imager changes from 4MP to 8MP? Again, not a problem, because you’ll be readily able to customize.

 

2 - Open-box architecture – Other market solutions are effectively a “black box,” which doesn’t enable a Tier 1 or OEM to know what other capabilities are inside of it. With Xilinx technology, automotive OEMs see exactly what they get, and can customize their system to meet changing regulatory conditions, compliance with functional safety standards, etc. Automotive OEMs tell us they need to know what they are getting. The black-box design keeps them in the dark on this.

 

3 - Ability to position anywhere – The FPGA architecture is inherently thermally efficient, enabling OEMs to locate the devices anywhere in or on the vehicle: inside of the ADAS central processing unit, inside of the car or even on the windshield. It doesn’t matter.

 

4 - Scalability – Because Xilinx SoCs can be readily re-programmed and given additional processing power, it is easy for a Tier 1 or OEM to scale systems to meet needs for increased complexity, speed or capabilities. The architecture lets them add more programmable fabric as applications demand.

 

5 - Adaptability – The automotive industry today is moving quickly and imposing ever-changing requirements on automakers. For example, the European NCAP (New Car Assessment Programme) organization sets standards for safety—encompassing features like lane-keeping and auto-braking—that are updated every few months. Traditional chip architectures take one to two years to design and get to market (and even that is aggressive). OEMs and Tier 1s can adjust Xilinx devices on the fly, to meet changing needs.

 

We are proud of our steady growth in the automotive market, which is the result of these unique benefits of our flexible, programmable FPGA and SoC technology.

 

But the differentiators don’t stop with the five listed above. Watch this space for an upcoming post where we share two exciting, emerging automotive applications that are possible only with FPGA.

 


 

A unique combination of benefits makes Xilinx devices increasingly the choice of automotive Tier 1 suppliers and OEMs.

 

 

Xilinx, as one of the creators of field-programmable gate array (FPGA) technology for integrated-circuit design, has long embraced high-level synthesis (HLS), an automated design process that interprets a desired behavior in order to create hardware that delivers that behavior. Xilinx has just published a book that clearly explains the process of creating an optimized hardware design using HLS.

 

The book, “Parallel Programming for FPGAs,” by Stephen Neuendorffer, Principal Engineer at Xilinx, together with Ryan Kastner from UCSD and Janarbek Matai from Cognex, is a practical guide for anyone interested in building FPGA systems. It is of particular value to students in advanced undergraduate and graduate courses. But it can also be useful for system designers and embedded programmers already on the job.

 

The book assumes the reader has a working knowledge of C/C++ programming -- which is like assuming someone knows how to drive a car with an automatic transmission -- and assumes familiarity with other basic computer architecture concepts. The book also includes a significant amount of sample code, and any reader is strongly encouraged to fire up Vivado HLS and try the sample code out for themselves. Free licenses are available through the Vivado WebPACK Edition, or as a free 30-day trial of the Vivado System Edition.
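To give a flavor of the style of code the book works with, here is a minimal, hypothetical Vivado HLS-style function (illustrative, not taken from the book): a fixed-size dot product whose loop is pipelined with a pragma. A standard C++ compiler simply ignores the pragma, so the same code can also be tested functionally on a desktop.

```cpp
// Hypothetical HLS-style example in the spirit of the book's code.
// The PIPELINE pragma asks Vivado HLS to start a new loop iteration
// every clock cycle (initiation interval II=1); plain compilers ignore it.
const int N = 8;

int dot_product(const int a[N], const int b[N]) {
    int acc = 0;
dot_loop:
    for (int i = 0; i < N; ++i) {
#pragma HLS PIPELINE II=1
        acc += a[i] * b[i];
    }
    return acc;
}
```

In Vivado HLS, this function can be synthesized directly to hardware, and the pragma is one of the optimization directives the book discusses at length.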

 

The book also includes several textbook-like features that make it particularly valuable in a classroom setting. For instance, each chapter poses questions that challenge the reader and help solidify their understanding of the material as they read along. There are also associated projects that were developed and used in an HLS class taught at the University of California, San Diego (UCSD); UCSD will make the files for these projects available to instructors upon request. Each project is more or less associated with one chapter in the book and includes reference designs targeting FPGA boards that are distributed through the Xilinx University Program.

 

As you might expect, the complexity of each project increases as you read along, which means the book is intended to be read sequentially. This approach lets the reader see, for example, how HLS optimizations apply directly to a specific application, while each application further demonstrates how to write HLS code. The teach-by-example approach has drawbacks, however. Most applications require some additional background to give the reader a better understanding of the computation being performed, and truly understanding the computation often requires an extensive discussion of the application’s mathematical background. That may be off-putting to a reader who just wants to understand the basics of HLS, but Neuendorffer believes that such a deep understanding is necessary to master the code restructuring required to achieve the best design.

 

Although the chapters in “Parallel Programming for FPGAs” are arranged to be read sequentially and grow in complexity as the reader moves along, a more advanced HLS user can read an individual chapter if he or she only cares to understand a particular application domain. For example, a reader interested in generating a hardware accelerated sorting engine can skip ahead to Chapter 10 without necessarily having to read all of the previous chapters.
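As a hedged illustration of the kind of design the sorting chapter covers (hypothetical code, not the book’s actual example), here is a plain insertion sort on a fixed-size array. A chapter on hardware sorting typically starts from code like this and then shows how to restructure it, for instance into a network of insertion cells, so that HLS can pipeline it effectively.

```cpp
// Hypothetical starting point for a hardware sorting engine.
// The fixed array length LEN is an assumption for illustration;
// HLS designs generally require statically known bounds like this.
const int LEN = 8;

void insertion_sort(int A[LEN]) {
    for (int i = 1; i < LEN; ++i) {
        int item = A[i];
        int j = i;
        // Shift larger elements right to open a slot for 'item'.
        while (j > 0 && A[j - 1] > item) {
            A[j] = A[j - 1];
            --j;
        }
        A[j] = item;
    }
}
```

The data-dependent inner while loop is exactly what makes naive software code hard for HLS to pipeline, which is why the restructuring the book teaches matters.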

 

Xilinx strongly embraces HLS as an effective design process for developing FPGA-based hardware that works smartly and effectively in automotive, aerospace, satellite and other emerging fields. “Parallel Programming for FPGAs” will be an effective and essential guide for developing such products going forward. Keep it within reach on the desk in your lab.

 


 

Matrix-vector multiplication architecture with a particular choice of array partitioning and pipelining.

The pipelining registers have been elided and the behavior is shown at right.
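The design the figure describes can be sketched in HLS-style C++ as follows. This is a hypothetical reconstruction in the spirit of the book’s matrix-vector example, not its actual code: partitioning the matrix along its second dimension and the input vector completely gives the pipelined row loop parallel access to a full row of operands each cycle.

```cpp
// Hypothetical matrix-vector multiply matching the figure's description.
// ARRAY_PARTITION splits M (along dim 2) and V_in into separate registers
// so the pipelined row loop can read all SIZE operands at once;
// plain C++ compilers ignore the pragmas, so the code runs anywhere.
const int SIZE = 4;
typedef int BaseType;

void matrix_vector(BaseType M[SIZE][SIZE], BaseType V_in[SIZE],
                   BaseType V_out[SIZE]) {
#pragma HLS ARRAY_PARTITION variable=M dim=2 complete
#pragma HLS ARRAY_PARTITION variable=V_in complete
row_loop:
    for (int i = 0; i < SIZE; ++i) {
#pragma HLS PIPELINE II=1
        BaseType sum = 0;
col_loop:
        for (int j = 0; j < SIZE; ++j) {
            sum += M[i][j] * V_in[j];
        }
        V_out[i] = sum;
    }
}
```

With this choice of directives, the inner loop unrolls into a tree of multipliers and adders, which is the parallel behavior shown at the right of the figure.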

Xilinx ML Suite Adds “Award-Winning” to its Name

by Xilinx Employee 05-23-2018 12:18 PM - edited 06-06-2018 02:47 PM

 

By Dale Hitt, Director of Strategic Market Development at Xilinx

 

We are honored that Xilinx ML Suite received the 2018 Vision Product of the Year award for the best cloud technology at the Embedded Vision Summit this week in Santa Clara, California.

 

Xilinx ML Suite enables developers to easily integrate accelerated machine learning (ML) inference into their current applications. What is particularly innovative about the Xilinx ML Suite is that cloud users of ML inference can easily achieve more than an order of magnitude better performance and cost savings over a CPU-based infrastructure without custom development.

 

Traditional datacenter processors have not been able to keep up with compute-intensive workloads running in today’s cloud, such as machine learning, genomics, and video transcoding. The ML Suite delivers a dramatic improvement in machine learning inference performance with uniquely adaptable Xilinx technology.

 

Xilinx ML Suite already works on major cloud platforms such as Amazon EC2 F1 in numerous regions in the US and Europe. It supports popular machine learning frameworks such as Caffe, MXNet, and TensorFlow, as well as Python and RESTful APIs. Applications that use the ML Suite can be deployed in both cloud and on-premise environments.

 

In sum, Xilinx ML Suite delivers low-latency, high-throughput, and power-efficient machine learning inference for real world applications.

 

Cloud Technology Vision Product of the Year Award 

Xilinx’s Nick Ni (right) accepts the Cloud Technology Vision Product of the Year Award

from Jeff Bier, founder of the Embedded Vision Alliance.
Photo courtesy of EVA.