Hyperspectral GigE video cameras from Photonfocus see the unseen @ 42fps for diverse imaging applications

by Xilinx Employee ‎06-18-2015 11:53 AM - edited ‎01-06-2016 02:00 PM (31,923 Views)

Photonfocus Hyperspectral Camera.jpg

 

The just-announced hyperspectral Photonfocus MV1-D2048x1088-HS02-96-G2 GigE video camera produces a 42-frame/sec video stream that samples 25 spectral pass bands from 600nm to 975nm (orange/yellow through near infrared), delivered as 10-bit grayscale video. This hyperspectral GigE video camera is based on an IMEC snapshot mosaic CMV2K-SSM5x5-NIR sensor, which in turn is based on the CMOSIS CMV2000 2048x1088-pixel CMOS HD image sensor.
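If you're wondering how a snapshot mosaic sensor extracts 25 bands from a single monochrome exposure, here's a minimal demultiplexing sketch in Python/NumPy. It assumes a repeating 5x5 mosaic tile; the band-to-pixel layout shown is purely illustrative, not IMEC's documented filter arrangement.

import numpy as np

def demosaic_5x5(raw):
    # raw: one 2048x1088 frame; each pixel samples one of 25 pass bands
    # arranged in a repeating 5x5 mosaic tile (layout assumed, see above)
    h, w = raw.shape
    h, w = h - h % 5, w - w % 5              # trim to whole mosaic tiles
    cube = np.empty((25, h // 5, w // 5), dtype=raw.dtype)
    for row in range(5):
        for col in range(5):
            cube[row * 5 + col] = raw[row:h:5, col:w:5]
    return cube

frame = np.random.randint(0, 1024, (1088, 2048), dtype=np.uint16)   # stand-in 10-bit data
print(demosaic_5x5(frame).shape)             # (25, 217, 409): ~409x217 pixels per band

The trade-off is visible in that output shape: each of the 25 bands ends up with roughly 1/25th of the sensor's spatial resolution.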

 

 

 


$1000 bounty! The Open Camera Project wants you to marry Raspberry Pi Camera, Zynq, Parallella

by Xilinx Employee ‎06-04-2015 10:37 AM - edited ‎06-04-2015 10:52 AM (28,155 Views)

One-man juggernaut Andreas Olofsson, the man behind Adapteva’s many-core Epiphany parallel processing chip and the Zynq-based Parallella supercomputer board, knows how to get things done: offer cold, hard cash. One of the many things Olofsson wants to see done with the Parallella board is image and vision processing and he’s come up with a rather simple way to get the hardware and drivers he needs to make that happen.

 

He’s offering you $1000 to come up with a way to connect the inexpensive Raspberry Pi Camera Module (single-unit price is $25 at Allied Electronics in the US with 2635 units in stock at the moment, or order it on Amazon.com) to the Parallella board and to create the drivers needed to turn Parallella into a vision-processing machine. To do this, you’ll need to connect the 5Mpixel camera module to the Parallella board’s Zynq SoC via its MIPI interface.

 

 

Raspberry Pi Camera Module.jpg

 

5Mpixel Raspberry Pi Camera Module

 

 

Need a head start? Got one for you. Here’s a Xilinx app note for connecting a MIPI camera module to the LVDS ports on a Zynq SoC: “XAPP894 D-PHY Solutions.” You’ll also want to read this Xcell Daily blog post: “How to Drive Multiple Live Cameras and Displays for Pennies—A Free IEEE Spectrum webinar.” And here's some nice analysis of the camera module's MIPI CSI-2 interface.

 

Good hunting!

 

On Thursday, June 18, you can watch a free, 1-hour Webinar hosted by IEEE Spectrum on developing machine-vision systems using the world-class VisualApplets graphical application programming environment—which greatly eases the development of all types of vision systems—and the new Avnet Smart Vision Development Kit based on the PicoZed SOM and the Xilinx Zynq Z-7015 SoC.

 

 

Avnet Smart Vision Dev Kit.jpg

 

 

The Webinar takes place at 11:00 am Eastern Time (US), 15:00 UTC/GMT but you need not watch it live. The Webinar will be available on demand on June 19 and will be available for a year.

 

Register here.

 

Note: The IEEE Educational Activities department is now offering participants who have attended an IEEE Spectrum Webinar the opportunity to earn PDHs (professional development hours). The certificate form is here but you’d best watch the Webinar first.

The Apical Spirit engine can create virtualized digital representations of important features in video frames at 30fps from 1080p60 HD video using as many as sixteen classifier models with an unlimited number of objects detected per classifier model. Minimum object size within the video frame is a relatively small 60x60 pixels. The only way to achieve this incredible detection rate is to use multipliers—a lot of multipliers. According to Apical’s VP of Product Applications Judd Heape, the Spirit engine uses 600 of the 900 multipliers in the programmable logic section of a Xilinx Zynq Z-7045 SoC running at 300MHz to operate in real time at the above video and detection frame rates. The design can scale to use more multipliers if more performance is required.
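As a back-of-the-envelope sanity check (my own arithmetic, not Apical's), here's what 600 multipliers at 300MHz buy you relative to the 1080p60 input pixel rate:

multipliers = 600
clock_hz = 300e6
macs_per_sec = multipliers * clock_hz              # 1.8e11, i.e. 180 GMAC/s at one MAC per multiplier per cycle

pixels_per_sec = 1920 * 1080 * 60                  # ~124 Mpixels/s for 1080p60
macs_per_pixel = macs_per_sec / pixels_per_sec
print(f"{macs_per_sec / 1e9:.0f} GMAC/s, ~{macs_per_pixel:.0f} MACs available per input pixel")

That works out to well over a thousand multiply-accumulates available for every incoming pixel, which is the kind of budget that running sixteen classifier models in real time demands.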

 

By comparison, a GPU is 30x slower and consumes 10x the power according to Heape. “This is only possible in an FPGA,” he says. No other off-the-shelf part can handle the computation load.

 

 

Apical Spirit Engine.jpg 

 

Read more...

Peel me a grape—and then watch the da Vinci surgical robot suture the grape back together

by Xilinx Employee ‎05-21-2015 10:20 AM - edited ‎05-26-2015 01:53 PM (27,998 Views)

Intuitive Surgical has worked with Xilinx since 2003 to speed the delivery of increasingly advanced da Vinci robotic surgical system capabilities to operating rooms. Multiple generations of the da Vinci system have relied heavily on Xilinx devices, going all the way back to the days of the Virtex-II Pro FPGA, to deliver the speed and flexibility required from a finely controlled robotic surgical system. (Note: The latest 16nm Virtex UltraScale All Programmable devices are now many, many generations ahead of those 0.13μm Virtex-II Pro days.)

 

According to David Powell, Principal Design Engineer for Intuitive Surgical, “As we started using the Xilinx device, we discovered it to be quite a nice design platform—so nice, in fact, that follow-on platforms have evolved to employ dozens of Xilinx FPGAs in all of the main system components. Our first board to employ a Xilinx FPGA was up and running in two hours. After that, we found we could get a board up and running in just minutes—these kind of results are almost unheard of.”

 

If you want to know more about how Intuitive Surgical is using Xilinx All Programmable devices to speed product development, improve performance, and reduce systems costs (in horse racing, that’s called a “trifecta”), then take a look at “Medical Robotics Improve Patient Outcomes and Satisfaction.”

 

I’m more of a seeing is believing kind of guy, so here’s a 2-minute video showing a da Vinci surgical robot suturing up the skin of a grape—which someone has obviously peeled—and there’s a surprise ending you won’t want to miss:

From the article about Intuitive Surgical in Xcell Journal, Issue 77:

 

Powell pointed to a close partnership with Xilinx’s technical staff, sales force and executives as another key to success. “We know Xilinx devices backwards and forwards now, and this really helps us make a difference in many lives,” he said. “It always comes back to the patients. We hear from people every day who tell us how a new procedure changed or saved their life. That’s what motivates us to deliver the best technology.”

 

Do you need to handle multiple image-sensor families when developing Smarter Vision systems? Here’s how

by Xilinx Employee ‎05-19-2015 10:07 AM - edited ‎05-28-2015 07:01 AM (26,269 Views)

The typical Embedded Vision system must process video frames, extract features from those processed frames, and then make decisions based on the extracted features. Pixel-level tasks can require hundreds of operations per pixel, which adds up to hundreds of GOPS (giga operations/sec) when you’re talking about HD or 4K2K video. Contrast that with the frame-based tasks, which “only” require millions of operations per second but involve more complex algorithms. You need a hardware implementation for the pixel-level tasks, while fast processors can handle the more complex frame-based tasks. That’s how Mario Bergeron, a Technical Marketing Engineer from Avnet, opened his presentation at last week’s Embedded Vision Summit 2015 in Santa Clara, California.
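To put "hundreds of GOPS" into concrete numbers, here's a quick calculation (my own illustration, not a figure from Bergeron's talk; the ops-per-pixel value is an assumption):

def pixel_gops(width, height, fps, ops_per_pixel):
    # giga-operations per second demanded by a pixel-level task
    return width * height * fps * ops_per_pixel / 1e9

print(pixel_gops(1920, 1080, 60, 300))   # 1080p60 at 300 ops/pixel: ~37 GOPS
print(pixel_gops(3840, 2160, 60, 300))   # 4K2K p60 at 300 ops/pixel: ~149 GOPS

Frame-based tasks, by contrast, run once per frame instead of once per pixel, which is why a fast processor can keep up with them even though each decision is algorithmically more involved.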

 

 

Bergeron Embedded Vision Summit 2015 fig 1.jpg

 

 


Gaze tracking makes jump from assistive technology niche to mainstream with the help of a Zynq SoC

by Xilinx Employee ‎05-18-2015 02:59 PM - edited ‎05-26-2015 01:43 PM (31,654 Views)

Applications that help people with special needs are a particular pleasure to blog about, and this post is all about using technology to help people overcome tremendous challenges. The technology is gaze tracking, as embodied in the EyeTech Digital Systems AEye eye tracker. This technology performs a seemingly simple task: figure out where someone is looking. The measuring techniques have been known since 1901. Implementation? Well, that’s taken more than 100 years of development, and EyeTech has been at the forefront of this work for almost two decades. Here’s the current gaze-tracking process flow used by EyeTech:

 

 

 Eyetech Gaze Tracking Process Flow.jpg

 

 

 

Originally, EyeTech used commercial analog video cameras and PCs to create a “Windows mouse” that could be controlled with nothing more than eye positioning. EyeTech’s eye-tracking technology determines gaze direction from pupil position and 850nm IR light reflections from the human cornea. The major markets for this technology were originally assistive-technology applications for disabled users who needed help to interact more fully with the world at large. These disabilities are caused by numerous factors including ALS, cerebral palsy, muscular dystrophy, spinal cord injuries, traumatic brain injuries, and stroke. Eye-tracking technology makes a large, qualitative difference in the lives of people affected by these challenges. There’s an entire page full of video testimonials to the transformative power of this technology on the EyeTech Web site.
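The core of pupil/corneal-reflection gaze tracking is compact enough to sketch: the vector between the pupil center and the IR glint on the cornea is mapped to screen coordinates through a calibration polynomial. Here's a minimal version of that mapping step in Python (a common textbook formulation with made-up calibration numbers, not EyeTech's actual algorithm):

import numpy as np

def gaze_point(pupil, glint, coeffs_x, coeffs_y):
    # pupil, glint: (x, y) image coordinates from the IR camera
    # coeffs_x, coeffs_y: 2nd-order polynomial coefficients from a per-user calibration
    dx, dy = pupil[0] - glint[0], pupil[1] - glint[1]
    terms = np.array([1, dx, dy, dx * dy, dx * dx, dy * dy])
    return float(terms @ coeffs_x), float(terms @ coeffs_y)

# hypothetical calibration result and one pupil/glint measurement
cx = np.array([960, 35, 2, 0.1, 0.05, 0.05])
cy = np.array([540, 2, 60, 0.1, 0.05, 0.05])
print(gaze_point((412, 305), (398, 310), cx, cy))   # estimated on-screen (x, y)

The hard part, of course, is everything upstream of this mapping: finding the pupil and glint robustly in every frame, which is where the processing load lands.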

 

However important the assistive-technology market may be, it’s relatively small, and Robert Chappell, EyeTech’s founder, realized that the technology could have far more utility for a much larger user base if he could reduce the implementation costs, size, and power consumption. Here were Chappell’s goals:

 

  • Stand-alone operation (no PC needed)
  • “Compact” size
  • Low power (< 5W)
  • Low cost (< $200)
  • Superior eye-tracking capability
  • Multi-OS support
  • Field upgradeable
  • Reasonable development time and costs

 

These are not huge hurdles for typical embedded systems but when your algorithms require a PC to handle the processing load, these goals for an embedded version present some significant design challenges. No doubt, Chappell and his team would have used a microcontroller if they could have found a suitable device with sufficient processing horsepower. But with the existing code running on PC-class x86 processors, shrinking the task into one device was not easy.

 

Chappell learned about the Xilinx Zynq SoC at exactly the right time and it seemed like exactly the right type of device for his project. The Zynq SoC’s on-chip, dual-core ARM Cortex-A9 MPCore processors could run the existing PC-based code with a recompilation and deliver an operable system. Then, Chappell’s team could gradually move the most performance-hungry tasks to the Zynq SoC’s on-chip PL (programmable logic) to accelerate sections of the code. Porting the code took two years and the team size varied from two to four engineers working part time on the project.

 

Ultimately, the project resulted in a product that can track a gaze at frame rates ranging from 40 to 200+ frames/sec. Many gaze-tracking applications can use the slower frame rates, but certain applications, such as testing for brain injuries, require the faster frame rate for an accurate result.

 

Here’s a photo of the resulting AEye pc board:

 

 

Eyetech AEye Module with Zynq Z-7020.jpg

 

 

This is a fairly small board! A Zynq Z-7020 SoC measures 17mm on a side and the board is only slightly taller than the Zynq SoC package. Note the US dime shown on the right of the above image for a size comparison. Here’s a hardware block diagram of the AEye board:

 

 

Eyetech AEye Module Block Diagram.jpg

 

 

 

And here’s how EyeTech has apportioned the tasks between the Zynq SoC’s PS (processor system) and PL:

 

 

Eyetech Gaze Tracking Task Allocation.jpg

 

 

Chappell notes that the availability of a high-performance PS and PL in the Zynq SoC made for an ideal rapid-development environment because the boundary between the hardware and software is not rigid. The ability to move tasks from the PS to the PL is what permitted the design team to achieve better than 200 fps frame rates.

 

How mainstream could this gaze-tracking technology get? How about one such eye tracker per car to help fight driver fatigue; chemically induced inattention; and distraction from cell phones, tablets, and the like? If the technology proves effective, insurance companies may soon be lobbying for this feature to be made mandatory in new cars. Science fiction? Just watch this video from Channel 3 TV news in Mesa, AZ.

 

 

(Note: This blog is a summary of a presentation made by Robert Chappell and Dan Isaacs of Xilinx at last week’s Embedded Vision Summit 2015, held in Santa Clara, CA.)

 

The best way to demo SoC IP for video? Cadence says “Xilinx”

by Xilinx Employee ‎05-15-2015 04:06 PM - edited ‎05-26-2015 01:54 PM (23,902 Views)

What does Cadence do when it wants to demo its IVP image/video processor and MIPI IP? It builds a Xilinx-based FPGA emulation platform, of course. (It takes way too long and costs far too much to build an SoC for a demo.) Pulin Desai of Cadence was at this week’s Embedded Vision Summit 2015 in Santa Clara, California and I captured this quick video of the Cadence IVP in action, performing real-time face detection and sending the resulting annotated video stream to an LCD using the company’s MIPI IP.

The Cadence demo platform is based on Xilinx All Programmable silicon including two Artix-7 FPGAs and a third device hidden under a heat sink and fan. The Cadence MIPI IP is physical IP, so it is implemented as a small custom IC on a small red daughtercard for this demo.

 

Note: You can implement MIPI interfaces directly with nothing more than a few resistors using Xilinx devices; see “Swipe these Low Cost FPGA-based MIPI DSI and CSI-2 Interfaces for Video Displays and Cameras.”

 

 

VectorBlox Matrix Processor IP for FPGAs accelerates image, video, and other types of processing

by Xilinx Employee ‎05-15-2015 03:04 PM - edited ‎05-26-2015 01:55 PM (25,783 Views)

You know what it’s like when you connect with a professor who’s really, really good at explaining things? That’s how I felt talking to Guy Lemieux, who is both the CEO of VectorBlox, an embedded supercomputing IP vendor, and a Professor of Electrical and Computer Engineering at the University of British Columbia. We met at this week’s Embedded Vision Summit in Santa Clara, California where I got a fast education in matrix processor IP in general and a real-time demo of the VectorBlox MXP Matrix Processor IP core. (See the video below).

 

The VectorBlox MXP is a scalable, soft matrix coprocessor that you can drop into an FPGA to accelerate image, vision, and other tasks that require vector or matrix processing. If your system has a lot of such processing to handle and needs real-time performance, you’ve got three choices—design paths you might take:

 

  1. HDL design using Verilog or VHDL
  2. High-level synthesis using a C-to-gates compiler like Xilinx Vivado HLS
  3. A vector coprocessor to boost performance

 

Path number 1 is the traditional path taken by hardware designers since HDLs became popular at the end of the 1980s. In those early days, nascent HDL compilers weren’t that great at generating hardware, commonly described as having poor QoR—Quality of Results. Many designers back then either felt or said outright, “I'll give you my schematics when you pry them from my cold, dead hands.”

 

You don’t see many systems being designed with schematics these days. Systems have gotten far too complicated and HDLs represent a far more suitable level of abstraction. Tools change with the times.

 

First-generation high-level synthesis tools, as embodied in Synopsys’ Behavioral Compiler, met with similar resistance and they didn’t go very far. However, design path number two has become viable as HLS compilers have improved. You can now find a growing number of testimonials to the effectiveness of such tools, like this one from NAB 2014.

 

Design path number three has the compelling allure of a software-based, quick-iteration design approach. Software compilation remains faster than HDL-based hardware compilation followed by placement and routing but depends on using a processor with appropriate matrix acceleration—not really the purview of the usual RISC suspects.

 

Matrix processing is exactly what the VectorBlox MXP is designed to do.


This week at the Embedded Vision Summit, TeraDeep demonstrated real-time video classification from streaming video using its deep-learning neural network IP running on a Xilinx Kintex-7 FPGA, the same FPGA fabric you find in a Zynq Z-7045 SoC. Image-search queries running in data-center servers usually rely on CPUs and GPUs, which consume a lot of power. Running the same algorithms on a properly configured FPGA can reduce the power consumption by 3x-5x according to Vinayak Gokhale, a hardware engineer at TeraDeep, who was running the following demo in the Xilinx booth at the event:

Note that this demo can classify the images using as many as 40 categories simultaneously without degrading the real-time performance.

Among the many demos at this week’s Embedded Vision Summit, held at the Santa Clara Convention Center, was a Zynq-based development workflow using MathWorks’ Simulink and HDL Coder to create a fully operational, real-time pedestrian detector based on the HOG (Histogram of Oriented Gradients) algorithm. The model for this application was developed entirely in MathWorks’ Simulink, and the company’s HDL Coder generated the HDL code for implementing the HOG algorithm’s SVM (support vector machine) classifier in the programmable logic section of a Xilinx Zynq SoC. The Xilinx Vivado Design Suite converted the HDL into a hardware implementation for the Zynq SoC.

 

This design takes real-time HD video, processes the video in the Zynq SoC’s programmable-logic implementation of the SVM classifier, and passes the results back to the Zynq SoC’s dual-core ARM Cortex-A9 MPCore processor, which annotates the video stream and then outputs the result.
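If you want to prototype the same HOG-plus-linear-SVM pedestrian detector in software before targeting hardware, OpenCV ships a pretrained version of it. Here's a minimal sketch (standard OpenCV usage, not the MathWorks/HDL Coder flow itself; "frame.jpg" is a placeholder input):

import cv2

hog = cv2.HOGDescriptor()                        # default 64x128 pedestrian detection window
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.jpg")                  # placeholder: one frame of the video
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)   # annotate, as the Zynq PS does
cv2.imwrite("annotated.jpg", frame)

In the demo, the sliding-window SVM evaluation (the expensive part) lives in the Zynq SoC's programmable logic, while the annotation step stays in software on the ARM cores.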

 

Here’s a video of the demo, presented by MathWorks’ Principal Development Engineer Steve Kuznicki at the Embedded Vision Summit:

Enabling a JPEG 2000 Network for Professional Video

by Xilinx Employee ‎05-05-2015 11:46 AM - edited ‎05-26-2015 01:44 PM (26,454 Views)

BarcoSilex Logo.jpg 

By Jean-Marie Cloquet, Manager, Image Processing Division, Barco Silex

 

 

(Excerpted from the latest issue of Xcell Journal)

 

 

 

Because of its superior quality, JPEG 2000 has emerged as the standard of choice for the compression of high-quality video, including the transport of video in the contribution networks of television broadcasters. As a result, suppliers of video equipment have started adding JPEG 2000 encoders and decoders to a variety of transport solutions, supporting various interfaces and sometimes even using proprietary protocols.

 

JPEG 2000 supersedes the older JPEG standard and offers many advantages over its predecessor or other popular formats such as MPEG. By 2004, JPEG 2000 had become the de facto standard format for image compression in digital cinema through the Hollywood-backed Digital Cinema Initiatives (DCI) specification. The possibility of visually lossless compression makes JPEG 2000 ideal for security, archiving, and medical applications.
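To experiment with JPEG 2000's rate control in software, Pillow exposes the OpenJPEG encoder. Here's a minimal sketch (assuming your Pillow build includes JPEG 2000 support; "frame.png" and the 20:1 rate target are placeholders, and the lossless default reflects my understanding of the library's behavior):

from PIL import Image

img = Image.open("frame.png")                    # placeholder source frame

# ~20:1 compression, often visually lossless for natural video frames
img.save("frame_20to1.jp2", quality_mode="rates", quality_layers=[20])

# default settings use the reversible wavelet, i.e. mathematically lossless, at a larger file size
img.save("frame_lossless.jp2")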

 


Wide-angle lenses usually create barrel or pincushion distortion, and this distortion creates time-consuming challenges for image-stitching applications. A new Ultra-Low-Latency Distortion Correction hardware IP block from RFEL corrects these distortions in real time with the minimum theoretical latency for raster-based imaging. The Ultra-Low-Latency Distortion Correction IP is available now for Xilinx Artix-7 and Kintex-7 FPGAs and Zynq SoCs. Here’s an example showing a distorted input image and the corrected output image from this IP:

 

 

RFEL Distortion Correction IP input image.jpg

 

 

Distorted wide-angle image in need of barrel-distortion correction

 

 

 

 

RFEL Distortion Correction IP corrected output image.jpg

 

 

Corrected wide-angle image generated by RFEL Ultra-Low-Latency Distortion Correction IP
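For comparison, here's what the equivalent correction looks like in software using the standard radial (Brown) lens-distortion model in OpenCV. This is a minimal sketch with made-up intrinsics and distortion coefficients; it illustrates the math, not RFEL's IP or its latency behavior:

import cv2
import numpy as np

img = cv2.imread("wide_angle.jpg")               # placeholder distorted input
h, w = img.shape[:2]

# hypothetical camera intrinsics and distortion coefficients (k1, k2, p1, p2, k3)
K = np.array([[0.8 * w, 0, w / 2],
              [0, 0.8 * w, h / 2],
              [0, 0, 1]], dtype=np.float64)
dist = np.array([-0.30, 0.09, 0.0, 0.0, 0.0])    # negative k1 gives barrel distortion

corrected = cv2.undistort(img, K, dist)
cv2.imwrite("corrected.jpg", corrected)

A hardware implementation performs essentially the same per-pixel remap, but computes it on the fly as the raster streams through, which is how the latency can be held to a few video lines rather than a full frame.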

 

 

RFEL is a Certified member of the Xilinx Alliance Program.

Embedded Vision Summit Logo 2.jpg

With embedded vision suddenly becoming a ubiquitous technology in a breathtakingly short time, you may ask yourself, “How do I work this?” Right now, most people think it’s IMPOSSIBLE to build cost-effective, vision-based technology into most embedded equipment.

 

 

They are very, very wrong.

 

 

It’s not impossible—in fact it’s not even that hard thanks to low-cost lenses and image sensors—and incorporating Smarter Vision into many systems greatly enhances their effectiveness by making them far more aware of their surroundings and their operators. The real secret to doing this is to get help now from vendors that already know how to make intelligent, low-cost vision systems work in a variety of end applications. Xilinx is one such vendor that has considerable experience and many successful customers with production vision-based systems already working in the field.

 

Answers are to be had at the Embedded Vision Summit next month, May 12, at the Santa Clara Convention Center, where you can learn what works and then apply what you learn. Normally, the price of admission is $249 for the full Summit or $99 for entry to the Technology Showcase alone. However, Xilinx is participating in the Summit, so you can qualify for a fabulous 20% discount simply by using the promo code EVS15XLX. Click here to register and be sure to stop by the Xilinx booth, if only to say “thanks.”

 

 

A Dual-MIPI FMC board was among the many new products jointly introduced by fidus and inrevium at this month’s NAB 2015. The board is a design aid for system developers and experimenters interested in working with the MIPI CSI-2 and DSI D-PHY specifications. The TB-FMCL-MIPI FMC card plugs into an LPC FMC connector, commonly found on many Xilinx FPGA and Zynq SoC eval boards, and level-translates between an FPGA’s or SoC’s LVDS and low-speed CMOS pins and MIPI CSI-2 and DSI D-PHY ports at rates up to 2.5Gbps per lane using Meticom MC20901 (CSI-2) and MC20902 (DSI) translator ICs. Here’s a block diagram of the board:

 

 

 

fidus MIPI FMC Board.jpg

 

 

 

 

And here’s a photo of the board:

 

 

fidus MIPI FMC Board photo.jpg

 

 

Last week at NAB 2015, I stopped by the Barco Silex booth to see what was new. What I saw was this golden statue from the Academy of Television Arts & Sciences. It’s a Technology and Engineering Emmy Award for Barco Silex’s work on the JPEG2000 video-compression standard.

 

 

 Barco Silex Emmy 2015.jpg

 

 

The award recognizes Barco Silex and several other technology companies in the Video Services Forum (VSF) for their work on the advancement of interoperability for JPEG2000 video-compression systems. (See “Barco Silex accepts Technical Emmy at CES for JPEG2000 compression IP—Available for Xilinx All Programmable devices.”)

 

Barco Silex JPEG2000 IP cores are available for use with Xilinx 7 series All Programmable devices including Virtex-7, Kintex-7, Artix-7 FPGAs and Zynq SoCs as well as low-end Spartan-6 FPGAs.

 

 

PathPartner Hybrid HEVC/H.265 Video Decoder for Zynq SoC optimizes performance through HW/SW partitioning

by Xilinx Employee ‎04-21-2015 02:14 PM - edited ‎05-26-2015 01:45 PM (25,610 Views)

Last week at NAB 2015, PathPartner demonstrated an ultra-low-latency hybrid HEVC/H.265 video decoder that optimizes performance by intelligently distributing the decoder’s task blocks between one of the on-chip ARM Cortex-A9 processors in the Zynq SoC’s Processor System (PS) and the on-chip programmable logic (PL) elements. The PS implements the HEVC decoder’s parsing module and the PL executes the computationally intensive task blocks. This distributed, hybrid decoder architecture is only possible because of the immense data bandwidth available between the Zynq SoC’s PS and PL—thanks to the on-chip AXI interconnect fabric, which allows real-time transfer of the video-stream data back and forth between PS and PL.

 

Task-block allocation within the hybrid HEVC/H.265 decoder looks like this:

 

 

PathPartner HEVC H.265 Video Decoder for Zynq SoC.jpg

 

PathPartner Hybrid HEVC/H.265 Video Decoder task partitioning

 

 

The PathPartner HEVC/H.265 video decoder fully complies with the HEVC Main Profile, Level 4.1 as specified in the ITU-T H.265 and ISO/IEC 23008-2 standards, and it can decode a 1080p30 video stream in real time on a Zynq Z-7045-2 SoC.

 

 

The advent of 4K video and the development of 12G-SDI, which can carry 4K video over one coax, are creating a niche for converters that turn one 12G-SDI stream into four 3G-SDI streams. At last week’s NAB 2015, Fidus showed me a small board based on a Xilinx Kintex-7 FPGA that does exactly that. Here’s a photo of that board:

 

 

Fidus 12G-SDI to Quad 3G-SDI Converter Board.jpg

 

 

 

Just a day later at the show, Fidus came by the Xilinx booth showing the board packaged in a nice case:

 

 

Fidus 12G-SDI to Quad 3G-SDI Converter.jpg

 

 

Now there’s certainly a need for this converter, but there’s also a need for a converter that goes the other way—converting four 3G-SDI streams into one 12G-SDI stream—because many 4K cameras currently source video over four 3G-SDI ports. Minor board layout changes to swap in SerDes receivers for SerDes transmitter pairs on the 3G side and some FPGA reprogramming are all that’s needed (plus a different silkscreen for the case) because the design already has a 12G-SDI output, located in the upper left corner of the above image of the encased product. This ease in changing the product from one function to another is a hallmark of FPGA-based design.

 

 

Last week’s NAB 2015 show introduced me to a high-speed serial digital video interface, previously unknown to me, called V-By-One HS. I observed this interface in action at the inrevium/fidus booth where I saw a demo of a DisplayPort 1.2a to V-By-One transmitter and receiver implemented with two inrevium ACDC (Acquisition, Contribution, Distribution, and Consumption) 1.0 base boards based on Xilinx Kintex-7 FPGAs, a couple of V-By-One FMC cards, and inrevium’s V-By-One HS IP core. Here’s a photo of the setup at NAB 2015:

 

 

Fidus Inrevium Display Port to V-by-One Video Converter 2.jpg

 

 


Got a heavy-duty 8K or 4K video hardware project? Pressed for development time? Need to get to market before the competition? The inrevium/fidus team has a development platform for you. It’s called, quite simply, the 8K/4K Video Development Platform (formal part number TB-KU-060-ACDC8K) and it’s based on a Xilinx Kintex UltraScale KU060 All Programmable FPGA. There’s also 4Gbytes of DDR4 SDRAM on the board along with four SFP+ cages for optical modules and seven (!) FMC connectors of varying capabilities.

 

Here’s a photo of the board shot last week at NAB 2015 in Las Vegas:

 

 

Fidus Inrevium Kintex UltraScale Development Board.jpg

 

inrevium/fidus 8K/4K Development Platform for Xilinx Kintex UltraScale FPGA


Killer Artix-7 FPGA Hardware Video Development Platform unveiled at NAB 2015

by Xilinx Employee ‎04-20-2015 11:33 AM - edited ‎05-26-2015 01:57 PM (26,731 Views)

The inrevium/fidus booth was tucked way in the back of the North Hall of the Las Vegas Convention Center at NAB 2015 last week. It was packed with interesting development platforms for Xilinx All Programmable devices including this killer Artix-7 FPGA development platform called the ACDC 7. (The formal part number is TB-A7-200T-IMG.) This development product, like several more in the inrevium/fidus booth, is the result of an international collaboration between two Premier members of the Xilinx Alliance Program: inrevium, a division of Tokyo Electron Device Limited offering FPGA-based solutions for design teams, and high-end design house Fidus Systems. I’m going to devote several blog posts to these immensely interesting development boards, but this one’s about the ACDC 7 based on the Artix-7 FPGA.

 

Here’s a photo of the board shot last week at NAB 2015:

 

 

Fidus Inrevium Artix-7 Development Board.jpg

 

 

inrevium/fidus ACDC 7 Development Platform for Xilinx Artix-7 FPGA

 


OmniTek’s ULTRA 4K Video Tool Box: With great power comes great responsibility

by Xilinx Employee ‎04-17-2015 04:01 PM - edited ‎05-26-2015 01:58 PM (26,137 Views)

One of the most helpful things I saw for broadcast equipment developers this week at NAB 2015 was the OmniTek ULTRA 4K Video Tool Box, expressly designed to help you migrate your designs from HD to 4K video and from 3G-SDI to 12G-SDI digital video interfacing. The toolbox includes a multifaceted video generator, format converter, and analyzer. The toolbox generator can produce stills and video streams in all video standards from SD up to 4K from imported images, line patterns, zone plates, and combinations. The format converter provides automatic up/down/cross conversion between the full range of video input and output standards and includes multi-stream raster reconstruction for the various 4x3G (SQ & 2SI), 6G and 12G UHD video standards. The analyzer includes a multi-format picture viewer, physical layer analysis for SDI, status monitoring and error checking, gamut meters, CRC generation and checking, and video data analysis. The ULTRA 4K Tool Box hardware is based on a Xilinx Zynq Z-7045 SoC.
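A quick way to see what "4x3G (SQ & 2SI)" means in practice: both schemes split a 3840x2160 frame into four 1920x1080 sub-images, one per 3G-SDI link, but they slice it differently. Here's a small NumPy illustration (my simplified reading of the square-division and two-sample-interleave mappings, not OmniTek's implementation):

import numpy as np

uhd = np.arange(2160 * 3840).reshape(2160, 3840)       # stand-in for a 3840x2160 frame

# Square Division (SQ): four 1920x1080 quadrants
sq = [uhd[:1080, :1920], uhd[:1080, 1920:],
      uhd[1080:, :1920], uhd[1080:, 1920:]]

# Two-Sample Interleave (2SI): alternating two-sample pairs on alternating lines
pairs = uhd.reshape(2160, 1920, 2)                     # group samples into horizontal pairs
tsi = [pairs[r::2, c::2].reshape(1080, 1920) for r in (0, 1) for c in (0, 1)]

print(sq[0].shape, tsi[0].shape)                       # (1080, 1920) each, one per 3G-SDI link

Reconstructing the original raster from either mapping is exactly the multi-stream reassembly job the toolbox's converter has to perform.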


CoreEL—Masters of the Video Codec Universe show a variety of video codec IP running at NAB 2015

by Xilinx Employee ‎04-16-2015 01:53 PM - edited ‎05-26-2015 01:58 PM (25,973 Views)

A visit to the CoreEL Technologies booth at NAB 2015 was a real eye-opener for me with respect to the number of available video codecs. The two back walls of the booth were dedicated to large-screen video demos of several FPGA-based video codecs running on a variety of Xilinx FPGA Eval Kits including:

 

 

  • An HDp60 HEVC decoder running on a low-end Xilinx Artix-7 FPGA

 

CoreEL HDp60 HEVC Decoder based on Artix-7.jpg

 

 

 

  • A 4K H.264 decoder running on a Kintex-7 FPGA:

 

CoreEL 4K H.264 Decoder based on Kintex-7.jpg

 

 

 

  • A 4K H.264 codec and a super-low-latency HDp60 AVC intra-frame codec

 

CoreEL 4K H.264 Intra-Frame Codec on Kintex-7.jpg

 

 

 

In addition, CoreEL was showing an FPGA-based, packaged AV decoder module with integrated heat sink based on a low-end Xilinx Spartan-6 FPGA:

 

 

CoreEL AV Decoder Module.jpg

 

 

Because CoreEL’s video codecs are based on programmable FPGA technology, they’re highly configurable so they can be customized to meet specific requirements.

Last month, I-MOVIX announced its X10 UHD RF Ultra Motion super-slow-motion video camera system, which uses a wireless link between the camera and the processing unit to create a portable HD system that can shoot 3000 frames/sec at 720p or 2000 frames/sec at 1080i.

 

 

I-MOVIX X10 UHD RF super slow-motion portable video camera.jpg

 

 

At NAB 2015 this week, I confirmed that the new UHD RF camera system employs Vision Research’s Phantom high-speed digital video cameras driving a super-slo-mo processing box based on Xilinx Virtex-6 FPGAs over a wireless link, shown as a belt pack in the above image.

 

Note: For more information about I-MOVIX UHD camera systems, see “FPGA-based i-movix X10 UHD Ultra Slow-Motion System accepts 1000fps in 4K, 2000fps in HD.”

 

You may think of an SFP cage as an I/O port for optical modules, since that’s how it’s generally used. Embrionix sees things differently. The company’s line of pluggable Video SFP modules uses the SFP interface and form factor to create modular, configurable video and IP systems in a cost-efficient way. Members of the Embrionix Video SFP line include 12G-SDI, 6G-SDI, 3G-SDI, HD-SDI, and SD-SDI coaxial video interfaces and SDI-to-IP, HDMI, and composite converters. A key element contributing to the flexible nature of these video SFP modules is a Xilinx FPGA integrated directly into the module that performs both high-speed I/O conversion and video processing such as JPEG2000 compression and decompression in an extremely compact form factor.

 

Here’s a very short video demo shot at the Embrionix booth at this week’s NAB 2015 in Las Vegas. The demo shows I/O conversion from HDMI to SDI to SMPTE ST 2022 over Ethernet to SDI over fiber with some JPEG2000 compression thrown in, all performed by Embrionix IP.

 

 

TICO Lightweight Video Compression’s Superpowers win IABM Game Changer Award at NAB 2015

by Xilinx Employee ‎04-16-2015 08:52 AM - edited ‎05-26-2015 02:00 PM (26,650 Views)

The TICO Lightweight video compression scheme has a very important superpower: it can shoehorn a 12Gbps 4K video stream into a 10Gbps Ethernet port or even a 3Gbps 3G-SDI port with no perceptible loss in visual quality, making it possible to ship 4K video across existing, low-cost broadcast network infrastructure. This superpower makes it very easy for broadcasters to incorporate 4K video equipment into their existing network infrastructure. This week at NAB 2015, the IABM (International Association of Broadcasting Manufacturers) recognized the importance of TICO compression by presenting an IABM Game Changer Award to intoPIX for its TICO compression IP.
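The arithmetic behind that claim is straightforward (my own round numbers for illustration; exact payload rates depend on the video format):

link_rates = {"10GbE": 10.0, "3G-SDI": 3.0}        # usable link rates in Gbps, approximate
source_rate = 12.0                                  # Gbps, roughly a 12G-SDI 4Kp60 payload
for name, rate in link_rates.items():
    print(f"{name}: needs at least {source_rate / rate:.1f}:1 compression")

A lightweight, low-latency codec only has to deliver ratios in the 1.2:1 to 4:1 range to make 4K fit on those links, which is a far gentler target than the ratios long-GOP codecs chase.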


NGCodec demos HEVC H.265 Real-Time Encoder running on Kintex-7 FPGA at NAB 2015

by Xilinx Employee ‎04-15-2015 09:31 AM - edited ‎05-27-2015 04:45 PM (135,961 Views)

Here’s one of the really interesting demos at the Xilinx booth this week at NAB 2015. It’s a real-time HEVC H.265 I-frame video encoder from NGCodec, demonstrated by NGCodec’s CEO and co-founder Oliver Gunasekara. The short demo video below shows real-time encoding and also has a slick trick mode where you can see how the encoder is performing its angle prediction in real time. The HEVC codec is under development; motion estimation will be added in the future.

 

 

 

 

This demo is running on a Xilinx Kintex-7 FPGA. In the video, Gunasekara mentions that NGCodec is developing the full encoder for 4K video and targeting the bigger/better/faster/lower-power Kintex UltraScale FPGA.

 

 

Part of the vision for an IP-based broadcast video network involves the temporal synchronization of all network elements. This is especially important for live video with multiple feeds where all cameras and other video sources must synchronize to the same time. SMPTE ST 2022-6 High Bit Rate Media Transport over IP Network describes the encapsulation and de-encapsulation of video to and from IP networks using the Ethernet RTP (Real-Time Transport) protocol while SMPTE 2059-1 and -2 align the timing practices used in video production facilities with the IEEE 1588 PTP (Precision Time Protocol) standard. These standards are critical to realizing an all-IP broadcast video network.
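The heart of IEEE 1588 PTP is a four-timestamp exchange between master and slave; the offset and path-delay math fits in a few lines (this is the generic PTP calculation, not the internals of the Adeas cores):

def ptp_offset_and_delay(t1, t2, t3, t4):
    # t1: master sends Sync          t2: slave receives Sync
    # t3: slave sends Delay_Req      t4: master receives Delay_Req
    offset = ((t2 - t1) - (t4 - t3)) / 2     # slave clock error relative to the master
    delay = ((t2 - t1) + (t4 - t3)) / 2      # mean one-way path delay
    return offset, delay

# timestamps in nanoseconds (made-up example values)
print(ptp_offset_and_delay(1_000_000, 1_000_900, 1_050_000, 1_050_700))   # (100.0, 800.0)

Each slave repeatedly steers its local clock by the computed offset, which is what lets every camera and processing node in the plant agree on a common time base.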

 

In the following short video Antoine Hermans, CTO of Adeas, demonstrates SMPTE ST 2022 and 2059 IP cores running on Kintex-7 FPGAs aboard Xilinx KC705 Eval Kits. The live demo at this week’s NAB 2015 show in Las Vegas shows the two Xilinx eval kits synchronizing with a PC-based grandmaster to within about 10 pixels over the 10G Ethernet network.

 

 

 

 

Note: To see a vision of a future where all broadcast video runs over IP-based networks, see “IP-based Broadcast Video: A Vision from Thomas Edwards, VP of Engineering and Development at Fox Networks Engineering & Operations.”

 

Thomas Edwards has a vision for broadcast video in the year 2020. In that vision, he walks into the Fox Networks Broadcast Equipment Center and he sees no broadcast equipment. Instead, he sees a standard data center full of servers, Ethernet switches, and Ethernet cabling. Video travels over IP-based networks and video processing is virtualized. New cable channels appear with the swipe of a credit card.

 

Edwards is the VP of Engineering & Development for the Fox Networks Engineering and Operations Advanced Technology Group and his vision sits along the tracks running up ahead into the future. On those tracks is the Ethernet express train, which has been running along these tracks—faster and faster—for more than 30 years. During those three decades, Ethernet and IP-based networking have side-tracked nearly every other networking protocol in multiple markets as IP-based networking capabilities have expanded and gotten faster. Broadcast video, a very demanding application, is one of the few arenas yet to be converted but its time is at hand. In Edwards’ vision, broadcast video’s conversion to IP-based networking occurs over the next five years.

 

Here’s a very short video shot yesterday in the Xilinx booth at NAB 2015 where Edwards sketches out his vision:

 

 

Lunch, Lady Gaga, Hip Hop, and Helicopters in a Las Vegas parking lot—Welcome to NAB 2015

by Xilinx Employee ‎04-14-2015 07:09 AM - edited ‎01-07-2016 03:59 PM (25,445 Views)

As I write this, I’m eating a dry barbeque chicken sandwich at a table with six Chinese media moguls in the parking lot at the Las Vegas Convention Center (LVCC). I have no idea what my lunch companions are saying but that doesn’t matter because there’s a DJ playing Hip Hop music 20 feet away masking conversation in any language. I’m fortunate to be under a tent, which is shielding me from the blazing noon Nevada sun. You can feel the heat coming right through the fabric. The DJ switches to Lady Gaga while an executive from CBS broadcast distribution sits down in the chair next to me to enjoy a cheeseburger. Meanwhile, the Las Vegas Monorail rolls by, past the High Roller Observation Wheel—the Las Vegas interpretation of the London Eye. Welcome to NAB 2015—center of the universe for all things broadcast.

 

High Roller Wheel and Monorail at LVCC.jpg 

 

From the lunch tent at NAB 2015


About the Author
Be sure to join the Xilinx LinkedIn group to get an update for every new Xcell Daily post!

Steve Leibson is the Director of Strategic Marketing and Business Planning at Xilinx. He started as a system design engineer at HP in the early days of desktop computing, then switched to EDA at Cadnetix, and subsequently became a technical editor for EDN Magazine. He’s served as Editor in Chief of EDN Magazine, Embedded Developers Journal, and Microprocessor Report. He has extensive experience in computing, microprocessors, microcontrollers, embedded systems design, design IP, EDA, and programmable logic.