
Adam Taylor’s MicroZed Chronicles Part 149: Grabbing the Image—Interfacing to camera, Camera Link

by Xilinx Employee on 09-26-2016

 

By Adam Taylor

 

Having examined how we can quickly and easily develop image-processing cores using Vivado HLS, I thought it would be a good idea to examine how we can interface actual image sensors to the Zynq SoC so that we can obtain an image that we can process.

 

At a high level, we can break down the interface into one of two different categories:

 

  • Camera Interface – We wish to interface to the video output port of an existing camera. In this case, the output video may use a protocol like Camera Link, USB or GigE Vision.
  • Image Sensor Interface – We wish to interface directly to the image sensor. In this case, the interface may be a parallel bus or a high-speed LVDS bus depending upon the sensor chosen.

 

This is where the flexibility of the Zynq SoC really comes in handy. The ability to use the embedded peripheral cores in the Zynq SoC’s PS and the programmable I/O and logic in the Zynq SoC’s PL allow you to interface your design to any camera or any sensor and to create a tightly integrated system. The programmable nature of these interfaces means that you can use the Zynq SoC to create a vision platform for many varied camera and image-processing designs, and several commercial camera vendors have done exactly that.

 

If we are interfacing to a camera with a USB or GigE Vision video interface, we can use the I/O peripherals in the Zynq SoC’s PS. Images captured over these interfaces can then be routed via the central interconnect of the PS directly into attached DDR memory. Once the image is stored in memory, we can transfer the image from the SDRAM to the Zynq SoC’s PL for processing using VDMA over a high-performance AXI port.

 

Should the interface to the camera or the device use a lower-level I/O protocol, we can implement the required interface in the Zynq SoC’s PL. These lower-level interfaces typically provide frame and line valid signals along with pixel data. The way these signals are encoded varies, which adds some complexity to the design.

 

The simplest of these interfaces is a parallel CMOS interface, which provides frame-valid and line-valid signals along with the pixel values in a parallel form, as shown below:

 

 

[Image: Simple Parallel Video Interface]
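The capture logic for such a parallel interface can be modelled in software. The sketch below is a hypothetical behavioural model, not hardware code: each clock cycle delivers a (frame-valid, line-valid, pixel) sample, and pixels are kept only while both valid signals are asserted.

```python
def capture_frame(samples):
    """Collect pixels from (fv, lv, pixel) samples into lines."""
    lines, current = [], []
    for fv, lv, pixel in samples:
        if fv and lv:
            current.append(pixel)      # active pixel: store it
        elif current:                  # line_valid deasserted: line done
            lines.append(current)
            current = []
    if current:                        # flush the final line
        lines.append(current)
    return lines

# A tiny 2-line, 3-pixel-per-line frame with blanking between the lines
samples = (
    [(1, 1, p) for p in (10, 11, 12)] + [(1, 0, 0)] * 2 +
    [(1, 1, p) for p in (20, 21, 22)] + [(0, 0, 0)]
)
print(capture_frame(samples))  # [[10, 11, 12], [20, 21, 22]]
```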

 

 

However, as we increase the frame rate of the image sensor, a parallel CMOS output becomes challenging due to the increased signal rates, and we usually must use a serialized I/O approach such as Camera Link or LVDS.

 

Using either Camera Link or serialized LVDS requires that we de-serialize the channels to extract the required information. This involves recreating, inside the FPGA, the same parallel structure of pixel values and frame- and line-valid signals that we would get directly from a sensor with a parallel interface.

 

Camera Link comes in three configurations (Base, Medium, and Full) providing 2.04, 4.08, and 5.44 Gbps respectively. The Base configuration employs four serialized LVDS data channels and an LVDS clock running at 85MHz, transferring 24 bits of pixel data and 4 framing bits per clock cycle. The Medium and Full configurations each add another four LVDS data links, so the Full version uses 12 LVDS data links.
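The quoted data rates follow directly from the clock frequency and the pixel-data width of each configuration (counting pixel-data bits only, not the framing bits). A quick check:

```python
# Camera Link pixel-data rate = clock frequency x pixel-data bits per cycle
CLOCK_HZ = 85e6
DATA_BITS = {"Base": 24, "Medium": 48, "Full": 64}

for name, bits in DATA_BITS.items():
    gbps = CLOCK_HZ * bits / 1e9
    print(f"{name}: {gbps:.2f} Gbps")
# Base: 2.04 Gbps, Medium: 4.08 Gbps, Full: 5.44 Gbps
```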

 

Camera Link achieves these high data rates by serializing data at a rate of 7:1 and transmitting it over four LVDS links; a fifth LVDS link provides the clock, as shown below:

 

 

[Image: LVDS Camera Link Serialization]

 

 

When we receive data over a Camera Link interface, we need to de-serialize the five LVDS lines and extract the pixel data in the correct order.

 

We know the serialization is 7:1, so we can use one of the MMCMs (mixed-mode clock managers) provided by the Zynq SoC to generate a clock running at 7x the Camera Link clock frequency. However, we still need a framing reference to properly align the received data. Luckily, in the case of Camera Link, we can use the Camera Link clock itself as the framing reference.

 

To convert the four LVDS data channels from serial to parallel, we can use the ISERDES2 provided in the Zynq SoC’s I/O structure. Using the ISERDES2, we can provide the parallel clock and the higher speed serial clock, and generate a parallel output of as many as 8 bits. (If necessary, we can chain ISERDES2 blocks together for larger parallel outputs.) We need seven outputs for the Camera Link interface, as the serialization is 7:1, so we can de-serialize the interface using only one ISERDES2 for each of the four LVDS data channels, plus one for the clock.
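Functionally, each ISERDES2 chain on a data channel is doing nothing more than grouping the incoming serial bit stream into 7-bit parallel words, one word per pixel clock. A behavioural sketch of that operation (the bit order shown is illustrative; the real mapping follows the Camera Link bit assignment):

```python
def deserialize_7to1(bits):
    """Group a serial bit stream into 7-bit parallel words (MSB first)."""
    words = []
    for i in range(0, len(bits) - len(bits) % 7, 7):
        word = 0
        for b in bits[i:i + 7]:
            word = (word << 1) | b     # shift each serial bit in
        words.append(word)
    return words

stream = [1, 0, 1, 0, 1, 0, 1,   0, 0, 0, 1, 1, 1, 1]
print(deserialize_7to1(stream))  # [85, 15] i.e. 0b1010101, 0b0001111
```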

 

We use the ISERDES2 on the clock channel to provide the framing signal. When the received Camera Link clock signal has the correct relationship with the generated clock running at 7x the input frequency, the output of this ISERDES2 block will be the pattern “1100011”.

 

We can use a simple state machine to look for this pattern while incrementing or decrementing the phase of the high-speed clock until the correct pattern is detected. Once this pattern is detected, we can then extract the line, frame, and pixel-value data from the remaining four ISERDES2 blocks.
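The alignment search above can be sketched in software. This is a simplified model, not the hardware state machine itself: it slips the sampling offset one bit at a time until the 7-bit word from the clock channel matches “1100011”, which in hardware corresponds to stepping the MMCM phase (or issuing bitslips) until lock.

```python
# Expected Camera Link clock pattern once framing is correct
PATTERN = (1, 1, 0, 0, 0, 1, 1)

def find_alignment(serial_bits, max_slips=7):
    """Return the bit offset at which the clock pattern frames correctly,
    or None if no alignment is found within max_slips attempts."""
    for slip in range(max_slips):
        word = tuple(serial_bits[slip:slip + 7])
        if word == PATTERN:
            return slip                # locked: this offset frames the data
    return None                        # no lock achieved

# Clock channel sampled with an arbitrary misalignment
stream = list(PATTERN) * 4
misaligned = stream[4:]                # start sampling mid-pattern
print(find_alignment(misaligned))      # 3
```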

 

Here’s a block diagram of the Base Camera Link receiver design based on the above discussion:

 

 

[Image: Example Base Camera Link Receiver]

 

 

We can also take a similar approach to transmitting Camera Link data using an MMCM and OSERDES2 to perform the parallel-to-serial conversion.
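The transmit direction is simply the inverse operation: each 7-bit parallel word is shifted out MSB first at 7x the pixel clock, as the OSERDES2 would do. A behavioural sketch (again illustrative, not hardware code):

```python
def serialize_1to7(words):
    """Shift each 7-bit parallel word out as serial bits, MSB first."""
    bits = []
    for word in words:
        for shift in range(6, -1, -1):
            bits.append((word >> shift) & 1)   # emit one bit per serial clock
    return bits

print(serialize_1to7([85, 15]))
# [1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1]
```

Note that this is the exact inverse of the 7:1 de-serialization on the receive side, so serializing and then de-serializing a word stream returns the original words.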

 

While this example uses a Camera Link interface, the same general approach can be used for many serialized I/O applications that provide a signal we can use as a framing reference. Next week we will look at applications that do not provide a framing reference but instead provide a training pattern.

 

 

Code is available on GitHub as always.

 

If you want E-Book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E-Book here
  • First Year Hardback here

 

 

[Image: MicroZed Chronicles hardcopy]

 

 

 

  • Second Year E-Book here
  • Second Year Hardback here

 

 

 

[Image: MicroZed Chronicles Second Year]

 

 

 

All of Adam Taylor’s MicroZed Chronicles are cataloged here.

 

About the Author
Steve Leibson is the Director of Strategic Marketing and Business Planning at Xilinx. He started as a system design engineer at HP in the early days of desktop computing, then switched to EDA at Cadnetix, and subsequently became a technical editor for EDN Magazine. He has served as Editor in Chief of EDN Magazine, Embedded Developers Journal, and Microprocessor Report, and has extensive experience in computing, microprocessors, microcontrollers, embedded systems design, design IP, EDA, and programmable logic.