
Adam Taylor’s MicroZed Chronicles, Part 124: The Avnet MicroZed Embedded Vision Kit (EVK)

by Xilinx Employee on 03-28-2016 10:31 AM (42,869 Views)

 

By Adam Taylor

 

So far on this journey, we have focused mostly on generating video with the Xilinx Test Pattern Generator IP block and displaying it on the Avnet MicroZed Embedded Vision Kit’s (EVK) 7-inch VGA touch display. We looked at how we can restrict this video chain to the Zynq-7000 SoC’s PL (programmable logic) or involve the Zynq SoC’s PS (processing system) and use memory-mapped processing to run algorithms via the VDMA.

 

Now comes the moment we have all been waiting for: we are going to build the Vivado design for the EVK that uses the Python 1300C camera.

 

 

Image1.jpg

 

Frame grab from the completed camera design, looking out of my office window

 

 

Rather helpfully, Avnet has provided several blocks on their GitHub account that help us build our video system and, even more helpfully, the account contains software and drivers we can use as well.

 

 

  • Python 1300C Driver – Triggers and receives the image data from the sensor
  • Python 1300C SPI Interface – Required because the SPI payload length is not a power of two
  • HDMI Output – interfaces to the HDMI output device

 

 

(There are also instructions describing how to build a complete EVK project using a few Tcl scripts, but I want to build something similar from scratch in Vivado to demonstrate the concepts a little more thoroughly.)

 

Along with these IP modules, we will also require the following from the standard Xilinx IP Library:

 

 

  • AXI Interconnects – one for the Zynq SoC’s High Performance (HP) ports and one for the General Purpose (GP) ports
  • AXI VDMA – Provides high-bandwidth direct memory access between memory and AXI4-Stream type video target peripherals
  • Video In to AXIS – Converts parallel video into an AXI Stream
  • Color Filter Array interpolation – Required to convert from the Raw video format to RGB
  • RGB to YCrCb – Converts the color space to 4:4:4 YCbCr
  • Chroma Resampler – Converts from 4:4:4 to 4:2:2 YCbCr
  • Video Timing Controller – Generation of the video output timing
  • AXIS to Video Out – Conversion from AXI Stream to parallel video for output
  • Processor Reset Blocks – reset the AXI links
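As a toy illustration of what the Chroma Resampler in that list does, here is a minimal Python sketch of 4:4:4-to-4:2:2 subsampling on one video line (real hardware resamplers typically filter rather than simply decimate, so treat this as a model of the data format, not the exact algorithm):

```python
def chroma_444_to_422(line):
    """Toy 4:4:4 -> 4:2:2 subsampling of one video line.

    'line' is a list of (Y, Cb, Cr) tuples.  The output keeps every luma
    sample but shares one (Cb, Cr) pair between each pair of pixels,
    here by simple decimation of the chroma samples.
    """
    out = []
    for i in range(0, len(line), 2):
        y0, cb, cr = line[i]
        y1 = line[i + 1][0] if i + 1 < len(line) else y0
        out.append((y0, y1, cb, cr))  # two lumas share one chroma pair
    return out

line = [(16, 128, 128), (32, 120, 136), (48, 110, 140), (64, 100, 150)]
print(chroma_444_to_422(line))  # -> [(16, 32, 128, 128), (48, 64, 110, 140)]
```

The point of the conversion is bandwidth: 4:2:2 needs only 16 bits per pixel instead of 24, which halves the chroma traffic through the VDMA.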

 

 

Of course, we will also require a Zynq PS block configured with one GP master port, one HP slave port, FCLK0 at 75 MHz, FCLK1 at 150 MHz, and FCLK2 at 200 MHz. We will also need a Clocking Wizard to generate the two clocks that must be accurate: 108 MHz and 200 MHz (FCLK0 and FCLK1 are not so demanding). We use the Clocking Wizard for the 108 MHz clock because neither the I/O, DDR, nor ARM PLL can achieve the required frequency with the clocking scheme used by the Zynq SoC’s PS.
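To see why the PS clocks cannot supply the 108 MHz pixel clock, here is a small Python sketch that brute-forces the PS FCLK divider space. It assumes the common default PLL frequencies for a 33.333 MHz reference clock (ARM ≈ 1333.33 MHz, DDR ≈ 1066.67 MHz, I/O 1000 MHz) and two 6-bit integer dividers per FCLK; check these against your own PS configuration:

```python
# Hedged sketch: search the assumed Zynq PS PLL/divider space to show
# that no combination lands exactly on 108 MHz.
PLLS_MHZ = {"ARM": 4000 / 3, "DDR": 3200 / 3, "IO": 1000.0}

def closest_fclk(target_mhz):
    """Return (pll_name, freq_mhz, div0, div1) closest to the target."""
    best = None
    for name, pll in PLLS_MHZ.items():
        for d0 in range(1, 64):          # 6-bit divider, 1..63
            for d1 in range(1, 64):
                f = pll / (d0 * d1)
                if best is None or abs(f - target_mhz) < abs(best[1] - target_mhz):
                    best = (name, f, d0, d1)
    return best

name, freq, d0, d1 = closest_fclk(108.0)
print(f"closest: {freq:.3f} MHz from {name} PLL / ({d0} * {d1})")
```

On these assumed PLL frequencies, the nearest achievable FCLK is about 106.7 MHz, more than 1 MHz off, which is why the 108 MHz pixel clock comes from a Clocking Wizard (an MMCM/PLL in the PL) instead.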

 

The first thing we need to do is arrange the clocking structure. My first decision was to clock all the slow AXI-Lite configuration interfaces from the slowest clock, which is FCLK0.

 

We need the 200 MHz clock for the Python camera interface to work correctly; the image is received over four high-speed LVDS channels, per this reference.

 

 

The Python image sensor generates an output image of 1280 pixels by 1024 lines, which requires a pixel rate of 108 MHz. The Python camera interface and the Video In to AXIS block use this 108 MHz clock as the video input clock. We also need a clock rate for the AXI Stream interface that ensures the required throughput.
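As a sanity check on the 108 MHz figure, we can plug in the standard VESA DMT blanking numbers for 1280x1024 @ 60 Hz (the totals below are the published DMT values, not something specific to this design):

```python
# Pixel clock = total pixels per line * total lines per frame * frame rate.
h_total = 1688   # pixels per line, including horizontal blanking
v_total = 1066   # lines per frame, including vertical blanking
fps = 60

pixel_clock_hz = h_total * v_total * fps
print(f"{pixel_clock_hz / 1e6:.2f} MHz")  # ~108 MHz
```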

 

To make things simple and to reduce the required buffering, the AXI Stream clock must be at least equal to the pixel rate. However, we must consider the throughput of all modules within the processing chain. While most of the modules are capable of processing one pixel per clock, it is wise to have some margin. Consequently, I used 150MHz for the AXIS stream, which provides sufficient bandwidth for the frames we are transferring.
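The choice of 150 MHz can be sanity-checked with a little arithmetic (the 16-bits-per-pixel figure assumes the 4:2:2 YCbCr format produced by the Chroma Resampler in this pipeline):

```python
# Sketch of the headroom behind the 150 MHz AXIS clock choice.  With one
# pixel transferred per clock, the stream clock must be at least the
# pixel rate; the extra margin absorbs per-module processing overhead.
pixel_rate_mhz = 108.0
axis_clk_mhz = 150.0

margin = (axis_clk_mhz - pixel_rate_mhz) / pixel_rate_mhz
print(f"headroom over the pixel rate: {margin:.0%}")

# 4:2:2 YCbCr carries 16 bits (2 bytes) per pixel, so the VDMA must move
# roughly this much data in each direction:
bandwidth_MBps = pixel_rate_mhz * 2
print(f"VDMA bandwidth per direction: {bandwidth_MBps:.0f} MB/s")
```

Roughly 39% of headroom over the pixel rate, comfortably within what a single HP port can sustain for the write and read streams.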

 

With the clocking complete, the architecture becomes very straightforward and very similar to the systems we previously created using the test pattern generator. However, the software becomes a little more complicated because we need to drive the Python image sensor safely. We will look at the software architecture next week.

 

Meanwhile, here is the input half of our Vivado-based video design using the Avnet EVK:

 

 

Image2.jpg 

Input half of the Vivado design

 

 

And here is the output half of our Vivado-based video design using the Avnet EVK:

 

 

Image3.jpg

Output and PS half of the block Diagram

 

 

Once we have looked at the software, I will show you how to create an SDSoC-based platform built upon this hardware and the Avnet Embedded Vision Kit.

 

The code is available on GitHub, as always.

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E-book here
  • First Year Hardback here

 

 

 

 MicroZed Chronicles hardcopy.jpg

 

 

  • Second Year E-book here
  • Second Year Hardback here

 

 

 

 MicroZed Chronicles Second Year.jpg

 

 

You can also find links to all the previous MicroZed Chronicles blogs on my own website, here.

 

Comments
by Scholar vanmierlo
on 07-25-2017 07:57 AM

The Python image sensor generates an output image of 1280 pixels by 1024 lines, which requires a pixel rate of 108MHz.

 

Could you explain this a bit more? The datasheet states that the maximum Master clock frequency is 72MHz. Further, if I look in the XDC files I only find a period of 3.7ns (54MHz) for IO_PYTHON_CAM_clk_out_p. And I fail to see how the resolution requires this specific clock.

by Observer taylo_ap
on 07-25-2017 02:30 PM - last edited on 07-25-2017 02:45 PM by Xilinx Employee

Hi vanmierlo

 

The 108 MHz actually comes from the output timing over HDMI (DVI): for 1280 by 1024 @ 60 Hz, the pixel clock rate is 108 MHz.

 

For that reason the design also uses 108 MHz to recover the pixels from the serial links and feed them through the parallel-to-AXIS converter, which runs at 150 MHz to allow for processing overhead etc. The image sensor is not being clocked at 108 MHz. It transmits its pixels down serial links, which are then received and processed on the Zynq. It is that Zynq-based processing which extracts the pixels from the FIFO at 108 MHz.

 

Regards


Adam 

 

by Scholar vanmierlo
on 07-26-2017 02:33 AM

Ah, OK thank you. So the 108 MHz stems from the HDMI output requirement.

 

So the camera sends a clock of 540 MHz (not 54 MHz, sorry for the typo), which the ISERDES divides down to 108 MHz in a BUFR?

 

What I still don't understand is how this does not fail to meet timing on a -1 Zynq. When I set input delay constraints it always fails for me.

set_input_delay -clock vita_ser_clk 1.900 -max [get_ports IO_PYTHON_CAM_data*]
set_input_delay -clock vita_ser_clk 1.800 -min [get_ports IO_PYTHON_CAM_data*]
set_input_delay -clock vita_ser_clk 1.900 -max [get_ports IO_PYTHON_CAM_data*] -clock_fall -add_delay
set_input_delay -clock vita_ser_clk 1.800 -min [get_ports IO_PYTHON_CAM_data*] -clock_fall -add_delay

And what I'm also missing is how the IDELAY is configured and how bitslip is covered.

by Observer taylo_ap
on 07-26-2017 12:03 PM

Actually the ISERDES does not divide down the 540 MHz to get the 108 MHz; this is generated by the Clocking Wizard.

 

Re the ISERDES/IDELAY and their configuration, it is a little in-depth: the Python outputs a training sequence which is used by the camera receiver interface to align the input correctly.

 

The training sequence sent by the Python is 0x3A6:

Capture.PNG

The camera receiver block is also set to adjust the ISERDES/IDELAY until the training pattern is detected; the training pattern itself is set by the application SW.

 

Capture1.PNG

 

The SW application also sets the initial manual tap position (25 in the example I used); this is all performed over the AXI interface on the camera receiver block.
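The alignment step described here can be modeled as a toy bitslip search in Python (the framing is purely illustrative; the real receiver slips bits in the ISERDES hardware and adjusts IDELAY taps rather than doing any of this in software):

```python
TRAIN = 0x3A6  # 10-bit training word sent by the PYTHON sensor

def find_bitslip(bits):
    """Return the bit offset at which 10-bit words equal the training word."""
    for slip in range(10):
        word = 0
        for b in bits[slip:slip + 10]:
            word = (word << 1) | b
        if word == TRAIN:
            return slip
    return None

# Simulate a deserializer that started mid-word: 'offset' stale bits
# arrive before the first complete training word.
offset = 4
word_bits = [(TRAIN >> (9 - i)) & 1 for i in range(10)]  # MSB first
stream = word_bits[-offset:] + word_bits * 4
print(find_bitslip(stream))  # -> 4
```

Once the slip value that recovers 0x3A6 is found, word boundaries are known and real pixel data can be framed correctly.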

 

I suggest you take a little time to explore the source code of the camera receiver: if you click on the module in the block diagram and then click Edit in IP Packager, it will open the module’s source code with hierarchy and you can navigate through it. It is pretty simple to understand.

 

Re the constraints: the Avnet demo I based this on just constrained the clock; it did not set I/O delays. Are you sure you need them?

 

If you want to talk this through in more depth, shoot me an email at adam@adiuvoengineering.com.

 


About the Author
Steve Leibson is the Director of Strategic Marketing and Business Planning at Xilinx. He started as a system design engineer at HP in the early days of desktop computing, then switched to EDA at Cadnetix, and subsequently became a technical editor for EDN Magazine. He has served as Editor in Chief of EDN Magazine, Embedded Developers Journal, and Microprocessor Report. He has extensive experience in computing, microprocessors, microcontrollers, embedded systems design, design IP, EDA, and programmable logic.