
Adam Taylor’s MicroZed Chronicles, Part 124: The Avnet MicroZed Embedded Vision Kit (EVK)

Xilinx Employee


By Adam Taylor


So far on this journey we have focused mostly on how we can generate video with the Xilinx Test Pattern Generator IP block on the Avnet MicroZed Embedded Vision Kit’s (EVK) 7-inch VGA touch display. We looked at how we can keep this video chain entirely within the Zynq-7000 SoC’s PL (programmable logic), or involve the Zynq SoC’s PS (processing system) and use memory-mapped processing to run algorithms via the VDMA.


Now is the point we have all been waiting for. We are going to build the Vivado design for the EVK that uses the Python 1300C camera.





Frame grab from the completed camera design, looking out of my office window



Rather helpfully, Avnet provides several IP blocks on its GitHub account that help us build our video system and, even more helpfully, the account contains software and drivers we can use as well.



  • Python 1300C Driver – triggers and receives the image data from the sensor
  • Python 1300C SPI Interface – required because the sensor’s SPI payload length is not a power of 2
  • HDMI Output – interfaces to the HDMI output device



(There are also instructions describing how to build a complete EVK project from a few Tcl scripts, but I want to build a very similar design by hand in Vivado to demonstrate the concepts a little more thoroughly.)


Along with these IP modules, we will also require the following from the standard Xilinx IP Library:



  • AXI Interconnects – one for the Zynq SoC’s High Performance (HP) ports and one for the General Purpose (GP) ports
  • AXI VDMA – provides high-bandwidth direct memory access between memory and AXI4-Stream video peripherals
  • Video In to AXI4-Stream – converts parallel video into an AXI Stream
  • Color Filter Array Interpolation – required to convert from the raw video format to RGB
  • RGB to YCrCb Color-Space Converter – converts the color space to 4:4:4 YCbCr
  • Chroma Resampler – converts from 4:4:4 to 4:2:2 YCbCr
  • Video Timing Controller – generates the video output timing
  • AXI4-Stream to Video Out – converts the AXI Stream back to parallel video for output
  • Processor System Reset blocks – reset the AXI network
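To see why the Chroma Resampler earns its place in the chain, it helps to compare the per-pixel bandwidth of the color formats involved. A minimal sketch, assuming 8 bits per component (the actual bit depth is set in each IP block's customization GUI):

```python
# Average bits per pixel for common YCbCr subsampling schemes,
# assuming 8 bits per component.

def bits_per_pixel(subsampling: str, bits_per_component: int = 8) -> float:
    """Return the average number of bits needed per pixel."""
    components = {
        "4:4:4": 3.0,   # Y, Cb and Cr stored for every pixel
        "4:2:2": 2.0,   # Y every pixel; Cb/Cr shared between 2 pixels
        "4:2:0": 1.5,   # Y every pixel; Cb/Cr shared between 4 pixels
    }
    return components[subsampling] * bits_per_component

print(bits_per_pixel("4:4:4"))  # 24.0
print(bits_per_pixel("4:2:2"))  # 16.0
```

Resampling from 4:4:4 to 4:2:2 therefore cuts the downstream video bandwidth by a third with little visible loss, which is why it is done before the output stage.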



Of course, we will also require a Zynq PS block configured with one GP master port, one HP slave port, FCLK0 at 75MHz, FCLK1 at 150MHz, and FCLK2 at 200MHz. We will also need a Clocking Wizard to generate the very accurate 108MHz and 200MHz clocks; FCLK0 and FCLK1 are not so demanding. The Clocking Wizard is needed for the 108MHz clock because neither the I/O, DDR, nor ARM PLL can achieve that exact frequency with the clocking scheme used by the Zynq SoC’s PS.
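We can sanity-check that last claim with a quick search. Each PS FCLK is the selected PLL frequency divided by two integer dividers; the PLL frequencies below are typical values for a 33.333MHz PS reference clock and are an assumption here, not taken from the EVK design:

```python
# Sketch of the Zynq-7000 PS FCLK generation: FCLKx = PLL / (DIV0 * DIV1)
# with integer 6-bit dividers. PLL frequencies below (ARM, DDR, IO) are
# assumed typical values for a 33.333 MHz PS_CLK input.

def best_fclk(target_hz, pll_hz_list, max_div=63):
    """Return the closest FCLK frequency reachable with integer dividers."""
    best = None
    for pll in pll_hz_list:
        for d0 in range(1, max_div + 1):
            for d1 in range(1, max_div + 1):
                f = pll / (d0 * d1)
                if best is None or abs(f - target_hz) < abs(best - target_hz):
                    best = f
    return best

plls = [1_333_333_333, 1_066_666_667, 1_000_000_000]  # assumed PLL outputs
closest = best_fclk(108_000_000, plls)
print(f"closest achievable FCLK: {closest/1e6:.3f} MHz")
```

Under these assumptions the nearest achievable FCLK is well over 1MHz away from 108MHz, so a fabric Clocking Wizard (with its fractional capability) is the right tool for the pixel clock.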


The first thing we need to do is arrange the clocking structure. My first decision was to clock all of the slow AXI-Lite configuration interfaces from the slowest clock, FCLK0.


We need the 200MHz clock for the Python camera interface to work correctly; the image data is received over four high-speed LVDS channels, per this reference.
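To get a rough feel for the data rates on those lanes, we can estimate the raw payload. The figures below (1280×1024 frame, 10 bits per pixel, 60 frames/s) are typical Python 1300 numbers assumed for illustration, and the estimate ignores sync words, training patterns, and blanking overhead:

```python
# Rough payload estimate for the sensor's LVDS lanes (assumed figures:
# 1280x1024, 10-bit pixels, 60 fps, 4 lanes; overhead ignored).

WIDTH, HEIGHT = 1280, 1024
BITS_PER_PIXEL = 10
FPS = 60
LANES = 4

total_bps = WIDTH * HEIGHT * BITS_PER_PIXEL * FPS
per_lane_bps = total_bps / LANES

print(f"total payload : {total_bps/1e6:.1f} Mb/s")
print(f"per LVDS lane : {per_lane_bps/1e6:.1f} Mb/s")
```

Each lane carries on the order of 200Mb/s of payload before overhead, which is why the deserialization logic needs a fast, accurately generated clock.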



The Python image sensor generates an output image of 1280 pixels by 1024 lines, which requires a pixel rate of 108MHz. The Python camera interface and the Video In to AXI4-Stream block use this 108MHz clock as the video input clock. We also need a clock rate for the AXI Stream interface that ensures the required throughput.
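The 108MHz figure follows from the frame totals once blanking is included. The horizontal and vertical totals below are the standard VESA SXGA (1280×1024 @ 60Hz) timings, assumed here to match what the Video Timing Controller is configured with:

```python
# Where the 108 MHz pixel clock comes from: active frame plus blanking.
# H/V totals are the standard VESA SXGA@60 figures (assumed).

H_TOTAL = 1688   # 1280 active pixels + horizontal blanking
V_TOTAL = 1066   # 1024 active lines + vertical blanking
PIXEL_CLOCK_HZ = 108_000_000

refresh_hz = PIXEL_CLOCK_HZ / (H_TOTAL * V_TOTAL)
print(f"refresh rate: {refresh_hz:.2f} Hz")  # ~60 Hz
```

Running the totals backwards, 108MHz is the pixel rate that delivers a full blanked frame at roughly 60Hz.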


To make things simple and to reduce the required buffering, the AXI Stream clock must be at least equal to the pixel rate. However, we must consider the throughput of all modules within the processing chain. While most of the modules are capable of processing one pixel per clock, it is wise to have some margin. Consequently, I used 150MHz for the AXIS stream, which provides sufficient bandwidth for the frames we are transferring.
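The margin in that choice is easy to quantify. Assuming one pixel per stream beat, the check below compares the 150MHz stream clock against the 108MHz pixel rate:

```python
# Sanity check on the AXI-Stream clock choice: with one pixel per beat,
# the stream clock must meet or exceed the 108 MHz pixel rate.

PIXEL_RATE_HZ = 108_000_000
AXIS_CLOCK_HZ = 150_000_000

assert AXIS_CLOCK_HZ >= PIXEL_RATE_HZ, "stream clock below pixel rate"
headroom = AXIS_CLOCK_HZ / PIXEL_RATE_HZ - 1.0
print(f"throughput headroom: {headroom:.0%}")  # ~39%
```

Roughly 39% of headroom absorbs the occasional stall or multi-cycle operation in the processing modules without backing up the pipeline.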


With the clocking complete, the architecture becomes very straightforward and very similar to the systems we have previously created using the test pattern generator. However, the software becomes a little more complicated because we need to drive the Python image sensor safely. We will look at that software architecture next week.


Meanwhile, here is the input half of our Vivado-based video design using the Avnet EVK:




Input half of the Vivado design



And here is the output half of our Vivado-based video design using the Avnet EVK:




Output and PS half of the block diagram



Once we have looked at the software, I will show you how to create an SDSoC-based platform built upon this hardware and the Avnet Embedded Vision Kit.


The code is, as always, available on GitHub.


If you would like e-book or hardback versions of previous MicroZed Chronicles blogs, you can obtain them below.




  • First Year E-Book here
  • First Year Hardback here







  • Second Year E-Book here
  • Second Year Hardback here







You can also find links to all the previous MicroZed Chronicles blogs on my own website, here.

