
Displaying articles for: 04-09-2017 - 04-15-2017

 

Later this month at the NAB Show in Las Vegas, you’ll be able to see several cutting-edge video demos based on the Xilinx Zynq SoC and Zynq UltraScale+ MPSoC in the Omnitek booth (C7915). First up is an HEVC video encoder demo using the embedded, hardened video codec built into the Zynq UltraScale+ ZU7EV MPSoC on a ZCU106 eval board. (For more information about the ZCU106 board, see “New Video: Zynq UltraScale+ EV MPSoCs encode and decode 4K60p H.264 and H.265 video in real time.”)

 

Next up is a demo of Omnitek’s HDMI 2.0 IP core, announced earlier this year. The core consists of separate transmit and receive subsystems. The HDMI 2.0 Rx subsystem converts an HDMI video stream (up to 4Kp60) into an RGB/YUV video AXI4-Stream and places AUX data on an auxiliary AXI4-Stream. The HDMI 2.0 Tx subsystem converts an RGB/YUV video AXI4-Stream plus AUX data into an HDMI video stream. The IP features a reduced resource count (a small footprint in the programmable logic) and low latency.

 

Finally, Omnitek will be demonstrating a new addition to its OSVP Video Processor Suite: a real-time Image Signal Processing (ISP) Pipeline Subsystem, which can create an RGB video stream from raw image-sensor outputs. The ISP pipeline includes blocks that perform image cropping, defective-pixel correction, black-level compensation, vignette correction, automatic white balancing, and Bayer filter demosaicing.

 

 

 

Omnitek ISP Pipeline Subsystem.jpg

Omnitek’s Image Signal Processing (ISP) Pipeline Subsystem

 

 

 

 

Both the HDMI 2.0 IP and the ISP Pipeline Subsystem are already proven on Xilinx All Programmable devices, including all 7 series devices (Artix-7, Kintex-7, and Virtex-7), Kintex UltraScale and Virtex UltraScale devices, Kintex UltraScale+ and Virtex UltraScale+ devices, Zynq-7000 SoCs, and Zynq UltraScale+ MPSoCs.

 

 

 

By Adam Taylor

 

 

When we look at the peripherals in the Zynq UltraScale+ MPSoC’s PS (processing system), we see several that, while not identical to those in the Zynq-7000 SoC, perform similar functions (e.g., the Sysmon and the I2C controller). As you would expect, however, there are also peripherals that are brand new in the MPSoC. One of these is the Real-Time Clock (RTC), which will be the subject of my next few blogs.

 

The Zynq UltraScale+ MPSoC’s RTC is an interesting starting point in this examination of the on-chip PS peripherals because it can be powered from its own supply, PSBATT, so that the RTC keeps running when the rest of the system is powered down. If we want that feature to work in our system design, we need to include a battery that will provide this power over the equipment’s operating life.

 

 

 

Image1.jpg

The Zynq UltraScale+ MPSoC’s RTC needs an external battery to operate when the system is powered down

 

 

 

As shown in the figure above, the Zynq UltraScale+ MPSoC’s RTC is split into the RTC Core (dark gray rectangle) and the RTC Controller (medium gray “L”). The RTC Core resides within the battery power domain. The core contains all the counters needed to implement the timer functions, including a tick counter driven directly by the external crystal oscillator (see my earlier blog on clocking).

 

At the simplest level, the tick counter determines when a second has elapsed and increments a second counter. The operating system uses this second counter to determine the date relative to a reference point that it knows. The second counter is 32 bits wide, so it can count for about 136 years. If necessary, we can also set the seconds counter to a known value once the low-power domain is operational.
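As a concrete sketch of how software reads that counter, here is a minimal example using the standalone xrtcpsu driver supplied with the Xilinx BSP. The device ID comes from xparameters.h; the driver function names and its epoch handling are my recollection of the BSP sources, so verify them against your installation.

#include "xparameters.h"
#include "xstatus.h"
#include "xil_printf.h"
#include "xrtcpsu.h"   /* standalone RTC driver supplied with the Xilinx BSP */

/* Sketch: read the RTC's 32-bit seconds counter and convert it to a
 * calendar date using the driver's helper. Function names are from the
 * xrtcpsu driver as I recall them; check them against your BSP. */
int read_rtc_date(void)
{
    XRtcPsu Rtc;
    XRtcPsu_Config *Cfg = XRtcPsu_LookupConfig(XPAR_XRTCPSU_0_DEVICE_ID);

    if (Cfg == NULL || XRtcPsu_CfgInitialize(&Rtc, Cfg, Cfg->BaseAddr) != XST_SUCCESS)
        return XST_FAILURE;

    u32 Now = XRtcPsu_GetCurrentTime(&Rtc);  /* raw 32-bit seconds count */

    XRtcPsu_DT Date;
    XRtcPsu_SecToDateTime(Now, &Date);       /* seconds -> Y/M/D H:M:S */
    xil_printf("%02d/%02d/%04d %02d:%02d:%02d\r\n",
               Date.Day, Date.Month, Date.Year,
               Date.Hour, Date.Min, Date.Sec);
    return XST_SUCCESS;
}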

 

To ensure timing accuracy, the RTC provides a calibration register that corrects the static timing error caused by the crystal’s frequency tolerance, applying a correction every 16 seconds. At some point, your application code can measure the RTC’s timing inaccuracy against an external timing reference (a GPS-derived time, for example) and then use the computed error to discipline the RTC by setting the calibration register.
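Here’s a hedged sketch of the first half of that disciplining step: converting a measured frequency error into a tick count for the calibration register. The “ticks minus one” encoding and the separate fractional field are assumptions on my part; check them against the register reference (UG1087) before use.

#include <math.h>
#include <stdint.h>

/* Sketch: turn a measured frequency error (in PPM, positive = crystal
 * runs fast) into the integer ticks-per-second value for the calibration
 * logic. The "ticks minus one" encoding is an assumption; check UG1087. */
uint32_t rtc_calib_ticks(double nominal_hz, double error_ppm)
{
    double actual_hz = nominal_hz * (1.0 + error_ppm / 1e6);
    return (uint32_t)lround(actual_hz) - 1U;  /* e.g. 0x7FFF for an ideal 32768Hz crystal */
}

For a crystal measured 12 PPM fast, rtc_calib_ticks(32768.0, 12.0) still returns 32767; the residual 0.39 tick per second would be handled by the fractional calibration field.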

 

 

 

 

Image2.jpg

The Zynq UltraScale+ MPSoC’s RTC incorporates a calibration register for clock-crystal compensation

 

 

 

The RTC can generate an interrupt once every second when it’s fully powered. (There’s no need for clock interrupts while the RTC runs on battery power because there’s no operational processor to interrupt.) The ARM processor in the Zynq UltraScale+ MPSoC that controls the RTC should have this interrupt enabled so that it can service it correctly.

 

During board testing and commissioning, we can use an RTC register bit to clock the counters in place of the external crystal oscillator. This is useful when we want to verify that alarms fire at the programmed values without waiting the long time they would normally take at real oscillator rates. The alternative, using different alarm values during testing, requires a different build of the application software and so does not exercise the actual code.

 

When it comes to selecting an external crystal for the RTC, we should choose either a 32768Hz or a 65536Hz part. If the selected crystal has a 20 PPM tolerance, the RTC’s calibration feature lets us achieve better than 2 PPM accuracy with the 32768Hz crystal or better than 1 PPM with the 65536Hz crystal. (We get more calibration resolution with the faster crystal.)
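To see where those figures come from (my arithmetic, not a data-sheet number): correcting in whole ticks once per second gives a resolution of 1/32768 ≈ 30.5 PPM, but the fractional calibration applied every 16 seconds works in 1/16-tick steps, for an effective resolution of 1/(16 × 32768) ≈ 1.9 PPM with a 32768Hz crystal and 1/(16 × 65536) ≈ 0.95 PPM with a 65536Hz crystal.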

 

We need to use the RTC Controller to access and manage the RTC Core. The controller lets us control and interact with the RTC Core once the low-power domain is powered up. We also configure the interrupts and alarms within the RTC Controller. An alarm can be set to occur at any point within the 136-year range of the second counter.
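As a preview of the software we’ll write next time, arming a one-shot alarm with the same xrtcpsu driver might look like this (the function names and the one-shot/periodic argument are my reading of the driver; verify against your BSP):

/* Sketch: arm a one-shot alarm SecondsFromNow seconds in the future.
 * The alarm value is an absolute seconds count; the third argument
 * selects periodic (1) or one-shot (0) operation in the driver. */
void arm_alarm_in(XRtcPsu *Rtc, u32 SecondsFromNow)
{
    u32 Now = XRtcPsu_GetCurrentTime(Rtc);
    XRtcPsu_SetAlarm(Rtc, Now + SecondsFromNow, 0U);
}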

 

I should also note that battery power is only required when the PS main supplies are not powered; when the main supplies are up, the battery does not power the RTC Core. We can therefore use the ratio of time the system spends powered up to time it spends powered down to size the battery correctly.
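As a rough sizing sketch (the battery-domain current here is a placeholder; take the real figure from the Zynq UltraScale+ data sheet): if the RTC draws on the order of 1 µA from PSBATT and the equipment spends five years of its life powered down, that is about 43,800 hours × 1 µA ≈ 44 mAh of battery capacity, well within the reach of a small coin cell.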

 

In the next blog, we will look at the software we need to write to configure, control, and calibrate the RTC.

 

 

My code is available on GitHub as always.

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E-Book here
  • First Year Hardback here

 

 

MicroZed Chronicles hardcopy.jpg 

  

 

  • Second Year E-Book here
  • Second Year Hardback here

 

 

MicroZed Chronicles Second Year.jpg 

 

 

NMI, a non-profit organization dedicated to improving electronic engineering and manufacturing in the UK, is organizing a one-day machine-vision event for May 18 titled “Implementing Machine Vision with FPGA & SoC Platforms.” MBDA Missile Systems is hosting the event at its Stevenage facility in the UK. (That’s roughly midway between London and Cambridge for those of us who are geographically challenged.)

 

Key themes for the event will include: OpenCV with FPGAs and SoCs, ADAS, Robotic Guided Vision/Drones, Industry 4.0, Defense, and Machine Learning.

 

Register here.

 

 

As a follow-on to last month’s announcement that RFEL had supplied the UK’s Defence Science and Technology Laboratory (DSTL) with two of its Zynq-based HALO Rapid Prototype Development Systems (RPDS), RFEL has now announced that DSTL has contracted with a three-company team to develop an adaptive, real-time, FPGA-based vision platform “to solve complex defence vision and surveillance problems, facilitating the rapid incorporation of best-in-class video processing algorithms while simultaneously bridging the gap between research prototypes and deployable equipment.” The three-company team comprises team leader Plextek, RFEL, and 4Sight Imaging.

 

The press release explains, “This innovative work draws together the best aspects of two approaches to video processing: high performance, bespoke FPGA processing supporting the computationally intensive tasks, and the flexibility (but lower performance) of CPU-based processing. This heterogeneous, hybrid approach is possible by using contemporary system-on-chip (SoC) devices, such as Xilinx’s Zynq devices, that provide embedded ARM CPUs with closely coupled FPGA fabric. The use of a modular FPGA design, with generic interfaces for each module, enables FPGA functions, which are traditionally inflexible, to be dynamically re-configured under software control.”

 

 

RFEL HALO RPDS.jpg

HALO Rapid Prototype Development Systems (RPDS)

 

 

 

 

  • For more information about the broad range of hardware, software, and development-tool technologies for vision-system development in the Xilinx reVISION stack, click here.

 

 

Gilles Garcia, Xilinx’s Communications Business Lead, recently appeared on TelecomTV in connection with the ETSI 5G Network Infrastructure Summit, held in Sophia Antipolis just outside of Nice, France on April 6. TelecomTV’s Director of Content asked Gilles about the 5G networking challenges he’s seeing as Xilinx works with the major 5G infrastructure equipment suppliers. Gilles answered with three major groups of challenges:

 

  • Standards evolution: According to 3GPP, you should expect continued changes to the existing 5G standards as the industry gains experience with 5G network build-outs.

 

  • Power: It continues to be hard to achieve the required 5G network performance levels within the available power envelopes.

 

  • Ripple effect: Adoption of 5G networking standards requires backhaul and fronthaul improvements and introduces all-new concepts to the mobile network, such as machine learning at the mobile edge to automatically accommodate the ebb and flow of usage.

 

 

Here’s the TelecomTV video interview:

 

 

 

 

 

By Adam Taylor

 

 

Having introduced the Aldec TySOM-2 FPGA Prototyping Board, based on the Xilinx Zynq SoC, and the face-detection application running on it, I thought it would be a good idea to examine the application’s architecture in more detail.

 

The face-detection example uses one Blue Eagle camera, which is connected to the Aldec FMC-ADAS card. The processed frames showing the detected face are output via the TySOM-2 board’s HDMI port. It is worth pointing out that the application running on the TySOM-2 board (face detection in this case) is implemented in software. The Zynq PL (programmable logic) hardware design provides the capability to interface with the camera, to share video frames with the Zynq PS (processing system) through the DDR SDRAM, and to drive the display output.

 

Any application could be implemented, not just face detection. It could be object tracking. It could be corner detection. It could be anything. This is one of the things that makes development of image-processing systems on the Zynq so powerful: we can use the same base platform on the TySOM-2 board and customize the application in software. Of course, we can also use the Xilinx SDSoC development environment to accelerate parts of the algorithm into the TySOM-2 platform’s remaining programmable-logic resources to increase performance.

 

The Blue Eagle camera transmits its video stream over an FPD-Link III link, which uses a high-speed, bidirectional CML (Current-Mode Logic) connection to transfer the image data. An FPD-Link III receiving device (a TI DS90UB914Q-Q1 FPD-Link III SerDes) on the ADAS FMC implements the camera interface. This device is configured for the application at hand using the I2C peripheral in the Zynq SoC’s PS, and it presents video to the Zynq PL in parallel format: the parallel data bits, HSync, VSync, and a pixel clock.
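Configuring the deserializer amounts to a handful of I2C register writes from the PS. A minimal sketch using the Zynq’s XIicPs driver is below; the 7-bit slave address and the register/value pair are placeholders, so take the real ones from the DS90UB914Q-Q1 data sheet and your board’s strapping. The XIicPs instance is assumed to be already initialized (XIicPs_CfgInitialize, XIicPs_SetSClk).

#include "xiicps.h"   /* Zynq PS I2C driver */

#define SERDES_I2C_ADDR  0x60U   /* placeholder 7-bit address */

/* Sketch: write one register in the FPD-Link III deserializer.
 * Byte 0 is the register address, byte 1 the value to write. */
s32 serdes_write_reg(XIicPs *Iic, u8 Reg, u8 Value)
{
    u8 Buf[2] = { Reg, Value };
    return XIicPs_MasterSendPolled(Iic, Buf, 2, SERDES_I2C_ADDR);
}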

 

 

Image1.jpg 

 

 

We need to process the frames and store them in the Zynq PS’ DDR SDRAM using VDMA (Video Direct Memory Access) so that the Zynq SoC’s ARM Cortex-A9 processors can access the image frames in DDR memory. Implementing this requires several IP blocks that come as standard IP in Vivado. These IP blocks transfer data using the AXI4-Stream protocol (AXIS).

 

Therefore, the first step is to convert the received parallel-format video into an AXIS stream. Once the video is in the correct format, we can use the VDMA IP block to transfer video data to and from the Zynq PS’ DDR SDRAM, where the software running on the Zynq SoC’s ARM Cortex-A9 processors can access the frames and implement the application algorithms.
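A minimal sketch of the write-channel (camera-to-DDR) setup using the standalone XAxiVdma driver appears below. The device ID, the 1080p 24-bit RGB geometry, and the frame-buffer address are assumptions; the read channel feeding the output path is configured the same way with XAXIVDMA_READ.

#include <string.h>
#include "xparameters.h"
#include "xaxivdma.h"

/* Sketch: configure and start the VDMA write (S2MM) channel so that
 * incoming AXIS video frames land in a DDR frame buffer at FrameBufAddr. */
int start_capture_vdma(UINTPTR FrameBufAddr)
{
    XAxiVdma Vdma;
    XAxiVdma_Config *Cfg = XAxiVdma_LookupConfig(XPAR_AXI_VDMA_0_DEVICE_ID);

    if (Cfg == NULL || XAxiVdma_CfgInitialize(&Vdma, Cfg, Cfg->BaseAddress) != XST_SUCCESS)
        return XST_FAILURE;

    XAxiVdma_DmaSetup Setup;
    memset(&Setup, 0, sizeof(Setup));
    Setup.VertSizeInput     = 1080;       /* lines per frame */
    Setup.HoriSizeInput     = 1920 * 3;   /* bytes per line, 24-bit RGB */
    Setup.Stride            = 1920 * 3;   /* line-to-line spacing in bytes */
    Setup.EnableCircularBuf = 1;          /* cycle continuously through frame stores */
    Setup.FrameStoreStartAddr[0] = FrameBufAddr;

    if (XAxiVdma_DmaConfig(&Vdma, XAXIVDMA_WRITE, &Setup) != XST_SUCCESS ||
        XAxiVdma_DmaSetBufferAddr(&Vdma, XAXIVDMA_WRITE,
                                  Setup.FrameStoreStartAddr) != XST_SUCCESS)
        return XST_FAILURE;

    return XAxiVdma_DmaStart(&Vdma, XAXIVDMA_WRITE);
}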

 

Unlike previous examples we have examined, which used a single AXI High Performance (AXI HP) port, this example uses two of the Zynq SoC’s AXI HP interface ports, one in each direction. This configuration requires a slightly more complicated DMA architecture because we need two VDMA IP blocks. Within the Zynq PL, most IP blocks use AXI4, while the Zynq SoC’s PS ports implement AXI3. Therefore, we need an AXI Interconnect or a protocol converter to translate between the two standards.

 

 

Image2.jpg

 

 

 

Using two interfaces makes no performance difference compared to a single AXI HP interface because the S0 and S1 AXI HP ports used in this configuration are multiplexed down to the M0 port on the memory interconnect and finally connect to the S3 port on the DDR SDRAM controller. This is shown below in the interconnect diagram from UG585, the Zynq SoC TRM.

 

 

 

Image3.jpg 

 

 

Once the VDMA is implemented, the design then performs color-space conversion and chroma resampling, and finally passes the stream to an on-screen display module. After that, the video stream is converted from AXIS back to parallel video, which is then output to the HDMI transmitter.

 

With this hardware platform completed, the next step is to write the software that creates the application. For this we can choose between SDK and SDSoC, which adds the ability to accelerate some of the application’s algorithmic functions in programmable logic. Because this example is implemented on the Zynq Z-7100 SoC, which has a significant amount of free on-chip programmable-logic resource remaining after implementation of the base platform, we’ll use SDSoC for this example. We will look at the software architecture next time.
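To give a flavor of what SDSoC acceleration looks like (this is an illustrative function of my own, not Aldec’s face-detection code): you mark a C/C++ function for hardware acceleration and describe how its arguments stream, for example:

/* Sketch: a candidate function for SDSoC acceleration. The pragma tells
 * the tools both arrays are accessed sequentially, so they can be moved
 * over AXI4-Stream DMA rather than random-access ports. */
#pragma SDS data access_pattern(src:SEQUENTIAL, dst:SEQUENTIAL)
void rgb_to_gray(const unsigned char src[1920 * 1080 * 3],
                 unsigned char dst[1920 * 1080])
{
    for (int i = 0; i < 1920 * 1080; i++) {
        /* integer luma approximation: (R + 2G + B) / 4 */
        dst[i] = (unsigned char)((src[3 * i] + 2 * src[3 * i + 1] + src[3 * i + 2]) >> 2);
    }
}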

 

My code is available on GitHub as always.

 


 
