

Adam Taylor’s MicroZed Chronicles Part 138: Linux, Device Trees, and Zynq SoC PL builds

Xilinx Employee


By Adam Taylor


Having completed the “hello world” program on our Zynq-based Snickerdoodle system last week (see “Adam Taylor’s MicroZed Chronicles Part 137: Getting the Snickerdoodle to say “hello world” and wireless transfer”), we’ll now look more deeply into how we can exploit the capabilities of the Zynq-7000 SoC’s PL (programmable logic) side with the Linux OS.


We have looked at Linux previously, including Xilinx’s PetaLinux. However, looking back I see that there are a few areas that I want to cover in more depth. So before showing you how to build a Zynq SoC PL configuration and an appropriate Linux OS for the Snickerdoodle, I am going to quickly review what we need to do for the general case.


We first look at what we need to do to create a Zynq design that exploits the PL's hardware capabilities while running a Linux OS. To run such a system, we need the following:


  • First-stage bootloader (FSBL) – generated by the Xilinx SDK.
  • Bit file – generated by Vivado.
  • Second-stage bootloader – U-Boot, which loads the kernel image and the root file system.
  • Root file system – here we have two options: a ramdisk image or a file system on a separate partition of the boot medium.
  • Kernel image – can be prebuilt or rebuilt from source.
  • Device tree blob – identifies the hardware configuration to the kernel.


The procedures for generating the PL bit file and the first-stage bootloader are the same regardless of which operating system we wish to use. However, the remaining tasks will be new if we have not developed for Linux before.


Rather helpfully, Xilinx provides everything we need prebuilt (kernel image, U-Boot, ramdisk, etc.) with each new version of Vivado. We can obtain prebuilt versions of these files from the Xilinx Linux Wiki for the ZedBoard, ZC702, and ZC706 dev boards. Using these prebuilt kernel, ramdisk, and U-Boot files with an updated device tree that represents the PL design in the bit file is a good way to get our system up and running quickly. To use this approach, we need to check that the prebuilt kernel contains the drivers for our PL design. We can find the list of drivers here.


To make the Linux OS familiar with the hardware, we use the device tree blob, which details the memory map, interrupts, addresses, etc. of the hardware connected to the processor. When we develop a bare-metal application, we need to generate a Board Support Package (BSP) that contains details of the required drivers and address locations. The device tree blob does something similar to the BSP but does it for Linux. We can generate the raw device tree source (DTS) file under either Microsoft Windows or Linux; however, we can only compile the DTS into the device tree blob (DTB) using a Linux installation.


We need to download the device tree generator plug-in for the Xilinx SDK from the Xilinx GitHub repository to generate the DTS file. We can also get all of the other files needed to build the kernel, U-Boot, etc. from the same GitHub repository.


Once we have downloaded the device tree generator plug-in onto our computer, we need to add it as a repository from which the SDK can generate the DTS for our hardware platform specification. In my system, I added this as a global repository. With the plug-in installed, we will see a new device_tree option under OS type when we select File -> New -> Board Support Package. For Linux applications, this is the option we wish to use to generate a device tree file.


The example hardware platform specification for this demo is simple and connects the PS to the ZedBoard’s LEDs via an 8-bit AXI_GPIO module.
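For a design like this, the generated PL include file will contain a node describing the GPIO controller. Below is a hedged sketch of what such a node might look like; the axi_gpio_0 label, the 0x41200000 base address, and the exact property values are assumptions and will follow your own Vivado address map rather than this example:

```shell
# Write a hypothetical PL device tree fragment for an 8-bit AXI_GPIO.
# The label, base address, and property values here are illustrative
# assumptions; your generated pl include file will reflect your design.
cat > pl_fragment.dtsi <<'EOF'
axi_gpio_0: gpio@41200000 {
    compatible = "xlnx,xps-gpio-1.00.a";
    reg = <0x41200000 0x10000>;
    #gpio-cells = <2>;
    gpio-controller;
    xlnx,gpio-width = <0x8>;
    xlnx,all-inputs = <0x0>;
};
EOF
cat pl_fragment.dtsi
```

The reg property carries the address range Vivado assigned, and xlnx,gpio-width matches the 8-bit width of the controller, which is how the kernel driver later knows how many I/O lines to expose.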






This process takes your hardware platform from Vivado (imported as we have previously done for bare-metal builds) and creates the device tree source. As the SDK generates this file, it will ask you to define any boot arguments. For the prebuilt ZedBoard image, these arguments are:


console=ttyPS0,115200 root=/dev/ram rw earlyprintk






This process generates several files:


  • system.dts – contains the boot arguments and the main system definitions.
  • pl.dtsi – included by system.dts; contains the definitions for the PL's memory-mapped devices.
  • zynq-7000.dtsi – included by system.dts; contains the definitions for the wider PS.


We then use the device tree compiler to convert these files into a compiled device tree blob that we can use in our system. We must use a Linux machine to do this.


The first thing to do within our Linux environment is to download the device tree compiler. If you do not already have it, use the command:


sudo apt-get install device-tree-compiler



Once this is installed, we can compile the device tree source using the command:



dtc -I dts -O dtb -o <path>/devicetree.dtb <path>/system.dts



The DTB (device tree blob) is a flattened data file rather than executable code, so it does not require cross compilation for the ARM architecture.



With the device tree compiled, we can then create a boot image (boot.bin) using a first-stage bootloader based on the hardware platform and the prebuilt uboot.elf.
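Under the hood, the SDK's Create Boot Image dialog drives the bootgen tool from a BIF file that lists the partitions in boot order: FSBL first, then the bit file, then U-Boot. A hedged sketch of such a file follows; the fsbl.elf, system_wrapper.bit, and u-boot.elf names are placeholders for your own build outputs:

```shell
# Hypothetical boot.bif: FSBL first, then the PL bit file, then U-Boot.
# File names are placeholders for your own SDK/Vivado build outputs.
cat > boot.bif <<'EOF'
the_ROM_image:
{
    [bootloader] fsbl.elf
    system_wrapper.bit
    u-boot.elf
}
EOF
cat boot.bif
# With the Xilinx tools on the path, the image would then be built with
# something like:
#   bootgen -arch zynq -image boot.bif -w -o BOOT.bin
```

The [bootloader] attribute tells bootgen which partition the Zynq boot ROM should hand control to first; the remaining partitions are loaded by the FSBL in the order listed.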






We can then put the boot.bin, devicetree.dtb, ramdisk, and kernel image files on an SD Card, insert it into the ZedBoard, and the Linux OS should boot successfully. For this example, the PL design has an AXI_GPIO module connected to the LEDs on the ZedBoard. If all is working properly, we will be able to toggle the LEDs on and off.
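Staging the card amounts to copying four files onto its FAT partition. The sketch below uses mktemp as a stand-in for the mounted SD card and touch to create placeholder artifacts so it runs anywhere; on a real system you would copy your actual BOOT.bin, devicetree.dtb, and the prebuilt uImage and uramdisk.image.gz from the Xilinx wiki (the image names are assumptions based on those prebuilts):

```shell
SD=$(mktemp -d)    # stand-in for the SD card's mounted FAT partition
# Placeholder artifacts; in practice these are your real build outputs
touch BOOT.bin devicetree.dtb uImage uramdisk.image.gz
cp BOOT.bin devicetree.dtb uImage uramdisk.image.gz "$SD"/
ls "$SD"           # all four boot files should now be on the card
```

Note that the Zynq boot ROM only searches for BOOT.bin by that exact name; the other file names are whatever your U-Boot environment expects to load.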


There are also other ways to check for successful mapping to the PL hardware. For example, we can connect to the ZedBoard using WinSCP and explore the file system of the Linux OS running on the ZedBoard. To see that our PL device has been correctly picked up, we can navigate to the directory /sys/firmware/devicetree/base/amba_pl/ where we’ll see the GPIO module and the address range that Vivado assigned to it.


If we wish to test the functionality of the GPIO driving the LEDs over SSH, we can log in to the ZedBoard and issue commands to control the LED state. We find these within the /sys/class/gpio/ directory, where there are two exported GPIO chips. The LEDs are connected to the first one, whose I/O numbers range from 898 to 905, corresponding to the eight LEDs. We can work out a GPIO chip's size by looking in its gpiochipXXX directory and examining the ngpio file.
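The mapping can be sketched as below. The base of 898 is an assumption taken from this particular boot; on your own system, read the base and ngpio files under /sys/class/gpio/gpiochip*/ before relying on the numbers. The arithmetic runs anywhere, while the commented sysfs writes must be run on the board itself (as root):

```shell
GPIO_BASE=898   # assumed chip base; confirm via /sys/class/gpio/gpiochip*/base
NGPIO=8         # controller width, as reported by the chip's ngpio file
LAST=$((GPIO_BASE + NGPIO - 1))
echo "LEDs map to gpio${GPIO_BASE} .. gpio${LAST}"

# On the ZedBoard itself, drive the first LED via the legacy sysfs interface:
# echo $GPIO_BASE > /sys/class/gpio/export
# echo out > /sys/class/gpio/gpio$GPIO_BASE/direction
# echo 1   > /sys/class/gpio/gpio$GPIO_BASE/value
```

Exporting a line creates its gpioNNN directory; writing 1 or 0 to value then turns the corresponding LED on or off.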


We can quickly test the GPIO by turning on the LEDs using the commands in the screen shot below:






The code is available on GitHub as always.


If you would like e-book or hardback versions of previous MicroZed Chronicles blogs, you can obtain them below.




  • First Year E-Book here
  • First Year Hardback here




 MicroZed Chronicles hardcopy.jpg



  • Second Year E-Book here
  • Second Year Hardback here




 MicroZed Chronicles Second Year.jpg
