Creating an Ubuntu Xenial 16.04 rootfs for Zybo and Zynq

In one of my previous blog posts we went over how to make a minimal (sort of) root filesystem using BusyBox.  This is great if you don’t need a package manager and want to build all your utilities and frameworks from source yourself.  But if you would rather use a distribution’s package manager to install packages and tools, then an Ubuntu core distribution is a good option.

Ubuntu base is basically a small Ubuntu root filesystem that only includes a command line interface.  It’s a great starting point for any embedded system; even if you need a GUI, X11 can be installed and configured.  Ubuntu base does not include a kernel, so we need to provide our own, which makes it less turnkey than the distributions you’d download and install on a laptop or desktop.

Before we get started, please take a look at this page, which covers much of what I’m about to explain.

Let’s download Ubuntu base 16.04 (Xenial) for ARM from here; we will need the file ubuntu-base-16.04-core-armhf.tar.gz.

Make a directory where we will be creating our root filesystem; this is what I did on my system:

mkdir -p zynq_xenial_rootfs

Now we need to uncompress the base system that we downloaded.  We can uncompress it into the directory we just made.

cd zynq_xenial_rootfs
sudo tar xf ubuntu-base-16.04-core-armhf.tar.gz

In our directory there should now be the skeleton of the root filesystem, with the correct permissions since we uncompressed with sudo.  We still need to configure our serial port to show our terminal output.  We’ll also create a chroot jail to test out our root filesystem and install any utilities we may need.  To do this we will need to install qemu.

sudo apt-get install qemu-user-static

So you may be wondering why we’d want to create a chroot jail using qemu.  I’ve used this method when I don’t have access to ethernet on my target board.  There may be situations where you can’t connect to wifi, or there is no wired network that allows random devices to obtain an IP address.  In these cases we can create our chroot jail and install any packages we need to get moving.

sudo cp $(which qemu-arm-static) zynq_xenial_rootfs/usr/bin/

Next, we are going to bind our host system’s proc directory to our root filesystem.  This simply allows our chroot filesystem to use the host’s proc directory.  There is no harm in this, and we can safely unmount it when we are done.

sudo mount -t proc proc zynq_xenial_rootfs/proc

We also need to set up the resolv.conf file so DNS works inside the chroot; we will copy the one from our host system over.

sudo cp /etc/resolv.conf zynq_xenial_rootfs/etc/resolv.conf

We can now start our emulated chroot jail by executing the following command:

sudo chroot zynq_xenial_rootfs /bin/bash

We should now see a # prompt showing we are logged in as root; we can use the exit command at any time to leave the chroot jail.

There are a couple things we will do using the chroot jail that will help when we first boot into our embedded Linux system.  We will set the root password, create a non-root user and install a couple of packages.

First let’s set the root password

passwd root

Enter the password that you’d like to use for root

Now we can create a non-root user:

adduser ubuntu

Then you’ll be asked to set a password for the new user.  Now that we’ve set up some users, the system is pretty much ready to use.  Since this system is Ubuntu (Debian) based, we can use the package manager to install the utilities we will need.  Let’s try installing a few:

apt-get install python3
apt-get install wireless-tools
apt-get install vim

I’m assuming you are still logged in as root; if not, add sudo in front of these commands.  Install any other packages that your system may need.

One package we will need to install is udev.  For some reason it’s not included in the base image, and its absence will cause a fair amount of headaches when we try to spawn our serial console.  Let’s go ahead and install it; we will see some warnings in the output, but they are a result of running inside a chroot jail and can be ignored.

apt-get -y install udev

In order to log into our system through the UART of the Zybo we need to configure the console login process for ttyPS0, which is UART0 on the ARM processor.  To do this we need to create a file called /etc/init/ttyPS0.conf.

vi /etc/init/ttyPS0.conf

This file will spawn a console on our UART port at start up; the contents of the file should look like:

start on stopped rc or RUNLEVEL=[12345]
stop on runlevel [!12345]

exec /sbin/getty -L 115200 ttyPS0 vt102

Next we need to add ttyPS0 to the UART section of the file /etc/securetty.  We also need to edit the /etc/fstab file so that our root filesystem is mounted at start up.  Our /etc/fstab file should look like:

/dev/mmcblk0p2 /   ext4    relatime,errors=remount-ro  0   1
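Both edits can also be scripted from the host side, outside the chroot; here is a sketch, assuming the rootfs directory from the earlier steps (on the real rootfs these files are root-owned, so prefix the commands with sudo):

```shell
ROOTFS=zynq_xenial_rootfs                    # rootfs directory created earlier
mkdir -p "$ROOTFS/etc"                       # already exists in the real rootfs
echo "ttyPS0" >> "$ROOTFS/etc/securetty"     # allow root logins on the UART console
printf '/dev/mmcblk0p2 / ext4 relatime,errors=remount-ro 0 1\n' \
  > "$ROOTFS/etc/fstab"                      # mount the sdcard's 2nd partition as /
```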

I edited all my files with vi, which is why we installed it in an earlier step.  Since we are done with our chroot environment, we can type exit on the command line and we should be back in our host system.  Remember to unmount the proc directory we bound earlier: sudo umount zynq_xenial_rootfs/proc

All that’s left to do now is to edit a couple of files on the linux and uboot side of things and we are good to go.

First we’ll need to edit the zynq-common.h file in u-boot so that it no longer tries to load the initramfs.  Back in our host environment, switch into the u-boot source directory and open include/configs/zynq-common.h.

We will need to remove the following lines:

"load mmc 0 ${ramdisk_load_address} ${ramdisk_image} && " \
"bootm ${kernel_load_address} ${ramdisk_load_address} ${devicetree_load_address}; " 

Replace them with the following line:

"bootm ${kernel_load_address} - ${devicetree_load_address}; "  

Now u-boot should no longer look for or try to load the ramdisk when it starts up.  We will have to rebuild u-boot and replace it on our sdcard.

On the Linux side we’ll modify the device tree to change where the rootfs is located.  Open zynq-zybo.dts, which is located in the dts directory, find the line that assigns the bootargs, and change it to the following:

bootargs = "console=ttyPS0,115200 root=/dev/mmcblk0p2 rw earlyprintk rootfstype=ext4 rootwait devtmpfs.mount=1";

Once we have those changes done we’ll need to recompile the device tree.  Since we’ve already built the kernel once (hopefully) we can run that command again and the devicetree files will be recompiled.

make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- O=<path_to_output_directory> UIMAGE_LOADADDR=0x8000 uImage modules dtbs

If you haven’t built the kernel yet now would be a good time to look at this tutorial.

Now we can copy our new u-boot binary and devicetree blob to the boot partition of our sdcard, and we should have a fully working Ubuntu 16.04.  Remember to log in using the passwords we set above.  You can now use the Ubuntu package manager to install any tools you need.

Creating a BusyBox Root Filesystem For Zybo (Zynq)

So far we’ve built u-boot from scratch, built the Linux kernel, and built the u-boot SPL so we don’t have to use the Xilinx SDK if we don’t want to.  Our main goal here is to create an embedded Linux system on our Zybo.  Our secondary goal is to add the Xenomai RT patches and create a real time Linux system.  One step that we haven’t gone over yet is creating a root filesystem.

We have a couple of choices when it comes to root filesystems, depending on where our embedded system is going to be deployed.  Some smaller systems will use a RAM disk as their root filesystem.  A ramdisk is a filesystem that is loaded into memory every time the system is started, so it is not persistent: any changes or modifications that are made do not survive a reboot.

There are two ramdisks that are commonly used in Linux systems.  The first is the initial ram disk (commonly called initrd).  This is an older method, but it’s still supported in the Linux kernel.  When the kernel boots it will decompress the ramdisk and use it as the root filesystem.  Some Linux systems (including embedded ones) may use this filesystem to perform some initialization and then pivot to the real root filesystem; you can google “pivot_root” to see exactly how this is accomplished.  Other embedded systems will continue to use the initial ramdisk instead of loading a persistent one, so any filesystem changes we make will be lost on a reboot.  This can be good or bad depending on what we are trying to accomplish.  The initrd requires a synthetic block device of fixed size, which prevents the filesystem from growing without creating a new block device and starting from scratch.  One drawback of using any RAM disk is that the more libraries and utilities we need, the larger the filesystem grows and hence the more RAM it uses.
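The fixed-size block device described above can be sketched with standard tools; the size and file names here are arbitrary examples, and the privileged steps are left as comments:

```shell
# Legacy initrd flow: create a synthetic block device of fixed size
dd if=/dev/zero of=initrd.img bs=1M count=8 2>/dev/null   # 8 MB image; it cannot grow later
# mkfs.ext2 -F initrd.img               # format it (needs e2fsprogs)
# sudo mount -o loop initrd.img /mnt    # loop-mount and populate with files,
# sudo umount /mnt && gzip initrd.img   # then unmount and compress for the bootloader
ls -l initrd.img
```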

The initramfs is the preferred (and more recent) way of creating a ramdisk for a Linux system.  Traditionally the initramfs is built into the kernel itself, which makes it very quick and compact.  We don’t need to create a block device, which makes it much easier to build.  One drawback is that we shouldn’t include anything in the initramfs that can’t fall under the GPL license, because building the initramfs into our kernel makes it part of a GPL work.  One way around that is to use an initramfs filesystem but include it externally using the initrd hooks.

The more common approach on hobbyist boards is to use the sdcard (or part of it) as the root filesystem and have persistent storage.  This method is much easier when adding utilities, libraries and executables to our system.  For learning purposes I’ve chosen to use a ramdisk for my Zybo system.  In a later blog post we will also go over how to use an Ubuntu (or Arch) based root filesystem, which will be much bigger but give us more flexibility and ease when it comes to including third party libraries.

Moving on to creating our RAM based root filesystem.  Xilinx provides an example of building an initrd filesystem on their website here, which seems fairly old, so for our purposes we will use the initramfs method and use the initrd hooks to include it outside of our kernel image.  I’ve taken a lot of information from that page, so it’s still worth a read.

First we will need the BusyBox source.  We can download it here; at the time of writing the latest BusyBox is 1.26.2.  We can uncompress the tar file into its own directory and apply the default configuration.

tar xvjf busybox-1.26.2.tar.bz2
cd busybox-1.26.2
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- defconfig

So now we’ve uncompressed BusyBox and configured it.  The next step is to add any custom configuration that we may need using menuconfig.  If your build environment hasn’t used menuconfig before, make sure you have ncurses installed or we will see errors when running this next command.

make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- menuconfig

Next we need to set where the busybox executable and the symlinks that go along with it will be installed.  Once we are in the menuconfig screen, go to Busybox Settings, Installation Options and specify a location for the busybox installation prefix.  I placed this in a directory called zynq_ram_rootfs; make sure to specify the full path.  Lastly, exit menuconfig and save the changes.  Next let’s build and install the executable.

make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- install

We should see some build output.  Once the build is done we can cd into our install directory and see the symlinks that were made by the build process.  We also need to create the standard directories our root filesystem expects:

mkdir dev 
mkdir etc etc/init.d 
mkdir mnt opt proc root 
mkdir sys tmp var var/log

Next we need to remove linuxrc.  We are doing this because Linux looks for an executable called init when the first process starts up; we will link it to our busybox executable instead.  Remember, for this to work we need to be in the install directory of our root filesystem.

rm ./linuxrc

ln -s ./bin/busybox ./init

If Linux can’t find the init executable it should fall back to an older method of starting the first user process, which includes calling linuxrc, but I prefer to make the init symlink.  Next we need to create some configuration files that will help Linux get set up using our root filesystem.  First we need to create a file named /etc/fstab.

LABEL=/     /           tmpfs   defaults        0 0
none        /dev/pts    devpts  gid=5,mode=620  0 0
none        /proc       proc    defaults        0 0
none        /sys        sysfs   defaults        0 0
none        /tmp        tmpfs   defaults        0 0

This file contains information about all the partitions, block devices and remote file systems.  Here we are mounting each of these directories at startup.

Next we need to create the /etc/inittab file; this file controls what happens whenever the system is booted or when a run level is changed.


::sysinit:/etc/init.d/rcS

# /bin/ash
# Start an askfirst shell on the serial ports
ttyPS0::respawn:-/bin/ash

# What to do when restarting the init process
::restart:/sbin/init

# What to do before rebooting
::shutdown:/bin/umount -a -r

This file is from this Xilinx tutorial and it’s pretty straightforward: we spawn an ash shell (busybox uses the ash shell) on UART0, which is ttyPS0, and then we have actions to perform on shutdown and restart.

We also need to create the /etc/init.d/rcS file; this is the second main boot script.  The rcS file is the run-level script for single user mode, and because our system only has the root user, we are a single user system.


#!/bin/sh

echo "Starting rcS..."

echo "++ Mounting filesystem"
mount -t proc none /proc
mount -t sysfs none /sys
mount -t tmpfs none /tmp

echo "++ Setting up mdev"

echo /sbin/mdev > /proc/sys/kernel/hotplug
mdev -s

mkdir -p /dev/pts
mkdir -p /dev/i2c
mount -t devpts devpts /dev/pts

echo "rcS Complete"

We also need to set this script as executable, or we won’t be able to run it and our Linux system won’t be able to do anything useful.

chmod 755 <path_to_rootfs>/etc/init.d/rcS

One of the last steps in creating our root filesystem is to create a password file for the root user.  We will need to create the etc/passwd file; a minimal entry for root (no password set, shell provided by busybox) looks like:

root::0:0:root:/root:/bin/sh
This file maintains the information about each user that can use the system.  If you want to know the meaning of each field in the above line, check out this page.

We now have a basic root filesystem, but we are missing the shared libraries our programs will need to run.  We could download glibc, build it, and install it into our root filesystem, or we can copy the libraries from our toolchain.  The easiest way to do the latter (IMHO) is to use the sysroot provided by the toolchain vendor; in our case we can download that from the Linaro site here.

I downloaded the file sysroot-glibc-linaro-2.21-2017.05-arm-linux-gnueabihf.tar.xz.  We can uncompress this file and then install it into our rootfs.

Let’s decompress the file in our development directory:

tar xf sysroot-glibc-linaro-2.21-2017.05-arm-linux-gnueabihf.tar.xz

We now see a folder called sysroot-glibc-linaro-2.21-2017.05-arm-linux-gnueabihf which contains all of the libraries we will need for our system.  Just as a warning, this is throwing in the entire kitchen sink: every library the compiler provides and expects to be on a live system is here.  If you aren’t using some libraries you may want to remove them.  For example, if you aren’t using Fortran then consider removing those libraries from your rootfs; if you are just writing C programs, libc, libm and a couple of gcc dependencies may be all you need.

The following step adds about 256MB of files to my rootfs, which is great for prototyping but isn’t good for a live system that wants to use RAM for program data (remember how a RAM disk works).  By doing this next step we may defeat the purpose of using BusyBox to create a minimal rootfs, but there are some simple steps we can take afterwards to size down the libraries we deploy; these may save us around 200MB of space.  Let’s first copy over everything.

cp -rf ./sysroot-glibc-linaro-2.21-2017.05-arm-linux-gnueabihf/* <path_to_busy_box_rootfs>

If we check the size of the directory that contains our rootfs we should see something close to 256MB.  This is very large, and in some cases it may be too large to fit into RAM, so we need to start reducing its size.  The first thing we can do is strip the debug symbols; the following commands will strip the debug symbols from the libraries in our rootfs.

arm-linux-gnueabihf-strip <path_to_rootfs>/lib/*
arm-linux-gnueabihf-strip <path_to_rootfs>/usr/lib/*

Since I’m not using Fortran I removed it from my lib directory, and I also removed the debug directory from lib/.

rm <path_to_rootfs>/lib/libgfortran.*
rm -r <path_to_rootfs>/lib/debug

Now our rootfs should be about 65MB, which is much more manageable.  If you would like to slim it down even further, look through the directories and remove anything else you don’t need.  Once that’s done we can move on to compressing it.

We need to compress our rootfs and get it into the proper cpio format.  Note that we gzip the archive, since the u-boot header we create next declares gzip compression.

cd <path_to_rootfs>
find . | cpio -o --format=newc > <path_to_file>/rootfs.img
gzip <path_to_file>/rootfs.img

Almost done.  The last step is to add the header that u-boot needs to load the rootfs image.  We will also be changing the name, since Xilinx u-boot will be looking for a file called uramdisk.image.gz

mkimage -A arm -T ramdisk -C gzip -d rootfs.img.gz uramdisk.image.gz

You’ll need to make sure that you have the mkimage utility installed; check which package provides it for your distribution.  For example, on Ubuntu you would install:

sudo apt-get install u-boot-tools

One last thing we need to do is update our Linux kernel config with the new size of the ramdisk.  We’ll have to recompile the kernel after this, but it’s pretty straightforward.  Open arch/arm/configs/xilinx_zynq_defconfig and modify the following line:

CONFIG_CMDLINE="console=ttyPS0,115200n8 root=/dev/ram rw initrd=0x00800000,65M earlyprintk mtdparts=physmap-flash.0:512K(nor-fsbl),512K(nor-u-boot),5M(nor-linux),9M(nor-user),1M(nor-scratch),-(nor-rootfs)"

I changed mine from 16M to 65M.  If you are just running C code there’s no reason you’d need all the libs we included, so you could slim down your rootfs further and skip this step.  If you’d like to continue with the larger rootfs and all its features, make the above modification (replacing 16M with the size of your rootfs in megabytes) and then follow my previous tutorial on compiling the Linux kernel.
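To compute the number to put after initrd=, round the compressed image size up to whole megabytes; here is a small sketch (the dd line just creates a 64MB stand-in for the real rootfs image):

```shell
# Round a ramdisk image's size up to whole MB for the initrd=<addr>,<size>M parameter
dd if=/dev/zero of=rootfs.img.gz bs=1M count=64 2>/dev/null   # stand-in for the real image
bytes=$(stat -c %s rootfs.img.gz)
echo "$(( (bytes + 1048575) / 1048576 ))M"                    # 64M for this stand-in file
```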

That’s it, you should now have a BusyBox based root filesystem that we can add libraries and utilities to as needed.

Where to next?  If you’ve followed my blog posts you should now have everything we need to create a custom embedded Linux distribution.  In the next blog post we will put all of our steps together, boot our system, and finally get to building a Xenomai 3 patched kernel for Zynq.

Building Mainline Linux for Zynq

One step that we are going to need to do before we build Xenomai 3.0 for Zynq is to build mainline Linux for Zynq.  Why are we doing this when there’s already a blog post about building from the Xilinx tree?  Let me explain.

When it comes to embedded Linux we have a choice of which Linux tree to use.  For most ARM based boards we can build from mainline or from the vendor’s tree.  The main difference between mainline and a vendor tree is usually support for a specific chip or SoC: the Xilinx tree gains support for Xilinx SoCs before mainline does, and it carries support for things that mainline has deemed too specific or not for general use.  If we look at this link, we can see a list of Linux drivers for Xilinx SoCs and whether they are supported in mainline or only in the Xilinx tree.  If we want to program the PL from Linux we need the devcfg driver, which is not in mainline.  We have options for bringing this functionality to mainline, but that will be saved for another blog post.

Back to the mainline kernel: let’s use the stable kernel tree.  These instructions apply to all the mainline kernel trees, but for our purpose we will use stable.  Let’s go and clone a copy of that tree:

git clone git://

If we use the following command

git tag -l | less

We should be able to see all the different tags in the kernel tree.  What we want to do here is choose a kernel version that will work for our purpose, which is building Xenomai/Linux.  For the Xenomai build we are going to use the 4.1.18 kernel; I’ll explain why we chose this specific version when we build our Xenomai patched kernel, but for now let’s just go with it.  To check it out we execute the following command:

git checkout tags/v4.1.18 -b <name of your branch>

This will create a new local branch for us to use that is based on the 4.1.18 kernel version.
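The same tag-to-branch pattern can be tried in a throwaway repository if you want to see what it does (the repo, tag and branch names below are made up for the demo):

```shell
# Demo: create a repo with a tag, then check the tag out onto a new branch
git init -q demo 2>/dev/null && cd demo
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m init
git tag v4.1.18
git checkout -q tags/v4.1.18 -b zynq_xeno_4.1.18
git describe --tags               # reports the tag the branch is based on
```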

Ok, so we have our source ready, but we still need to do two things before we can begin.  The first is to get a toolchain.  In the previous posts we’ve been using the Xilinx toolchain that came with the SDK, but going forward we’ll switch to the Linaro toolchain; I’ve heard Xilinx has also moved to the Linaro toolchain in newer versions of the SDK.  Let’s grab gcc 5.4.1: following the link we should see the Linaro release page for the 5.4.1 arm-linux-gnueabihf toolchain.  Download the correct version for your development environment.  I used the following file for my setup (Ubuntu MATE):

gcc-linaro-5.4.1-2017.01-x86_64_arm-linux-gnueabihf.tar.xz
Let’s go ahead and unpack that file:

tar xf gcc-linaro-5.4.1-2017.01-x86_64_arm-linux-gnueabihf.tar.xz

That should have created a folder called gcc-linaro-5.4.1-2017.01-x86_64_arm-linux-gnueabihf; let’s move it somewhere every user can access it.

sudo mv ./gcc-linaro-5.4.1-2017.01-x86_64_arm-linux-gnueabihf /opt/

Once that’s in our /opt directory we can add it to our PATH so we can use it easily from the command line.  I usually add this to my bashrc file so I don’t have to type it in every time I open a terminal:

export PATH=/opt/gcc-linaro-5.4.1-2017.01-x86_64_arm-linux-gnueabihf/bin:$PATH

Now we should be able to type arm-linux-gnueabihf-gcc --version, see that the version is 5.4.1, and our new toolchain is ready to use.

In mainline Linux there is no defconfig for Zynq; it’s covered by the generic ARMv7 multi-platform build.  This isn’t really what we want, because we don’t want to build in all the platforms that the generic ARMv7 defconfig covers, so we are going to borrow a file from the Xilinx tree.

I’ve created this github repo, which will contain any support files and eventually binaries that will let people use Xenomai 3.0 on Zynq without all the fun we had here; for now it just holds the Zynq defconfig.

This file is just the defconfig from the Xilinx tree; we are going to use it to build our mainline kernel.  Let’s copy it over to our Linux tree:

cp ./xeno_zynq/xilinx_zynq_defconfig <path_to_kernel_tree>/arch/arm/configs/xilinx_zynq_defconfig

Now that we have all of our files in place and our toolchain ready, let’s build the kernel.  I use an output directory for kernel builds; I find it keeps things organised, with the object files in one place.  If you are going to use the same source tree to build for multiple platforms then an output directory is very helpful.  Change into the Linux source directory and execute the following command:

make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- O=<path_to_output_directory> xilinx_zynq_defconfig

Some things to note here: the ARCH and CROSS_COMPILE flags tell the build system that we are targeting an ARM chip and what prefix to use when calling the cross compiler, and the O flag tells make where to put all the output files.  We should see output like the following:

ggallagher@ggallagher-virtual-machine ~/devel/emb_linux/linux-stable (zynq_xeno_4.1.18) $ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- O=/home/ggallagher/devel/emb_linux/linux_xeno_zynq_build/ xilinx_zynq_defconfig
make[1]: Entering directory '/home/ggallagher/devel/emb_linux/linux_xeno_zynq_build'
 GEN ./Makefile
# configuration written to .config
make[1]: Leaving directory '/home/ggallagher/devel/emb_linux/linux_xeno_zynq_build'

The above came from my machine, so yours won’t look exactly the same, but you should see something similar.  For Zynq we are going to use the exact same command to build the kernel that we used when building from the Xilinx tree.

You can do any customisation you need by executing:

make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- O=<path_to_output_directory> menuconfig

This should bring you to menuconfig, which allows you to make any customisations you would like using an interactive window.  It’s pretty straightforward; just make sure you have ncurses installed or it won’t load.

Once you are finished with any customisation, we are ready to actually build the kernel.

make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- O=/home/ggallagher/devel/emb_linux/linux_xeno_zynq_build/ UIMAGE_LOADADDR=0x8000 uImage modules dtbs

This build will take a while to complete, but once it’s done our uImage should be located under arch/arm/boot/ in either the kernel source tree or our output directory, depending on how you invoked the build.  If you built it like I outlined then you’ll find it at <output directory>/arch/arm/boot/uImage.  We can copy that file to our sdcard and we should see our Zybo boot up.

There you have it, that’s how to build mainline Linux for Zynq.  Next it’s on to patching our kernel with Xenomai 3, and we should have the start of our realtime Linux distribution.

U-Boot Secondary Program Loader On Zybo

Hi Everyone,

Today I’ll take you through how to create the U-Boot SPL for our Zybo board.  The SPL is able to replace the FSBL, but currently may not support secure boot or encrypted bitstreams.  The SPL isn’t supported by Xilinx, so as mentioned it could be missing some features that the FSBL supports.  Let’s get started!

If you haven’t done so before, let’s clone the U-Boot repo.  For this example we need the Xilinx repo; to the best of my knowledge mainline u-boot is missing a python script that creates boot.bin (more on boot.bin later).

git clone

We should have the SDK arm cross compiler installed from our previous steps; if you don’t, you’ll have to install the Xilinx SDK.  The good news is there is a command line version of the tools that is slightly smaller.  If you’re not sure whether the compiler is installed, we can check.  If you are using Linux the tools should be located here:

/$(PATH TO SDK TOOLS)/Xilinx/SDK/2015.3/gnu/arm/lin/bin/

If we can see the executable arm-xilinx-linux-gnueabi-gcc then the tools are installed; the above line will differ depending on the location and the version of the SDK that you have installed.
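A quick shell check along these lines (a sketch; adjust the prefix to match your SDK install):

```shell
# Report whether the SDK cross compiler is reachable on the PATH
if command -v arm-xilinx-linux-gnueabi-gcc >/dev/null 2>&1; then
  arm-xilinx-linux-gnueabi-gcc --version | head -n 1
else
  echo "cross compiler not found on PATH"
fi
```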

To create the secondary program loader we need to copy two very important files into the u-boot source tree: our ps7_init files.  These files initialize our processor and are needed so that boot.bin works properly.  If we forget to copy them, or copy them to the wrong location, the build will still succeed and boot.bin will be generated, but it won’t work; this can be frustrating and hard to debug.  I’m going to assume that you have the ps7_init.c and ps7_init.h files, and that you either got generic ones from the Xilinx git repository or generated them when you exported your hardware design.

We’ll need to copy them to u-boot-xlnx/board/xilinx/zynq/custom_hw_platform.  If we were using another board like a ZedBoard, MicroZed, ZC706 or ZC702 we would need to copy them to another location; look under board/xilinx/zynq in the u-boot source tree for more info.  When we copy the files over we need to rename them to ps7_init_gpl.c and ps7_init_gpl.h, because that is what u-boot expects the files to be named.  I’m not sure if these files need to be under the GPL licence to be properly included in the build if you were going to use this in a commercial product.  I’ll keep researching and hopefully find an answer, and post back here when I do.

Let’s go ahead and copy those files, assuming the ps7_init files are in your current directory:

cp ./ps7_init.c $(PATH_TO_UBOOT_SRC)/u-boot-xlnx/board/xilinx/zynq/custom_hw_platform/ps7_init_gpl.c

cp ./ps7_init.h $(PATH_TO_UBOOT_SRC)/u-boot-xlnx/board/xilinx/zynq/custom_hw_platform/ps7_init_gpl.h

Make sure we update our new ps7_init_gpl.c to include ps7_init_gpl.h not ps7_init.h which it will be including by default if we are using ones that were generated with Vivado.
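The include swap can be done with sed; here is a self-contained sketch, where the printf line just stands in for the real Vivado-generated file:

```shell
# Rename the header reference inside the renamed ps7_init file
f=ps7_init_gpl.c
printf '#include "ps7_init.h"\n' > "$f"            # stand-in for the real file contents
sed -i 's/"ps7_init\.h"/"ps7_init_gpl.h"/' "$f"    # point it at the renamed header
grep ps7_init_gpl "$f"                             # shows the corrected include line
```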

Everything is pretty much ready to build.  One added bonus is that the Zybo is now supported by u-boot and is included in the configs directory.  If we look at $(PATH_TO_UBOOT_SRC)/u-boot-xlnx/configs we can see all the supported boards; we should see zynq_zybo_defconfig, and if you don’t, do a git pull to make sure you have the latest source code.

make CROSS_COMPILE=arm-xilinx-linux-gnueabi- zynq_zybo_defconfig

This will configure our build properly; now we’re ready to run the build itself:

make CROSS_COMPILE=arm-xilinx-linux-gnueabi-

After this command our build will start and we should see all the files getting compiled and linked.  Once our build is finished we should see the following output:

MKIMAGE u-boot.img
./tools/ -o boot.bin -u spl/u-boot-spl.bin
Input file is: spl/u-boot-spl.bin
Output file is: boot.bin
Using /home/greg/src/emb_linux/u-boot-xlnx/spl/u-boot-spl.bin to get image length – it is 47632 (0xba10) bytes
After checksum waddr= 0x13 byte addr= 0x4c
Number of registers to initialize 0
Generating binary output /home/greg/src/emb_linux/u-boot-xlnx/boot.bin

We see our u-boot image file (u-boot.img) being created, and also the SPL.  The interesting part of this output is the python script: it takes u-boot-spl.bin as input and creates the boot image, in our case boot.bin, which is the file the Zybo needs to boot.  The last time I built this from mainline, the python script wasn’t called automatically after the build; I’m not sure if we need to run it manually in the mainline build, but in the Xilinx version of u-boot it’s done automatically for us, which makes life easier.

Now that the build is done we need to copy boot.bin and u-boot.img to the FAT32 partition of our sdcard.  We can then put the sdcard back into the Zybo, boot it, and we should see u-boot come up on the console.

Please leave any questions or comments and I’ll answer them as soon as I can.


Building stock (Xilinx) Linux For Zynq

Hi Everyone,

Sorry for the delay in posting this, but here is step 4a: building the stock Linux kernel.  We’ll do the stock Linux kernel first and then move on to the root filesystem; after that we’ll come back and build the Xenomai variant for all those who want a real time embedded system.  I’ll add some more pictures once I’ve got some more time, hopefully soon.

Let’s get the Linux kernel source from the Xilinx repo.  Clone the repo:

git clone

This may take a while if you have a slow connection, but once it’s synced you will have the Linux kernel source.  If we look at <path to kernel source>/arch/arm/configs, we should see all the different configurations we can build.  We are interested in the Zynq ones; we’ll use xilinx_zynq_defconfig.  Make sure we have exported our CROSS_COMPILE environment variable:

export CROSS_COMPILE=arm-xilinx-linux-gnueabi-

Just like when we built u-boot, make sure the cross compiler is in our PATH.  Once we’ve done that we can start to compile the kernel; let’s first make sure we have a clean build environment:

make mrproper

Let’s configure the kernel to build for Zynq:

make ARCH=arm xilinx_zynq_defconfig

If we want to build in any custom options we can now run:

make ARCH=arm menuconfig

This will run the menuconfig utility and allow us to customize the kernel components we want to build.  If you do run it, make sure you’ve saved your changes, and once you’re done run:

make ARCH=arm UIMAGE_LOADADDR=0x8000 uImage

The build should take about 5-10 minutes, and once it’s complete we should be able to find the uImage in linux-xlnx/arch/arm/boot/.
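Putting the steps above together, the whole build is roughly the following (a sketch, assuming the Xilinx toolchain is already on your PATH and you are inside the linux-xlnx source tree):

```shell
export ARCH=arm
export CROSS_COMPILE=arm-xilinx-linux-gnueabi-
# Only run the build steps when we are actually inside a kernel tree.
if [ -f Makefile ] && [ -d arch/arm/configs ]; then
    make mrproper                              # clean build environment
    make xilinx_zynq_defconfig                 # configure for Zynq
    make -j"$(nproc)" UIMAGE_LOADADDR=0x8000 uImage
    ls arch/arm/boot/uImage                    # the result lands here
fi
```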

The build may fail if you don’t have mkimage installed; on Debian/Ubuntu it’s in the u-boot-tools package, and on Red Hat/Fedora it’s uboot-tools.

So we still have two more steps to complete before we can put this on the SD card and boot the Zybo: we need to build a root filesystem and compile the device tree binary blob.  The Linux device tree is extremely interesting; it’s how the kernel knows what hardware is present in the system.  It’s worth the time to read up on the Linux device tree, especially if you may be building your own custom board.
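The device tree compile itself is short.  As a sketch (the dts file name is an assumption — check arch/arm/boot/dts in your kernel version for the exact Zybo file):

```shell
DTB=${DTB:-zynq-zybo.dtb}   # assumed dts/dtb name -- verify in your tree
if [ -f Makefile ] && [ -d arch/arm/boot/dts ]; then
    # Let the kernel build system compile the enabled .dts files; kernel
    # dts files use C preprocessor includes, so this route is the
    # reliable one (rather than invoking dtc by hand).
    make ARCH=arm dtbs
    # Copy the resulting blob alongside uImage for the SD card later.
    cp "arch/arm/boot/dts/$DTB" devicetree.dtb
fi
```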

If you’d rather build from mainline then check out my post here on how to do that.


Helpful Links:

Building U-boot and boot image

Hi Again,

So in the previous steps we’ve built the bitstream and the first stage bootloader; now all we need is to build u-boot and we’ll have something to run on our Zybo.  If you’ve been working on a Windows machine you’ll need to switch to a Linux machine for these next steps.  It does look like you can build u-boot on Windows using the Xilinx SDK software, but I used Linux; maybe building u-boot with Windows will be another blog post.  Once you have a system (or virtual machine) running Linux, we’ll need to install a couple of items before we can start fetching and building u-boot.  First, if you have a 64-bit system you’ll have to install the 32-bit libraries for your Linux distro before we can use the CodeSourcery toolchain.  This link will show you what commands to use based on the flavour of Linux you’ve chosen.  Next make sure you have git installed; we’ll need it to download the sources for u-boot and later Linux and Xenomai.  To install git on Ubuntu, go to the terminal and type:

bash> sudo apt-get install git

On a Red Hat based distro:

bash> sudo yum install git

Once we’ve got those tools installed it’s time to get our toolchain from Xilinx.  Follow the steps on the Xilinx wiki to download and install the command line tools.  We’ll be building u-boot, Linux and Xenomai all from the command line.  After completing the toolchain install and adding the Xilinx tools to our PATH variable we should be able to type

bash> arm-xilinx-linux-gnueabi-gcc

on the command line and get an error saying no input files specified.

  Screenshot from 2014-03-25 10:02:36

If we see this error then everything is set up and we are ready to download u-boot; if not, you’ve probably forgotten to add the toolchain location to your PATH variable.  Take a look back at the Xilinx wiki to make sure you’ve exported both the CROSS_COMPILE variable and your modified PATH variable.
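For reference, the exports look something like this (the install path is an assumption from a default Xilinx SDK install — use wherever you actually put the toolchain):

```shell
# Prefix the kernel/u-boot Makefiles will use for every tool invocation
export CROSS_COMPILE=arm-xilinx-linux-gnueabi-
# Assumed install location -- adjust to your Xilinx SDK path
export PATH=/opt/Xilinx/SDK/gnu/arm/lin/bin:$PATH
# Sanity check: a bare invocation should complain about "no input files"
if command -v "${CROSS_COMPILE}gcc" >/dev/null 2>&1; then
    "${CROSS_COMPILE}gcc"
fi
```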

We are now ready to configure u-boot from source for the Zybo.  The Xilinx wiki on u-boot is a great resource to get us started.  It’s geared towards the Zedboard, which is okay because the Zybo is very similar; I only had to change one piece of source code to get it to work.  I’ve been able to configure u-boot from both the Zedboard config and the generic one; I’ll go over modifying the Zedboard config.  Let’s fetch the source:

bash> git clone git://

Let’s take a look at the config files.  In include/configs we should see a file called zynq_zed.h; this file is how u-boot knows how to configure the system.  Let’s edit this file for the Zybo; all we have to do is add the following line:

#define CONFIG_ZYNQ_PS_CLK_FREQ 50000000UL

We need to add this because the PS reference clock on the Zybo is 50 MHz, while on the Zedboard it’s 33.33 MHz.  I got u-boot to work with and without this change, but I’ll add it here since we need to keep this clock in mind when we try to get the Linux debug console to work.

Once we’ve saved that file, let’s configure u-boot to build for our target by entering:

‘make zynq_zed_config’

Screenshot from 2014-03-25 10:45:16

Type ‘make’ and we should be able to watch the build output and hopefully see no errors.

Screenshot from 2014-03-25 10:48:23

So u-boot is now built.  Now we need to gather the bitstream from step 1, the first stage bootloader ELF file from step 2, and the u-boot executable into one location that the SDK can see.  If you are doing this on two machines like I am, make sure there is a shared folder where we can put the files.

If we do an ‘ls’ in the u-boot directory we should see a file with no extension named ‘u-boot’.  Copy this file to a separate location and rename it u-boot.elf.
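In commands, that’s simply the following (SHARED here is a hypothetical stand-in for whatever folder both machines can see):

```shell
SHARED=${SHARED:-$HOME/zybo_boot}   # hypothetical shared folder
mkdir -p "$SHARED"
# The u-boot build drops a bare ELF named "u-boot" with no extension;
# the SDK's boot image tool expects a .elf suffix, so rename on copy.
if [ -f u-boot ]; then
    cp u-boot "$SHARED/u-boot.elf"
fi
```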

Open the Xilinx SDK, and under the Xilinx tools drop down menu, select ‘Create Zynq Boot Image’

Screenshot from 2014-03-25 10:53:37

I called the .bif file zybo.  Next add the first stage bootloader, with the partition type bootloader; order is important, so make sure to add this file first.  Next add the bitstream file with partition type datafile, and then u-boot.elf, again as datafile.  Make sure you’ve specified an output directory and then click Create Image.  We should now see u-boot.bin in that directory.  Rename that file BOOT.bin; this is the file that the processor will look for when power is applied.
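Under the hood the SDK writes a .bif file and runs bootgen over it.  The generated file looks roughly like this (the file names here are illustrative — yours will match whatever you added in the dialog):

```
the_ROM_image:
{
    [bootloader]zybo_fsbl.elf
    system_wrapper.bit
    u-boot.elf
}
```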

We have the boot image that will boot the board; all we need now is to create an SD card to hold the files.  Grab a 4GB (or larger) SD card; hopefully your system has an SD card reader or you have a USB one.  Pop the SD card in and make sure your OS can see it.  We’ll use a Linux utility called GParted to create the partitions on the SD card.  If you have a Linux distro that allows you to download programs from a software repo, use that to find GParted; if not, follow this link for instructions on how to install it.

Once GParted is installed, run it and we should be able to see the hard drives on the system.  Use the drop down in the top right to find the SD card.  Once we’ve found it we can go ahead and erase all the current partitions.  WARNING!!! This will erase all the contents of the SD card, so if you have something you want to keep, copy it somewhere safe before this step! WARNING!!!

Unmount the partitions if needed, highlight the current partitions, delete them (right click), and we should see all the space on the SD card as unallocated.  Click the check mark to apply those changes.  Right click the unallocated space and select New; the size of this partition can be small since it only holds the boot files, but since we could also keep a RAM disk here let’s make it at least 512MB.  The file system HAS TO BE FAT32.  Give it a label so we can identify it later, and click Add to finish.  Then right click the remaining unallocated space and select New; the size should be the rest of the SD card, and the file system type ext4.  We can use this as the rootfs when Linux starts up.  Click the green check mark again and the operations should be applied.  The SD card is now ready and we can eject it safely from the OS.
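If you prefer the command line over GParted, the same partitioning can be sketched with parted and mkfs (DESTRUCTIVE — /dev/sdX is a placeholder, find your card’s real device with lsblk before running anything):

```shell
SD=${SD:-/dev/sdX}                  # placeholder device -- verify with lsblk!
if [ -b "$SD" ]; then
    # One FAT32 boot partition (~512MB), rest of the card as ext4 rootfs
    sudo parted -s "$SD" mklabel msdos \
        mkpart primary fat32 1MiB 513MiB \
        mkpart primary ext4 513MiB 100%
    sudo mkfs.vfat -F 32 -n BOOT "${SD}1"   # FAT32 boot partition
    sudo mkfs.ext4 -L rootfs "${SD}2"       # ext4 root filesystem
fi
```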

Insert the SD card again so the host sees it, then copy the BOOT.bin file to the FAT32 partition.  Safely eject the SD card; we are almost done.

On the Zybo make sure the boot jumper is set for SD card boot.  Insert the SD card, connect the Zybo to the host machine using a USB cable, and apply power.  You should see the green and red LEDs light up and then some yellow LED activity to show u-boot is sending data over the UART.

Open a terminal program like minicom on Linux or Tera Term on Windows and configure it according to our UART settings:
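The Zynq UART defaults to 115200 baud, 8 data bits, no parity, 1 stop bit (8N1).  On Linux that’s something like the following (the /dev/ttyUSB0 name is an assumption — check dmesg after plugging in the board):

```shell
PORT=${PORT:-/dev/ttyUSB0}          # assumed device node -- see dmesg
if [ -c "$PORT" ]; then
    # -D selects the device, -b the baud rate; 8N1 is minicom's default
    minicom -D "$PORT" -b 115200
fi
```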

Screenshot from 2014-03-25 11:25:38

We should see something similar to this for the u-boot output:

Screenshot from 2014-03-25 11:26:02

I will post some short videos and more pictures shortly of my Zybo booting into u-boot.  The next step is to build the Linux kernel with the Xenomai patches and compile our device tree; I should have this up in the next couple of days.  There were numerous manual merges I had to make when applying the Xenomai patches for some reason, so I may split it into two steps.

As always, leave questions or comments here and I will do my best to answer them!

Xilinx SDK and Create the First Stage Bootloader for Zybo

Hey All!!

We’ve got our design ready and built in Vivado.  The next step is to export this hardware design to the SDK and create a first stage bootloader that we will combine with the .bit file and u-boot to get the board to boot into the u-boot shell.

Verify that you were able to create the bit file without problems: generate it again and look at the output of Vivado.  Once it has been successfully generated, let’s export the design to the SDK.

Screenshot from 2014-03-17 09:42:35

Right click in your design sources and select ‘Export Hardware for SDK’, then select a folder and a workspace to export the design to.  Make sure to check the box that asks if you would like to launch the SDK.  Also make sure the include-bitstream checkbox is selected; in my picture it’s not, but it should be by default if you generate the bitstream before you export your hardware.

Screenshot from 2014-03-17 09:53:03

Make sure you have the block diagram open or there will be an error when the SDK launches.  When this error occurs we see the following in the console output: ‘”export_hardware” works only for active block diagrams’, so make sure your block diagram is active before exporting to the SDK.

Screenshot from 2014-03-19 09:25:07

Once the export is complete we should see the SDK open with an XML file that describes our system.

Screenshot from 2014-03-19 09:26:26

Next let’s create the first stage bootloader we will need to boot our board.  Select File -> New -> Application Project and enter a project name for our bootloader; I used zybo_fsbl.  The hardware platform should be filled in with the information that was exported from Vivado, and make sure that under Board Support Package we have ‘Create New’ selected.  Click Next and we should see some templates to choose from.  Select Zynq FSBL as the template project and hit ‘Finish’.

Screenshot from 2014-03-19 09:42:07

Once we click Finish, our first stage bootloader project and BSP should compile and be ready to go.  From here we could do some bare metal examples of a simple C program running on the Zybo; I will probably come back to this later, but for now let’s get u-boot compiled and ready to go.

Step 3 – building u-boot and creating a boot image – is next and should be ready to post in a couple of days.  Any questions, please leave a comment or email me.  If you have questions about building Linux or booting the board feel free to ask and I’ll answer them as soon as I can.  Those steps I’m hoping will be up next week.

Helpful Links:

Hardware design for Zybo

Let’s start by getting Vivado installed.  I used Xilinx Vivado to do my hardware design; it was easy to use and has all the programs we need built in.  My development environment was a MacBook Pro running one VM for Windows and one for Linux.  I could have done the whole thing in Linux, but at the time of bringing Xenomai up I didn’t have ISE or Vivado installed in my Linux environment.  I like CentOS for my Linux distro; it has a very nice and clean interface and I find it very stable.  It runs great in VMware Fusion.  I will leave the Linux install to the reader, but message me if you need help.

If you go to the Xilinx download page here you can choose the installer package you need; I chose the installers for both Linux and Windows.  The download is so big you might as well grab both in case you need to change development environments later.  As mentioned, this is a huge download, so either do it overnight or grab a couple of cups of coffee while you wait.  Once you’ve got Vivado, install it and select the defaults.  The Zybo was recognized by both Linux and Windows with no problems.  Once Vivado is installed, let’s go get the files we need from Digilent.  Download the ‘ZYBO Board Definition File for configuring the Zynq Processing System core in Xilinx Platform Studio and Vivado IP Integrator’ and the ‘ZYBO Master XDC File for Vivado designs’; these two files will be needed in Vivado to create the initial hardware design.  Next let’s start up Vivado and create a new project.

Screenshot from 2014-03-13 11:09:54

Name your project and select a place to put it.  Once that is done, select RTL Project and click Next.

Screenshot from 2014-03-13 11:10:51

Click Next for the next two dialogues; when asked to add a constraints file, stop and add the one that we downloaded from Digilent.

Screenshot from 2014-03-13 11:11:26

Screenshot from 2014-03-13 11:15:13

Screenshot from 2014-03-13 11:18:54

This should tell Vivado about the hardware we are going to use.  Next we need to tell Vivado what chip we are using, if we look back at the Digilent website for the Zybo we can make a note of the following information:

The ZYBO offers the following on-board ports and peripherals:

  • ZYNQ XC7Z010-1CLG400C
  • 512MB x32 DDR3 w/ 1050Mbps bandwidth
  • Dual-role (Source/Sink) HDMI port
  • 16-bits per pixel VGA output port
  • Trimode (1Gbit/100Mbit/10Mbit) Ethernet PHY
  • MicroSD slot (supports Linux file system)
  • OTG USB 2.0 PHY (supports host and device)

The top line is what we want and will help us identify the chip.  From the drop down menus select Zynq-7000 for Family, Zynq-7000 for Sub-Family, clg400 for Package, -1 for Speed Grade and C for Temp Grade.  You will have two choices left, xc7z010clg400-1 and xc7z020clg400-1; choose the first one, since the Zybo’s chip is the Xilinx Zynq-7000 (Z-7010) as mentioned on the Digilent website.  You’ll also want to grab the hardware guide for the Zybo; it will help in future posts if you are following along.

Screenshot from 2014-03-13 11:20:33

We are ready to confirm and create the project.

Screenshot from 2014-03-13 11:23:34

So we should now have Vivado open with a new project like the picture below.

Screenshot from 2014-03-13 11:44:44

Now we are ready to create the block diagram and add some IP.

In the left side of the screen click on Create Block Design.  I named my block design ‘system’, but I don’t think the name really matters.

Screenshot from 2014-03-13 11:47:29

Now that we have a new block design, we can go ahead and add some IP to it.  Click Add IP on the green highlight that appeared in the diagram window. Scroll down and select Zynq7 Processing System.

Screenshot from 2014-03-13 11:48:47

Press Enter and you should now see a Zynq processor on your block design.

Screenshot from 2014-03-13 11:49:14

So far so good, let’s double-click the Zynq block and customize our IP to the Zybo.

Screenshot from 2014-03-13 11:49:43

Now let’s import the XPS settings that we downloaded from the Digilent site, which describe our hardware.  Click the Import XPS Settings button.

Screenshot from 2014-03-13 11:50:06

Select the .xml file that we downloaded from Digilent, click OK.  Now click OK in the import XPS settings window.

Screenshot from 2014-03-13 11:50:21

So we now see some check marks beside some peripherals.  Let’s take a second and look at the clock configuration.  Click the clock configuration in the left side of the window, you should see something like the picture below.

Screenshot from 2014-03-13 11:51:16

Make a note of the input frequency.  This is DIFFERENT from both the Zedboard and the MicroZed and can cause some really frustrating problems when trying to add the correct values to the device tree when we boot Linux.  I’ll explain what I ran into when we go over how to get Xenomai/Linux to boot.  Click OK, and the customization screen should close; our processor should now have some inputs and outputs.

Screenshot from 2014-03-13 11:51:59

Connect FCLK_CLK0 to M_AXI_GP0_ACLK: once we hover over the port a pencil appears, and then we can draw the connection, similar to LabVIEW if anyone has used that before.

Screenshot from 2014-03-13 11:54:10

This pretty much just feeds a clock to the FPGA and is the most basic design we can do.  I’m not an FPGA expert and plan to use the Zybo to further my learning when it comes to FPGA design.  I believe this brings the FPGA up and nothing else, so nothing on the FPGA is actually being used.  Let’s validate our design before we start to create the HDL wrappers and bit file.  Run the Block Automation as suggested by the green highlight.

Screenshot from 2014-03-13 11:58:23

Click the sources tab on the block design and right-click the file and select Create HDL Wrapper.

Screenshot from 2014-03-13 12:04:55

Once that is complete we should see some Verilog or VHDL files.  Now we can go ahead and generate the bitstream file; the option is on the left side of the screen near the bottom.

Screenshot from 2014-03-13 12:09:49

Once we are done, we can open the implemented design.

Screenshot from 2014-03-13 12:12:37

We are pretty much done!  The next step is to export our design to the Xilinx SDK to create the first stage bootloader.  This will be the subject of my next post.  Remember to save your project since we’ll need it in my next post.

If anyone runs into problems let me know; I may have a step or two out of order, but I was able to create the bit file again following these steps.  Questions and comments are always welcome.

Helpful links:


Getting Started with Zybo

Over the next couple of weeks I will post the steps needed to get your Zybo ready to run Xenomai (or regular Linux):

  • Create a design in Vivado for Zybo
  • Create the First Stage Bootloader
  • Build u-boot
  • Create the boot image
  • Checkout and patch Xilinx Linux with Xenomai  (this stage got tricky)
  • Compile the Linux Kernel with Xenomai support
  • Compile the Xenomai user space support
  • Create a rootfs
  • Prepare the SDCard
  • Run the Xeno latency test

I probably forgot a step, so that list might change.  It sounds like a lot, but it’s pretty straightforward once you uncover some of the tricks.  I found that the hardware guide for the Zybo is your friend when trying to figure out why things for the Zedboard won’t work right out of the box.  I will start the first post in a few minutes, hopefully have it up in a couple of hours, and I’m aiming to have all the steps done very soon.

Thanks to all who read this, leave comments or questions




Xenomai on Zybo

Hi All,

This is my first blog, and first blog post for that matter.  This blog will be dedicated to embedded systems and maybe some other fun software topics.  For now I’m putting the finishing touches on my first real post, which will be an introduction on how to get a Zybo Zynq board from Digilent up and running from scratch, first with Linux and then with Xenomai Linux.  I should have part one ready in two weeks.