One step we need to take before we can build Xenomai 3.0 for Zynq is to build mainline Linux for Zynq. Why are we doing this when a previous post walked through using the Xilinx tree? Let’s start there.
When it comes to embedded Linux we have some choice over which Linux tree to use. For most ARM-based boards we can build from mainline or from the vendor’s tree. The main difference between mainline and a vendor tree is usually support for a specific chip or SoC. In our case the Xilinx tree will contain support for Xilinx SoCs before mainline does, and it will carry support for things that mainline has deemed too specific or not for general use. If we look at this link, we can see a list of Linux drivers for Xilinx SoCs and whether they are supported in mainline or only in the Xilinx tree. If we want to program the PL from Linux we need the devcfg driver, which is not in mainline. We have options for bringing that functionality to mainline, but that will be saved for another blog post.
Back to the mainline kernel: let’s use the stable kernel tree. These instructions apply to all the mainline kernel trees, but for our purposes we will use the stable tree. Let’s go and clone a copy of that tree:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
If we use the following command
git tag -l | less
We should be able to see all the different tags in the kernel tree. What we want to do here is choose a kernel version that will work for our purpose, which is building Xenomai/Linux. For the Xenomai build we are going to use the 4.1.18 kernel. I’ll explain why we chose this specific version when we build our Xenomai-patched kernel, but for now let’s just go with it. To do this we need to execute the following command:
git checkout tags/v4.1.18 -b <name of your branch>
This will create a new local branch for us to use that is based on the 4.1.18 kernel version.
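To see what that checkout does, here’s a minimal sketch on a throwaway repository (the temp repo, commit, and branch name are purely illustrative, not part of the kernel tree): checking out a tag with -b leaves HEAD on a new local branch pinned to that tag.

```shell
# Demonstrate tag checkout on a disposable repo.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
git tag v4.1.18
# Same form as the command above: new branch based on the tag.
git checkout -q tags/v4.1.18 -b zynq_xeno_4.1.18
git rev-parse --abbrev-ref HEAD   # prints: zynq_xeno_4.1.18
```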
OK, so we have our source ready, but we still need to do two things before we can begin. The first is to get a toolchain. In the previous posts we’ve been using the Xilinx toolchain that came with the SDK, but going forward we’ll switch to the Linaro toolchain. (I did hear that Xilinx has switched to the Linaro toolchain for newer versions of the SDK.) Let’s go ahead and grab GCC 5.4.1. If you follow the link you should see the Linaro release page for the 5.4.1 arm-linux-gnueabihf toolchain. Download the correct version for your development environment. I used the following file for my setup (Ubuntu MATE):
Let’s go ahead and unpack that file:
tar xf gcc-linaro-5.4.1-2017.01-x86_64_arm-linux-gnueabihf.tar.xz
That should have created a folder called gcc-linaro-5.4.1-2017.01-x86_64_arm-linux-gnueabihf; let’s move it somewhere every user can access it.
sudo mv ./gcc-linaro-5.4.1-2017.01-x86_64_arm-linux-gnueabihf /opt/
Once that’s in our /opt directory we can add it to our PATH so we can use it easily from the command line. I usually add this to my .bashrc file so I don’t have to type it in every time I open a terminal.
export PATH=/opt/gcc-linaro-5.4.1-2017.01-x86_64_arm-linux-gnueabihf/bin:$PATH
So now we should be able to type arm-linux-gnueabihf-gcc --version and see that the version is 5.4.1 and our new toolchain is ready to use.
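As a quick sketch, the PATH change itself can be checked without even invoking the compiler; the path below assumes the /opt install location used above:

```shell
# Prepend the toolchain directory to PATH for the current shell.
TOOLCHAIN=/opt/gcc-linaro-5.4.1-2017.01-x86_64_arm-linux-gnueabihf/bin
PATH="$TOOLCHAIN:$PATH"
export PATH
# The toolchain directory should now be the first PATH entry:
printf '%s\n' "$PATH" | cut -d: -f1
```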
In mainline Linux there is no defconfig for Zynq; it’s actually part of the generic ARMv7 build. This really isn’t what we want, because we don’t want to build in all the platforms that the generic ARMv7 defconfig covers. So we are going to borrow a file from the Xilinx tree.
I’ve created this GitHub repo that will contain any support files, and eventually binaries, that will allow people to use Xenomai 3.0 on Zynq without all the fun we had here, but for now it’s just the Zynq defconfig.
This file is just the defconfig from the Xilinx tree; we are going to use it to build our mainline kernel. Let’s copy it over to our Linux tree.
cp ./xeno_zynq/xilinx_zynq_defconfig <path_to_kernel_tree>/arch/arm/configs/xilinx_zynq_defconfig
Now that we have all of our files in place and our toolchain ready to compile, let’s go ahead and build our kernel. I use an output directory for kernel builds; I find it helps keep things organised, with all the object files in one place. If you are going to use the same source tree to build for multiple platforms, an output directory is very helpful. Change into the Linux source directory and execute the following command.
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- O=<path_to_output_directory> xilinx_zynq_defconfig
Some things to note here: the ARCH and CROSS_COMPILE flags tell the build system that we are targeting an ARM chip and what prefix to use when calling the cross compiler. The O flag tells make where to put all the output files. We should see the following output:
ggallagher@ggallagher-virtual-machine ~/devel/emb_linux/linux-stable (zynq_xeno_4.1.18) $ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- O=/home/ggallagher/devel/emb_linux/linux_xeno_zynq_build/ xilinx_zynq_defconfig
make: Entering directory '/home/ggallagher/devel/emb_linux/linux_xeno_zynq_build'
  GEN     ./Makefile
#
# configuration written to .config
#
make: Leaving directory '/home/ggallagher/devel/emb_linux/linux_xeno_zynq_build'
The above came from my machine, so yours won’t look exactly the same, but you should see something similar. For Zynq we are going to use the exact same command to build the kernel that we used when building the Xilinx tree kernel.
You can do any customisation you need by executing:
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- O=<path_to_output_directory> menuconfig
This should bring you to menuconfig, which lets you make any customisations you would like through an interactive window. It’s pretty straightforward; just make sure you have the ncurses development headers installed (on Debian/Ubuntu-style systems that’s the libncurses5-dev package) or it won’t load.
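All of these invocations repeat the same three flags, so one option is a small shell helper; the kmake name and the KOUT output path are my own conveniences, not part of kbuild:

```shell
# kmake: wrap make with the ARCH/CROSS_COMPILE/O flags used throughout
# this post. KOUT is an example output directory; override it as needed.
KOUT="${KOUT:-$HOME/devel/emb_linux/linux_xeno_zynq_build}"
kmake() {
    make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- O="$KOUT" "$@"
}
# usage:
#   kmake xilinx_zynq_defconfig
#   kmake menuconfig
#   kmake UIMAGE_LOADADDR=0x8000 uImage modules dtbs
```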
Once you are finished with any customisation, we are ready to actually build the kernel.
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- O=/home/ggallagher/devel/emb_linux/linux_xeno_zynq_build/ UIMAGE_LOADADDR=0x8000 uImage modules dtbs
This build will take a while to complete, but once it’s done our images should be located under arch/arm/boot in either the kernel source tree or our output directory, depending on how you invoked the build. If you built it like I outlined, you’ll find the uImage in &lt;output directory&gt;/arch/arm/boot/uImage (with the zImage it wraps alongside it). We can copy that file to our SD card and we should see our Zybo boot up.
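Copying the results to the card can be sketched as a small helper; the mount point in the usage line and the zynq-zybo.dtb file name are assumptions for a Zybo setup, so adjust both for your board:

```shell
# copy_to_sd: copy the kernel image and device tree from a kernel build
# output directory to the SD card's mounted FAT boot partition.
copy_to_sd() {
    out=$1    # kernel build output directory
    boot=$2   # mounted boot partition, e.g. /media/$USER/BOOT
    cp "$out/arch/arm/boot/uImage" "$boot/uImage" &&
    cp "$out/arch/arm/boot/dts/zynq-zybo.dtb" "$boot/devicetree.dtb" &&
    sync
}
# usage: copy_to_sd "$HOME/devel/emb_linux/linux_xeno_zynq_build" "/media/$USER/BOOT"
```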
There you have it: that’s how to build mainline Linux for Zynq. Next it’s on to patching our kernel with Xenomai 3, and we should have the start of our real-time Linux distribution.
2 thoughts on “Building Mainline Linux for Zynq”
Thank you for your great post!
I am following your post to build linux kernel for Zynq(Z-7020). After I copy zImage to SD card BOOT partition and use minicom to connect to board, nothing displayed in console.
I am sure that the Zynq is booting from TF card, and the SD card has 2 partitions(one is BOOT partition, FAT32, and the other is ext4).
I wonder if you could give me some advice?