Author: Siro Mugabi

Category: development setups


This is a step-by-step guide on setting up and testing a diskless server/client environment. It is based on a complete QEMU virtual machine setup (server and diskless clients) running on an Ubuntu (12.04 or 13.04) host.

Tags: qemu linux development setups pxelinux grub nfs efi

Update Required.


Server Network Interface Setup

Two NICs are required; they may be configured as follows:

  • eth0 connects to an external network, e.g. the Internet
  • eth1 connects to the development LAN

Server LAN Services

There are four major services to configure:

  • TFTP Services
  • DHCP Services
  • Netboot Bootloaders (PXELINUX, GRUB, etc)
  • Network Filesystem Services (e.g. NFS)

Optional QEMU VM based Infrastructure Setup

All the server configuration instructions included in this guide can be performed from within a virtual environment. This is convenient for preliminary tests or for an instructional/lab session. In fact:

  • since my venerable main server (a 2009-model AMD Phenom II X4, 4GB), which has faithfully provided PXE, TFTP, DHCP and NFS to the rest of my development LAN for three years now, still runs Ubuntu 10.04, and
  • since these instructions are based on Ubuntu 12.04 and 13.04,

the entire guide was developed against a complete QEMU virtual LAN setup (VM server and client nodes) on a host (an aging i3-2350M laptop) running Ubuntu 12.04. See A QEMU VLAN setup for a tutorial on setting up a complete virtual LAN made entirely of QEMU VM nodes.

See A QEMU TAP networking setup for the VM host network interface configuration used in this guide. The following additional configuration on the host will be required to establish a private VLAN between the QEMU VM server and its QEMU VM client nodes:

$ sudo brctl addbr br1
$ sudo ifconfig br1 up

And to allow traffic through this private VLAN (at least with Ubuntu 10.04, 12.04, 13.04), run the following command on the host to disable ethernet filtering:

$ for i in /proc/sys/net/bridge/bridge-nf-* ; do sudo sh -c "echo 0 > $i" ; done

Prepare QEMU Disk Images

Only the VM server will require a QEMU disk image. A disk image containing a fresh Ubuntu installation is used here. It may be used directly or, alternatively, a derived image may be prepared from this base image:

$ ls
prestine-ubuntu-12.04-desktop-amd64.qcow2 # or, an Ubuntu 13.04 image

$ qemu-img create -f qcow2 -o backing_file=prestine-ubuntu-12.04-desktop-amd64.qcow2 vm_server-img.qcow2
Formatting 'vm_server-img.qcow2', fmt=qcow2 size=10737418240 backing_file='prestine-ubuntu-12.04-desktop-amd64.qcow2' encryption=off cluster_size=65536

$ ls
prestine-ubuntu-12.04-desktop-amd64.qcow2 vm_server-img.qcow2

Refer to this link for information on installing Linux distros on QEMU disk images, and to Manipulating disk images with qemu-img for further info on using QEMU disk images.

Instantiate the QEMU VM Server Node

NOTE: Root privileges are required to bring up the TAP interfaces. Workarounds exist, such as this one, but that subject won't be covered here. You may also consider using the virtio-net NIC model for performance reasons (see QEMU Virtio).

$ sudo qemu-system-x86_64 -enable-kvm -smp 2 -m 512 \
  -drive file=vm_server-img.qcow2,cache=writeback \
  -net nic,vlan=0,macaddr=DE:AD:BE:EF:00:11 \
  -net tap,vlan=0,ifname=tap0,, \
  -net nic,vlan=2,macaddr=DE:AD:BE:EF:51:12 \
  -net tap,vlan=2,ifname=tap1,, \
  [ -vga std -usbdevice tablet -daemonize ]

See link for the definitions of the qemu_br*_if*.sh scripts.

Extra Utilities

Network traffic monitoring tools such as tcpdump(8) and tshark(1) (console) or wireshark(1) (GUI) greatly facilitate debugging:

$ sudo apt-get install tcpdump
$ sudo apt-get install wireshark # also installs tshark

Basic usage examples:

$ sudo tcpdump -i br0 -n -t
$ sudo tshark -i br0

Installing sshd(8) will be helpful, especially when working with the QEMU VM server.

$ sudo apt-get install openssh-server

For vi(1) users such as myself, installing an improved version of the editor may also be desirable:

$ sudo apt-get install vim

Setting up TFTP Services


$ sudo apt-get install tftpd-hpa tftp-hpa
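For reference, tftpd-hpa serves /var/lib/tftpboot out of the box, which is why that directory is used throughout this guide. The defaults live in /etc/default/tftpd-hpa (contents as shipped with 12.04-era packages; verify against your own installation):

```text
TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/var/lib/tftpboot"
TFTP_ADDRESS=""
TFTP_OPTIONS="--secure"
```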

Testing and Troubleshooting

$ ls /var/lib/tftpboot/

$ ls -l /var/lib/  | grep tftpboot
drwxr-xr-x 2 root          nogroup   tftpboot

$ sudo chown nobody:nogroup /var/lib/tftpboot

$ sudo chmod 1777 /var/lib/tftpboot

$ ls -l /var/lib/ | grep tftpboot
drwxrwxrwt 2 nobody        nogroup   tftpboot

$ sudo netstat -aup | grep tftpd
udp        0      0 *:tftp        *:*        3385/in.tftpd

$ touch /var/lib/tftpboot/wisdom

$ tftp localhost -c get wisdom

$ ls | grep wisdom

$ rm wisdom /var/lib/tftpboot/wisdom

In addition to using the network traffic monitoring tools mentioned in Section link, also inspect the TFTP server logs in /var/log/syslog, e.g.:

$ cat /var/log/syslog | grep tftpd
in.tftpd[4704]: tftp: client does not accept options
in.tftpd[4720]: tftp: client does not accept options
in.tftpd[4722]: tftp: client does not accept options

As a safe bet, always restart the TFTP server:

$ sudo service tftpd-hpa restart

whenever changes are made to the server's network interface configuration settings (see Section link).

Setting up DHCP Services


$ sudo apt-get install isc-dhcp-server

DHCP Configuration

Edit the configuration file. For example:

$ cat /etc/dhcp/dhcpd.conf

ddns-update-style none;
default-lease-time 600;
max-lease-time 7200;
log-facility local7;
allow booting;
allow bootp;

# Development LAN
subnet netmask {
    range dynamic-bootp; 
    option broadcast-address;
    option routers;
    option domain-name-servers;

    class "netboot-clients" {
        match if substring (option vendor-class-identifier, 0, 9) = "PXEClient";

        if substring (option vendor-class-identifier, 15, 5) = "00007" {
            filename "elilo-3.16-x86_64.efi";
        } else if substring (option vendor-class-identifier, 15, 5) = "00006" {
            filename "elilo-3.16-ia32.efi";
        } else if substring (option vendor-class-identifier, 15, 5) = "00000" {
            filename "boot/grub/i386-pc/core.0";
        }
    }
}

host sungura {
    hardware ethernet DE:AD:BE:EF:B1:AB;
    filename "pxelinux.0";
    option root-path ",v3,hard,rw";
}

host mjanja {
    hardware ethernet DE:AD:BE:EF:31:CA;
    option root-path ",v3,hard,rw";
}

This configuration largely employs the DHCP Vendor Class Identifier (VCI) mechanism to select the filename of the bootloader to pass to a requesting client. In brief, clients attempting a netboot send a vendor class identifier in the format PXEClient:Arch:00000:UNDI:002001 (BIOS/PXE) or PXEClient:Arch:0000{6|7}:UNDI:003001 (UEFI). The field after Arch is used here to distinguish between the types of netboot clients. The meanings of the values used are described in the following table:

VCI    Architecture  Firmware
00000  ia32          BIOS/PXE
00006  ia32          UEFI
00007  x86_64        UEFI
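The substring offsets used in the dhcpd.conf above can be sanity-checked with plain bash parameter expansion. A purely illustrative sketch (the sample VCI string is an assumption matching the format described above):

```shell
# Mimic dhcpd's substring (option vendor-class-identifier, OFF, LEN)
# expressions on a sample x86_64 UEFI client VCI string.
vci="PXEClient:Arch:00007:UNDI:003001"

type=${vci:0:9}    # offset 0, length 9  -> "PXEClient" (matched by the class rule)
arch=${vci:15:5}   # offset 15, length 5 -> "00007" (selects elilo-3.16-x86_64.efi)

echo "$type $arch"
```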

Also note that with this configuration:

  • clients sungura and mjanja will be assigned fixed IP addresses based on their MAC addresses. Any other client will receive a DHCP-assigned IP address.

  • sungura will netboot via PXELINUX. The bootloader that mjanja (and any other netboot client, for that matter) will use depends on its machine architecture (ia32 vs. x86_64) and boot firmware (BIOS/PXE vs. UEFI).

  • The root-path option is used to specify the location of the diskless client rootfs NFS exports, along with the mount options. Use of this option is discussed in more detail in Section link.

Refer to dhcpd(8) for the DHCP server command line, dhcpd.conf(5) for its configuration file man page, dhcp-eval(5) for ISC DHCP conditional evaluation, and dhcp-options(5) for the options sent to DHCP clients.

Finally, this configuration assumes that the server will also provide DNS services to its clients (i.e. the option domain-name-servers setting). Change this setting accordingly, or refer to tutorials such as this one for instructions on setting up DNS in Ubuntu.

Network Interfaces

Linux bridge utilities will be required in the setup:

$ sudo apt-get install bridge-utils

Now, edit the server's network interfaces file. NOTE: Ensure that the network address settings of the development LAN1 on br0 and of the external network on eth0 do not conflict. In the present scenario, the external network address is and the development LAN address is

$ cat /etc/network/interfaces
# order of interface activation
auto lo eth0 eth1 br0

iface lo inet loopback

iface eth0 inet dhcp

iface eth1 inet manual

iface br0 inet static
    bridge_ports eth1
    bridge_maxwait 0

$ sudo invoke-rc.d networking stop # or, "service networking stop"
$ sudo invoke-rc.d networking start # or, "service networking start"
$ ifconfig
br0       Link encap:Ethernet  HWaddr de:ad:be:ef:51:12  
          inet addr:  Bcast:  Mask:
          inet6 addr: fe80::dcad:beff:feef:5112/64 Scope:Link

eth0      Link encap:Ethernet  HWaddr de:ad:be:ef:c3:24  
          inet addr:  Bcast:  Mask:
          inet6 addr: fe80::dcad:beff:feef:11/64 Scope:Link

eth1      Link encap:Ethernet  HWaddr de:ad:be:ef:51:12  

lo        Link encap:Local Loopback  
          inet addr:  Mask:

The notable setting in /etc/network/interfaces is the bridge_ports eth1 option. Adding eth1 to br0 allows TAP/bridge networking of local QEMU VM client instances on the (physical) server machine, as well as providing a bridge to external client machines (and remote QEMU VM client instances on those machines). Refer to brctl(8) for information on configuring a Linux bridge and bridge-utils-interfaces(5) for a detailed explanation of the bridge_* options.

Now that Linux bridge br0 is configured, instantiate the DHCP server to start listening on the interface:

$ sudo service isc-dhcp-server start

Testing and Troubleshooting

The following dry-run will report any errors in the configuration file:

$ sudo dhcpd -t
Internet Systems Consortium DHCP Server 4.1-ESV-R4
Copyright 2004-2011 Internet Systems Consortium.
All rights reserved.
For info, please visit

To check whether dhcpd is actually running and listening on (an) interface(s):

$ sudo netstat -aup | grep dhcpd
 udp    0       0       *:bootps        *:*     7256/dhcpd

In addition to using the network traffic monitoring tools mentioned in Section link, also inspect the DHCP server logs in /var/log/syslog, e.g.:

$ cat /var/log/syslog | grep dhcpd
dhcpd: DHCPDISCOVER from de:ad:be:ef:b1:ab via br0
dhcpd: DHCPOFFER on to de:ad:be:ef:b1:ab via br0
dhcpd: DHCPREQUEST for ( from de:ad:be:ef:b1:ab via br0
dhcpd: DHCPACK on to de:ad:be:ef:b1:ab via br0
dhcpd: DHCPDISCOVER from de:ad:be:ef:31:ca via br0
dhcpd: DHCPOFFER on to de:ad:be:ef:31:ca via br0
dhcpd: DHCPREQUEST for ( from de:ad:be:ef:31:ca via br0
dhcpd: DHCPACK on to de:ad:be:ef:31:ca via br0
dhcpd: DHCPDISCOVER from de:ad:be:ef:31:ca via br0

As a general rule, always restart the DHCP server whenever changes are made to the server's network interface configuration settings.

Setting Up Netbootloaders

A number of netbootloaders exist, with varying degrees of capability, e.g. link. This section presents only a few of the many possible netboot configurations.

PXELINUX

PXELINUX is a SYSLINUX derivative for booting Linux from a network server, using a network ROM conforming to the Intel PXE (Preboot Execution Environment) specification.

Obtaining the PXELINUX bootloader

A fresh Ubuntu installation already contains the pxelinux.0 bootloader in its SYSLINUX package:

$ syslinux --version
syslinux 4.05  Copyright 1994-2011 H. Peter Anvin et al

$ ls /usr/lib/syslinux | grep pxelinux.0

Alternatively, download a tarball from here.

Install the bootloader

$ cp /usr/lib/syslinux/pxelinux.0 /var/lib/tftpboot/
$ sudo chown nobody:nogroup /var/lib/tftpboot/pxelinux.0
$ sudo chmod 1777 /var/lib/tftpboot/pxelinux.0 
$ ls -l /var/lib/tftpboot/pxelinux.0 
-rwxrwxrwt 1 nobody nogroup  /var/lib/tftpboot/pxelinux.0

Perform an initial test run

PXELINUX Configuration File

Edit the bootloader's configuration file. Consult this link for details on writing PXELINUX configuration files:

$ mkdir /var/lib/tftpboot/pxelinux.cfg
$ cat /var/lib/tftpboot/pxelinux.cfg/default

DISPLAY pxelinux.cfg/default
F1 pxelinux.cfg/F1.txt

LABEL 0
kernel vmlinuz
append initrd=initrd.img console=tty0 console=ttyS0

$ cat /var/lib/tftpboot/pxelinux.cfg/F1.txt

Available Kernel Images:

Default: 0

0) PXELINUX boot test

The /var/lib/tftpboot/pxelinux.cfg/F1.txt file is not a requirement. It is simply included here as an illustration of creating simple text-based menus for the PXELINUX bootloader interface. The Linux console= boot option is also thrown in for debugging purposes - it is not essential.
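Relatedly, should the PXELINUX interface itself be wanted on the serial port, the SERIAL directive can be added; it must be the very first line of the configuration file (port 0 at 115200 baud is an assumption here; adjust to the actual console settings):

```text
SERIAL 0 115200
```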

Install kernel images

Since the primary objective here is to verify bootloader functionality, the readily available distro kernel images can be used directly.

$ cd /var/lib/tftpboot/
$ cp /boot/vmlinuz-`uname -r` . 
$ cp /boot/initrd.img-`uname -r` .
$ ln -s vmlinuz-`uname -r` vmlinuz
$ ln -s initrd.img-`uname -r` initrd.img

Finally, set appropriate permissions for the TFTP server's root directory (particularly for the kernel images):

$ sudo chown nobody:nogroup -R /var/lib/tftpboot/
$ sudo chmod 1777 -R /var/lib/tftpboot/

otherwise pxelinux.0 may complain with a rather misleading/confusing error message.

Test Boot

Recall from the DHCP proxy configuration in Section link that the client with the arbitrary MAC address DE:AD:BE:EF:B1:AB was set to boot via PXELINUX. (Your MAC address settings will certainly vary if booting up a physical machine.)

$ sudo qemu-system-x86_64 -smp 2 -enable-kvm -m 512 -boot n \
  -net nic,vlan=2,macaddr=DE:AD:BE:EF:B1:AB \
  -net tap,vlan=2,ifname=tap2,, -serial stdio

Below are screenshots of a boot instance. In this scenario, both the server and client machines were QEMU VM instances:

Since ordinary kernel images (i.e. not configured for a network filesystem mount) were used - overlooking the fact that NFS services are yet to be configured on the server - the PXELINUX boot process stops at the initramfs prompt.

GRUB

Obtain GRUB

The following GRUB download and build was performed rather than installing and using the Ubuntu distro package.

$ git clone git://
$ mkdir {temp,install}
$ SOURCE=${PWD}/grub
$ TEMP=${PWD}/temp
$ INSTALL=${PWD}/install
$ cd $SOURCE
$ sudo apt-get install bison flex autoconf automake autotools-dev libtool gettext libdevmapper-dev
$ ./ 
$ cd $TEMP
$ ${SOURCE}/configure --prefix=${INSTALL}/usr
GRUB2 will be compiled with following components:
Platform: i386-pc
With devmapper support: Yes
With memory debugging: No
With disk cache statistics: No
With boot time statistics: No
efiemu runtime: Yes
grub-mkfont: No (need freetype2 library)
grub-mount: No (need FUSE library)
starfield theme: No (No build-time grub-mkfont)
With libzfs support: No (need zfs library)
Build-time grub-mkfont: No (need freetype2 library)
Without unifont (no build-time grub-mkfont)
Without liblzma (no support for XZ-compressed mips images) (need lzma library)
$ make [-jN]
$ make install

Install Bootloader Files

$ ls
grub  install  temp
$ ${INSTALL}/usr/bin/grub-mknetdir --net-directory=./

The following harmless warning was emitted upon running grub-mknetdir:

${INSTALL}/usr/bin/grub-mknetdir: warning: cannot open directory `${INSTALL}/usr/share/locale': No such file or directory.
Netboot directory for i386-pc created. Configure your DHCP server to point to ./boot/grub/i386-pc/core.0

Proceeding with the installation process:

$ ls
boot  grub  install  temp

$ cp -a boot /var/lib/tftpboot/

Perform an initial test run

GRUB Configuration File

Determine the location where to place the bootloader's configuration file:

$ cat /var/lib/tftpboot/boot/grub/i386-pc/grub.cfg 
source /boot/grub/grub.cfg

Details on writing GRUB configuration files can be found here. Here, grub.cfg is edited as follows:

$ cat /var/lib/tftpboot/boot/grub/grub.cfg

set root=(tftp,
menuentry "GRUB PXE Boot Test" {
    linux /vmlinuz console=tty0 console=ttyS0
    initrd /initrd.img
}

NOTE: The set root=(tftp, line is optional; GRUB PXE will automagically figure this out. In fact, setting wrong values will leave GRUB PXE unable to locate the kernel images.

Kernel images, vmlinuz and initrd.img, and their location remain the same as described in Section link.
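If GRUB loads but cannot find the images, the values its PXE stack auto-detected can be inspected interactively at the grub> prompt. A debugging sketch (variable names per GRUB 2's network support; not required for a successful boot):

```text
grub> echo $root
grub> echo $prefix
grub> echo $net_default_server
```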


Before attempting a boot test, set appropriate permissions (particularly for the kernel images):

$ sudo chown nobody:nogroup -R /var/lib/tftpboot/
$ sudo chmod 1777 -R /var/lib/tftpboot/

otherwise GRUB may complain when attempting to retrieve the kernel images from the TFTP server.

Test Boot

Just like the case study in Section link, a GRUB PXE boot test can also be done with a QEMU client. The main difference in the QEMU command line is the assignment of the client's MAC address, i.e. it may take any value other than DE:AD:BE:EF:B1:AB (see Section link). For illustration purposes, the value DE:AD:BE:EF:31:CA is used so that the client gets assigned the fixed IP address. Also recall that the DHCP proxy settings are such that any BIOS/PXE client with an Ethernet address other than DE:AD:BE:EF:B1:AB uses the GRUB PXE bootloader.

Now, since both the client and server were QEMU VM instances in this particular case study, the following commandline was used:

$ sudo qemu-system-x86_64 -smp 2 -enable-kvm -m 512 -boot n \
  -net nic,vlan=2,macaddr=DE:AD:BE:EF:31:CA \
  -net tap,vlan=2,ifname=tap3,, -serial stdio

which resulted in the following boot process:

which stops at the initramfs prompt for reasons explained in Section link.

ELILO

Obtain ELILO

ELILO can be downloaded from here.

NOTE: According to this, although ELILO is still actively maintained (by Jason Fleischli), it is no longer in active development, i.e. new releases will only contain major bug fixes, not new features. An alternative to ELILO is GRUB2 with EFI support (check out this interesting account).

$ tar xzf elilo-3.16-all.tar.gz
$ ls


According to the settings in the DHCP configuration file in Section link, the bootloader binaries are installed in the TFTP server root directory:

$ cp elilo-3.16-x86_64.efi /var/lib/tftpboot/
$ cp elilo-3.16-ia32.efi /var/lib/tftpboot/

Perform an initial test run

ELILO Configuration File

Refer to this link for details on writing ELILO configuration files. Below are sample configs for a test boot.

$ cat /var/lib/tftpboot/elilo-x86_64.conf

append="console=tty0 console=ttyS0"

$ cat /var/lib/tftpboot/elilo-ia32.conf 

append="console=tty0 console=ttyS0"

Kernel images, vmlinuz and initrd.img, and their location remain the same as described in Section link. Needless to say, vmlinuz-32 and initrd.img-32 should be 32-bit kernel images. Also note that the kernel image, vmlinuz*, must have the necessary EFI support:


Setting permissions:

$ sudo chown nobody:nogroup -R /var/lib/tftpboot/
$ sudo chmod 1777 -R /var/lib/tftpboot/

Test Boot

To perform a UEFI netboot with QEMU, the firmware provided by the Open Virtual Machine Firmware (OVMF) package is used in place of QEMU's native BIOS/iPXE. In the following example, the release is used and can be downloaded from here. In addition, a recent QEMU version, e.g. v1.6 or later, should be used (for the client node). Consult QEMU intro for details on building and installing QEMU from source.

$ unzip

$ QEMUSYSv1_6_x86_64=${QEMU_v1_6_INSTALL_PATH}/bin/qemu-system-x86_64

$ ${QEMUSYSv1_6_x86_64} --version
QEMU emulator version 1.6.0, Copyright (c) 2003-2008 Fabrice Bellard

$ sudo $QEMUSYSv1_6_x86_64 -smp 2 -enable-kvm -m 512 -boot n \
 -net nic,vlan=2,macaddr=DE:AD:BE:EF:31:CA \
 -net tap,vlan=2,ifname=tap2,, -serial stdio -L . -bios OVMF.fd

The -L and -bios options are used to locate and specify, respectively, the UEFI firmware. Note that the MAC address DE:AD:BE:EF:31:CA is specified to simulate a scenario where the client receives a fixed IP address according to the DHCP proxy settings in Section link. Like the example in Section link, this client also gets assigned the IP address. But unlike that case study, this client uses the UEFI protocol to retrieve the corresponding EFI bootloader, courtesy of the DHCP Vendor Class Identifier mechanism.

Below are screenshots of QEMU booting with OVMF/ELILO.

NFS Services

Linux NFS allows a server to share directories and files with clients over a network. In the context of diskless clients, NFS allows the remote machines to mount their entire rootfs from a server. Typical embedded development setups follow this approach.


$ sudo apt-get install nfs-kernel-server rpcbind


NFS Exports

NFS server export table

Assuming that the following rootfs installations already exist on the server:

$ ls /opt/nfsroot_sungura/
bin   dev  home  linuxrc     media  opt   root  sbin  tmp  var
boot  etc  lib   lost+found  mnt    proc  run   sys   usr

$ ls /opt/nfsroot_mjanja/
bin   dev  home  linuxrc     media  opt   root  sbin  tmp  var
boot  etc  lib   lost+found  mnt    proc  run   sys   usr

The server's /etc/exports is configured as:

$ cat /etc/exports


For tighter control, the respective client IP address assignments (see Section link) may be used i.e. for sungura and for mjanja. See exports(5) for a detailed explanation of all the available options for /etc/exports. More information can be found here and there.
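A purely hypothetical /etc/exports illustration (the guide's actual subnet and option values are elided above; substitute your own development LAN values). Note that no_root_squash matters for an NFS-root, since the client's root user must be able to write to its rootfs:

```text
/opt/nfsroot_sungura 192.168.2.0/24(rw,sync,no_root_squash,no_subtree_check)
/opt/nfsroot_mjanja  192.168.2.0/24(rw,sync,no_root_squash,no_subtree_check)
```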

Restart the NFS server:

$ sudo service nfs-kernel-server restart

Check whether nfsd is up and listening:

$ sudo netstat -aup | grep nfs
udp        0      0 *:nfs         *:*        -
udp6       0      0 [::]:nfs      [::]:*     -


$ rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100024    1   udp  52888  status
    100024    1   tcp  41153  status
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100021    1   udp  55979  nlockmgr
    100021    3   udp  55979  nlockmgr
    100005    1   udp  44461  mountd
    100005    1   tcp  56464  mountd

Basically, nfs, portmapper and mountd must be displayed in the output for successful NFS operation. See rpcinfo(8).
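The eyeball check above can also be scripted. A sketch (the rpcinfo -p style output is inlined here so the snippet stands alone; in practice, pipe the real command's output through the same loop):

```shell
# Verify that the three RPC services required for NFS operation are
# registered, given rpcinfo -p style output (column 5 = service name).
rpcinfo_out='100000 4 tcp   111 portmapper
100003 3 tcp  2049 nfs
100005 1 udp 44461 mountd'

ok=yes
for svc in nfs portmapper mountd; do
    echo "$rpcinfo_out" | awk '{print $5}' | grep -qx "$svc" || ok=no
done
echo "required RPC services registered: $ok"
```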

To view all exports:

$ sudo exportfs -v


$ showmount -e
Export list for lumumba-virtual-machine:

Check out exportfs(8) and showmount(8).

Perform a local NFS mount test. Recent versions of Ubuntu, e.g. 12.04, use NFSv4 by default. If the following NFS mount command hangs and reports:

$ mkdir nfsmnt

$ sudo mount -t nfs nfsmnt

$ tail /var/log/syslog
RPC: AUTH_GSS upcall timed out.
Please check user daemon is running.
NFS: nfs4_discover_server_trunking unhandled error -512. Exiting with error EIO

and if you don't know any better, try using NFSv3:

$ sudo mount -t nfs -o v3 nfsmnt

$ ls nfsmnt/
bin   dev  home  linuxrc     media  opt   root  sbin  tmp  var
boot  etc  lib   lost+found  mnt    proc  run   sys   usr

$ sudo umount nfsmnt

In fact, to simplify illustration, this is the approach used in this guide. See nfs(5) for other available NFS mount options.

Also note that whenever /etc/exports gets modified, the following command should be executed:

$ sudo exportfs -ra

... or, if need be, re-start the NFS server.

Passing the NFS-root exports information to boot clients

There is more than one way of passing NFS-root information to the diskless client kernel images during boot. The choice of method will depend on the given scenario, or will simply be a matter of preference. Specifying NFS-root information by way of configuration files allows for flexible support of multiple boot clients: each client gets assigned an NFS-root mount according to certain criteria, e.g. its IP address or MAC address.
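Whatever the delivery mechanism, the information ultimately lands on the client kernel's command line in the nfsroot form documented in the kernel's Documentation/filesystems/nfsroot.txt. A hypothetical example (all addresses and paths invented for illustration):

```text
root=/dev/nfs nfsroot=192.168.2.1:/opt/nfsroot_sungura,v3,hard,rw ip=dhcp
```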

A few approaches of passing NFS-root information to diskless clients via configuration files are described below:

  • via the DHCP server root-path option

    Specifying the root-path option in the DHCP server configuration file is one way of passing NFS-root information to the diskless client kernel images during boot. This is the approach used in this guide (See Section link). One advantage of this method is that it allows for generic specification of the NFS-root regardless of the bootloader used i.e. different bootloaders require specific syntax for NFS-root specification in their configuration files.

  • via the bootloader's configuration file

    Different bootloaders have specific mechanisms to support multiple diskless clients. For example, the PXELINUX bootloader supports a certain naming scheme for its configuration files based on, e.g., a client's assigned IP address2. Each PXELINUX configuration file could then specify the respective NFS-root location and mount options.
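Concretely, for a given client PXELINUX first tries pxelinux.cfg/01-<mac-address>, then the client's IP address rendered in upper-case hex, progressively truncated, and finally pxelinux.cfg/default. A sketch of the hex name derivation (the address 192.168.2.100 is a hypothetical example; the guide's real addresses are elided above):

```shell
# Render a dotted-quad IP address the way PXELINUX names its per-client
# config files: each octet as two upper-case hex digits.
ip=192.168.2.100   # hypothetical client address
hex=$(printf '%02X%02X%02X%02X' $(echo "$ip" | tr '.' ' '))
echo "$hex"        # C0A80264

# Fallback search order: drop one trailing hex digit per attempt.
while [ -n "$hex" ]; do
    echo "pxelinux.cfg/$hex"
    hex=${hex%?}
done
```

After exhausting these candidates, PXELINUX falls back to pxelinux.cfg/default.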


The /etc/fstab files in the NFS exports were configured this way3:

$ cat /opt/nfsroot_${NAME}/etc/fstab
# /etc/fstab: static file system information.
# <file system> <mount pt>          <type>      <options>                       <dump>  <pass>
/dev/nfs        /                   nfs         rw,sync,rsize=8192,wsize=8192       0       1
proc            /proc               proc        defaults                            0       0
devpts          /dev/pts            devpts      defaults,gid=5,mode=620             0       0
tmpfs           /dev/shm            tmpfs       mode=0777                           0       0
tmpfs           /tmp                tmpfs       defaults                            0       0
sysfs           /sys                sysfs       defaults                            0       0

See fstab(5) for general configuration info on /etc/fstab, mount(8) for the generic mount options and nfs(5) for the NFS specific mount options.

Number of supported diskless clients

$ cat /etc/default/nfs-kernel-server 
# Number of servers to start up
RPCNFSDCOUNT=8
# To disable nfsv4 on the server, specify '--no-nfs-version 4' here

In other words, if the server is to support more than 8 diskless clients, change the RPCNFSDCOUNT value accordingly (and restart the NFS server).
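For example (the value 16 is arbitrary, purely for illustration):

```text
RPCNFSDCOUNT=16
```

followed by a restart of nfs-kernel-server.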

Kernel Images

Kernel NFS Support

NFSD support is required by the server machine's kernel. The Ubuntu distro kernel ships with the following NFS server support configured:

$ cat /boot/config-$(uname -r) | grep NFSD

The kernels for the diskless clients should, at least, enable the following NFS client support:


The Ubuntu distro kernels ship with the above, plus the following additional NFS client support already configured:


Note that the configuration options listed above for the diskless client kernels require the presence of an NFS-root capable initramfs image. This is the recommended approach. However, it may still be possible to use a standalone vmlinuz image4. In that case:

  • the essential client options listed above will have to be built-in and,

  • the following options will also have to be enabled statically in the kernel to include NFS-root mount support and enable auto-configuration of devices and the routing table during boot based on information supplied by the DHCP server:

    CONFIG_IP_PNP=y
    CONFIG_IP_PNP_DHCP=y
    CONFIG_ROOT_NFS=y  # made "visible" by CONFIG_IP_PNP

    and, in addition, support for the board's network device will have to be built-in.

Preparing an NFS-root capable Initramfs

Continuing with the distro kernel images used so far, a new NFS-root capable initramfs image is built to replace the standard distro image in the TFTP server root directory.


$ cp -a /etc/initramfs-tools initramfs-tools-nfs

$ vim initramfs-tools-nfs/initramfs.conf # set BOOT=nfs (instead of the default BOOT=local)


$ mkinitramfs -d initramfs-tools-nfs -o initrd.img-`uname -r`-nfs `uname -r`

$ ls
initramfs-tools-nfs initrd.img-3.11.0-15-generic-nfs

Running the following variant of the mkinitramfs command line keeps the temporary directory used to compile the initramfs image in the present working directory. This is convenient should the need arise to inspect the contents of the image5:

$ TMPDIR=$INITRDDIR mkinitramfs -d initramfs-tools-nfs -o initrd.img-`uname -r`-nfs `uname -r` -k
Working files in /home/lumumba/initrd_stuff/mkinitramfs_xR63DA and overlay in /home/lumumba/initrd_stuff/mkinitramfs-OL_10yVQu

$ ls

$ ls mkinitramfs_IV5wRt/
bin  conf  etc  init  lib  lib64  run  sbin  scripts

See mkinitramfs(8) for the man page.

Perform an initial test run

Install the NFS-root capable kernel images

Initial boot tests will be done using the NFS-root capable initramfs prepared in Section link.


$ cp initrd.img-3.11.0-15-generic-nfs /var/lib/tftpboot/

$ cd /var/lib/tftpboot/

$ ls -l
drwxrwxrwt 3 nobody  nogroup   boot
-rwxrwxrwt 1 nobody  nogroup   elilo-3.16-ia32.efi
-rwxrwxrwt 1 nobody  nogroup   elilo-3.16-x86_64.efi
-rwxrwxrwt 1 nobody  nogroup   elilo-ia32.conf
-rwxrwxrwt 1 nobody  nogroup   elilo-x86_64.conf
lrwxrwxrwx 1 nobody  nogroup   initrd.img -> initrd.img-3.11.0-15-generic
-rwxrwxrwt 1 nobody  nogroup   initrd.img-3.11.0-15-generic
-rw-r--r-- 1 lumumba lumumba   initrd.img-3.11.0-15-generic-nfs
-rwxrwxrwt 1 nobody  nogroup   pxelinux.0
drwxrwxrwt 2 nobody  nogroup   pxelinux.cfg
lrwxrwxrwx 1 nobody  nogroup   vmlinuz -> vmlinuz-3.11.0-15-generic
-rwxrwxrwt 1 nobody  nogroup   vmlinuz-3.11.0-15-generic

$ sudo rm initrd.img

$ ln -s initrd.img-3.11.0-15-generic-nfs initrd.img

$ sudo chown nobody:nogroup -R /var/lib/tftpboot/

$ sudo chmod 1777 -R /var/lib/tftpboot/

Test Boot

The exact same QEMU launch command lines and corresponding bootloader configurations in Sections link, link and link are then used here unchanged. They all yield the following final result:

i.e. the boot process finally reaches the login prompt.

This last section concludes the guide. Hopefully, with only a few minor modifications, the setup can be tuned for your own particular diskless client (embedded development) LAN requirements.



1. If running the QEMU based setup, this will be the private LAN between the VM instances [go back]

2. Refer to this link for PXELINUX configuration [go back]

3. These settings are not optimal [go back]

4. But this is, seemingly, tricky or deprecated in Linux v3.x [go back]

5. Also see An initramfs tutorial [go back]