This document provides guides that show how to use extra features and customize configurations in this Software Stack.

Extra Features

Networking

The FVP is configured by default to use “user mode networking”, which simulates an IP router and DHCP server to avoid additional host dependencies and networking configuration. Outbound connections work automatically, e.g. by running:

wget www.arm.com

Inbound connections require an explicit port mapping from the host. By default, port 8022 on the host is mapped to port 22 on the FVP, so that the following command will connect to an ssh server running on the FVP:

ssh [email protected] -p 8022

To map other ports from the host, add an FVP parameter containing the port mapping to the command, as below. The value of bp.virtio_net.hostbridge.userNetPorts is a comma-separated list of hostport=fvpport mappings; the additional 8080=80 mapping shown here is purely illustrative:

kas shell -k \
    v8r64/meta-armv8r64-extras/kas/virtualization.yml \
    -c "../layers/meta-arm/scripts/runfvp \
        --verbose --console -- --parameter \
        'bp.virtio_net.hostbridge.userNetPorts=8022=22,8080=80'"

User mode networking does not support ICMP, so ping will not work.

More details on this topic can be found in the User mode networking section of the Fast Models Reference Guide.

File Sharing between Host and FVP

It is possible to share a directory between the host machine and the FVP using the virtio P9 device component included in the kernel. To do so, create a directory to be mounted from the host machine:

mkdir /path/to/host-mount-dir

Then, add the following parameter containing the path to the directory when launching the model:

--parameter 'bp.virtiop9device.root_path=/path/to/host-mount-dir'

e.g. for the virtualization build:

kas shell -k \
    v8r64/meta-armv8r64-extras/kas/virtualization.yml \
    -c "../layers/meta-arm/scripts/runfvp \
        --verbose --console -- --parameter \
        'bp.virtiop9device.root_path=/path/to/host-mount-dir'"

Once you are logged into the FVP, the host directory can be mounted onto a directory on the model (FM is the mount tag exposed by the virtio P9 device) using the following command:

mount -t 9p -o trans=virtio,version=9p2000.L FM /path/to/fvp-mount-dir
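If the share should be mounted automatically at boot, the same options can be expressed as an /etc/fstab entry on the FVP. This is a sketch only; /path/to/fvp-mount-dir is the same placeholder mount point used above:

```
# /etc/fstab entry on the FVP (sketch; the mount point is a placeholder)
FM  /path/to/fvp-mount-dir  9p  trans=virtio,version=9p2000.L  0  0
```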

Customize Configuration

Customizing the Zephyr Configuration

The Zephyr repository contains two relevant board definitions, fvp_baser_aemv8r and fvp_baser_aemv8r_smp, each of which provides a base defconfig and device tree for the build. In the Yocto build, the board definition is selected dynamically based on the number of CPUs required by the application recipe.

The defconfig can be extended by adding one or more .conf files to SRC_URI (which are passed to the OVERLAY_CONFIG Zephyr configuration flag).
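As an illustration, a recipe extension file could add such an overlay. The file names below (zephyr-helloworld_%.bbappend and enable-logging.conf) are hypothetical and not part of the stack:

```
# zephyr-helloworld_%.bbappend (hypothetical file name; sketch only)
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
SRC_URI += "file://enable-logging.conf"
```

The referenced files/enable-logging.conf would then contain ordinary Zephyr Kconfig settings, for example CONFIG_LOG=y.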


The file extension for Zephyr config overlays (.conf) differs from the extension used by config fragments in other recipes (.cfg), despite the similar functionality.

The device tree can be modified by adding one or more .overlay files to SRC_URI (which are passed to the DTC_OVERLAY_FILE Zephyr configuration flag). These overlays can modify, add or remove nodes in the board’s device tree.
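A minimal sketch of such a .overlay file is shown below. It assumes the board's device tree contains a node labelled uart0; the label and the property are illustrative, not taken from the actual fvp_baser_aemv8r tree:

```
/* example.overlay -- hypothetical device tree overlay */
&uart0 {
    /* Override a property on an existing node */
    current-speed = <57600>;

    /* A node can also be removed from the build by disabling it:
     * status = "disabled";
     */
};
```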

For an example of these overlay files and how to apply them to the build, see the modifications to run Zephyr applications on Xen in the directory meta-armv8r64-extras/dynamic-layers/virtualization-layer/recipes-kernel/zephyr-kernel/.

The Important Build System Variables section of the Zephyr documentation provides more information.


Zephyr’s device tree overlays have a different syntax from U-Boot’s device tree overlays.

Customizing the Xen Domains

The default configuration contains two pre-configured domains: XEN_DOM0LESS_DOM_LINUX and XEN_DOM0LESS_DOM_ZEPHYR, with the BitBake varflags set appropriately. These varflags define where to find the domain binaries in the build configuration, where to load them in memory at runtime, and how to boot each domain. Note that no attempt is made to validate overlapping memory regions, or even whether the defined addresses fit in the FVP RAM. For more information, see the notes in the file meta-armv8r64-extras/classes/xen_dom0less_config.bbclass.

By default, the two pre-configured domains currently run the following images: a Linux rootfs image (core-image-minimal) for XEN_DOM0LESS_DOM_LINUX and a Zephyr application (zephyr-rpmsg-demo) for XEN_DOM0LESS_DOM_ZEPHYR.

The default Linux rootfs image can be configured using the variable XEN_DOM0LESS_LINUX_IMAGE and the default Zephyr application can be configured using the variable XEN_DOM0LESS_ZEPHYR_APPLICATION. For example, to use core-image-base and zephyr-helloworld instead of the defaults, run:

XEN_DOM0LESS_LINUX_IMAGE="core-image-base" \
    XEN_DOM0LESS_ZEPHYR_APPLICATION="zephyr-helloworld" \
    kas build v8r64/meta-armv8r64-extras/kas/virtualization.yml

Customize Parameters

Customizing the FVP Parameters

Some aspects of FVP behavior can be configured through command-line parameters. When starting the FVP using the runfvp script, these parameters can be customized by placing them after the -- separator. For example, in the following command, the port mapping is customized through the additional FVP parameter bp.virtio_net.hostbridge.userNetPorts, which overrides the default value of the same parameter defined in meta-armv8r64-extras/classes/xen_dom0less_image.bbclass (the extra 8080=80 mapping shown here is illustrative):

kas shell -k \
    v8r64/meta-armv8r64-extras/kas/virtualization.yml \
    -c "../layers/meta-arm/scripts/runfvp \
        --verbose --console -- --parameter \
        'bp.virtio_net.hostbridge.userNetPorts=8022=22,8080=80'"

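The userNetPorts value is a comma-separated list of hostport=fvpport pairs. The following standalone shell sketch (illustrative only, no FVP required) decomposes such a list to show the format:

```shell
#!/bin/sh
# Decompose a userNetPorts-style mapping list into its pairs.
# The mapping string itself is just an example.
mappings="8022=22,8080=80"

old_ifs=$IFS
IFS=','
for pair in $mappings; do
    host_port=${pair%%=*}   # text before the '='
    fvp_port=${pair#*=}     # text after the '='
    echo "host port ${host_port} forwards to FVP port ${fvp_port}"
done
IFS=$old_ifs
# prints:
# host port 8022 forwards to FVP port 22
# host port 8080 forwards to FVP port 80
```

Because the parameter overrides the default value rather than extending it, keep 8022=22 in the list if ssh access should continue to work.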
The default parameters used by the Software Stack to run the FVP are defined in fvp-baser-aemv8r64.conf and in the classes of meta-armv8r64-extras.

Among them, fvp-baser-aemv8r64.conf defines the base FVP parameters of the Software Stack, and is also the default FVP parameter set for the Baremetal Linux stack. Based on these defaults, meta-armv8r64-extras/classes/zephyr-fvpboot.bbclass customizes the FVP parameters for the Baremetal Zephyr stack, and meta-armv8r64-extras/classes/xen_dom0less_image.bbclass customizes the FVP parameters for the Virtualization stack.

You can run the command below to list the FVP parameters together with their descriptions and permitted values. For more details about the FVP and its supported parameters, see the Fast Models FVP Reference Guide.

kas shell -k v8r64/meta-armv8r64-extras/kas/virtualization.yml \
    -c "../layers/meta-arm/scripts/runfvp -- --list-params"


The default parameters are carefully tuned; changing parameters you do not fully understand may lead to unpredictable results.

Customizing Build Environment Parameters

Some aspects of the Software Stack's target image and runtime behavior can be configured at build time via command-line environment variables. The build environment variables supported by the Software Stack, and their scope of application, are detailed below.


TESTIMAGE_AUTO

    Controls if testimage runs automatically after an image build.

    Value:

    • 0 (default)

      Do not run the test suite automatically after an image build

    • 1

      Run the test suite automatically after an image build

    Applicable stacks:

    • Baremetal Zephyr

    • Baremetal Linux

    • Virtualization

    See the section Validation for more details.


XEN_DOM0LESS_ZEPHYR_APPLICATION

    Specifies the Zephyr application recipe to be used.

    Value:

    • zephyr-rpmsg-demo (default)

    • zephyr-helloworld

    • zephyr-synchronization

    • zephyr-philosophers

    Applicable stacks:

    • Baremetal Zephyr

    • Virtualization

    One exception: zephyr-rpmsg-demo only works on the Virtualization stack. See the Zephyr Sample Applications section for more information.


    Specifies which domain(s) will be enabled on Xen.

    Applicable stacks:

    • Virtualization

    See the section Customizing the Xen Domains for more details.


XEN_DOM0LESS_LINUX_IMAGE

    Configures the Linux rootfs image.

    Value:

    • core-image-minimal (default)

    Applicable stacks:

    • Virtualization

    The Images section of the Yocto Manual provides details about Linux rootfs images, and lists more images that may be used as XEN_DOM0LESS_LINUX_IMAGE but are not officially supported by this Software Stack.


    Controls if the demo application runs automatically after Linux boot.

    Value:

    • 0

      Do not run the Demo Applications automatically in the Linux domain after system boot

    • 1 (default)

      Run the Demo Applications automatically in the Linux domain after system boot

    Applicable stacks:

    • Virtualization

For example, to build zephyr-synchronization on the Baremetal Zephyr stack and run testimage validation after the image build, use the command below:

TESTIMAGE_AUTO="1" \
    XEN_DOM0LESS_ZEPHYR_APPLICATION="zephyr-synchronization" \
    kas build v8r64/meta-armv8r64-extras/kas/baremetal-zephyr.yml

Another example combines parameters: run only the Linux domain on Xen using core-image-base as the rootfs, disable testimage at build time, and do not run the demo application automatically at runtime:

XEN_DOM0LESS_LINUX_IMAGE="core-image-base" \
    kas build v8r64/meta-armv8r64-extras/kas/virtualization.yml