..
 # Copyright (c) 2022-2023, Arm Limited.
 #
 # SPDX-License-Identifier: MIT

.. _Applications:

############
Applications
############

The following diagram shows the architecture of the applications implemented
in this Software Stack:

.. image:: /images/armv8r64-demo-application-arch.png
   :alt: Arm v8r64 Demo Application Architecture

.. _Inter-VM Communication:

**********************
Inter-VM Communication
**********************

As shown in the architecture diagram above, Inter-VM communication in this
Software Stack is based on a shared memory mechanism to transfer data between
VMs. It is implemented using the OpenAMP framework in the application layer.
The Xen hypervisor provides a communication path using static shared memory
and a static event channel for data transfer, and the guest OSes (Linux and
Zephyr) in the middle expose the static shared memory and the static event
channel to upper-layer applications. For the implementation of the static
shared memory and the static event channel in Xen and the two guest OSes,
refer to the sections :ref:`Hypervisor (Xen)`, :ref:`Linux Kernel` and
:ref:`Zephyr` in :ref:`Components`.

RPMsg (Remote Processor Messaging) is a messaging protocol enabling
communication between two VMs, which can be used by Linux as well as
real-time OSes. The RPMsg implementation in the `OpenAMP`_ library is based
on virtio. This Software Stack introduces the `meta-openamp`_ layer to
provide support for the OpenAMP library.

.. _OpenAMP: https://github.com/OpenAMP/open-amp
.. _meta-openamp: https://github.com/OpenAMP/meta-openamp

.. _Docker Container:

****************
Docker Container
****************

Support for Docker containers in this Software Stack is provided by the
`docker-ce recipe`_ in the `meta-virtualization`_ layer. Running Docker also
requires ``kernel-module-xt-nat``, which is enabled by
:repo:`meta-armv8r64-extras/dynamic-layers/virtualization-layer/recipes-containers/docker/docker-ce_git.bbappend`.

.. _meta-virtualization: https://git.yoctoproject.org/meta-virtualization
.. _docker-ce recipe: https://git.yoctoproject.org/meta-virtualization/tree/recipes-containers/docker/docker-ce_git.bb?h=langdale

.. _Demo Applications:

*****************
Demo Applications
*****************

There are two demo applications in this Software Stack, demonstrating the two
use scenarios described in the :ref:`High Level Architecture` section of
:ref:`Introduction`:

* RPMsg Demo (:repo:`components/apps/rpmsg-demo`)
* Docker Container Hosted Nginx
  (:repo:`meta-armv8r64-extras/dynamic-layers/virtualization-layer/recipes-demo/nginx-docker-demo`)

These two applications can work together to complete the following process:

1. The program running in Zephyr periodically collects data about the running
   status of the system
2. Zephyr sends the sampled data to Linux via RPMsg
3. After receiving the data, Linux stores it in a local file
4. The Nginx web server running in a Docker container serves this file to
   external users over HTTP
5. The above steps repeat, so users can get a continuously updated view of
   the system's running status

.. _RPMsg Demo:

RPMsg Demo
==========

The ``RPMsg Demo`` application consists of two parts:

* ``rpmsg-host`` runs in the Zephyr domain; it samples data about the running
  status of the system and sends the data to ``rpmsg-remote``
* ``rpmsg-remote`` runs in the Linux domain; it receives the data and stores
  it in the local file :file:`/usr/share/nginx/html/zephyr-status.html`

These two parts communicate with each other using the OpenAMP framework in
the application layer, and the static shared memory and static event channel
provided by Xen in the lower layer.

The recipes for this demo application are provided by
:repo:`meta-armv8r64-extras/dynamic-layers/virtualization-layer/recipes-kernel/zephyr-kernel/zephyr-rpmsg-demo.bb`
and
:repo:`meta-armv8r64-extras/dynamic-layers/virtualization-layer/recipes-demo/rpmsg-demo/rpmsg-demo.bb`.
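The transport pattern the two parts rely on (a static shared region plus an
event-channel notification, with RPMsg on top) can be modeled with a minimal,
self-contained sketch. This is illustrative Python, not the demo's actual C
code or the OpenAMP API: the ``shm`` buffer and ``event_pending`` flag are
hypothetical stand-ins for the Xen-provided shared memory and event channel.

.. code-block:: python

   import struct

   SHM_SIZE = 256
   shm = bytearray(SHM_SIZE)     # stands in for the Xen static shared memory
   event_pending = False         # stands in for the static event channel

   def host_send(payload):
       """Zephyr side: write a length-prefixed record, then notify."""
       global event_pending
       if 4 + len(payload) > SHM_SIZE:
           raise ValueError("payload too large for the shared region")
       shm[0:4] = struct.pack("<I", len(payload))
       shm[4:4 + len(payload)] = payload
       event_pending = True      # "kick" the event channel

   def remote_recv():
       """Linux side: on notification, copy the record out and acknowledge."""
       global event_pending
       if not event_pending:
           return None           # no notification pending
       (length,) = struct.unpack("<I", bytes(shm[0:4]))
       data = bytes(shm[4:4 + length])
       event_pending = False     # acknowledge the event
       return data

   host_send(b"cpu_load=17%")
   print(remote_recv())          # prints: b'cpu_load=17%'

In the real stack the notification is delivered asynchronously by Xen rather
than polled, and RPMsg adds endpoint addressing and virtqueue management on
top of the raw shared buffer.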
The source code can be found in the directory
:repo:`components/apps/rpmsg-demo`.

.. _Docker Container Hosted Nginx:

Docker Container Hosted Nginx
=============================

In the ``Docker Container Hosted Nginx`` demo application, the Nginx server
serves the file :file:`/usr/share/nginx/html/zephyr-status.html`, so external
users can view the data in a web browser.

This demo application starts automatically by default. To start it manually,
run the script :file:`/usr/share/nginx/utils/run-nginx-docker.sh` in the
Linux domain. See the section :ref:`Virtualization` in :ref:`Reproduce` of
the :ref:`User Guide` for example usage.

The recipe for this demo application is provided by
:repo:`meta-armv8r64-extras/dynamic-layers/virtualization-layer/recipes-demo/nginx-docker-demo/nginx-docker-demo.bb`.

Run Demo Applications
=====================

In the default configuration, the above two demo applications run
automatically after the system starts. See the section :ref:`Virtualization`
in :ref:`Reproduce` of the :ref:`User Guide` for example usage.

To prevent them from running automatically in the Linux domain, set
``XEN_DOM0LESS_DOM_LINUX_DEMO_AUTORUN`` to ``0`` at build time. For example:

.. code-block:: shell

   # Build
   XEN_DOM0LESS_DOM_LINUX_DEMO_AUTORUN=0 \
   kas build v8r64/meta-armv8r64-extras/kas/virtualization.yml

Then run the demo applications manually using the following commands in the
Linux domain:

.. code-block:: shell

   # Start the Nginx web server
   /usr/share/nginx/utils/run-nginx-docker.sh

   # Start rpmsg-demo
   rpmsg-remote

.. _Limitations and Improvements:

Limitations and Improvements
============================

The shared memory and event channel mechanisms that Inter-VM communication
relies on are still evolving in Xen, Linux, and Zephyr, which limits the
``RPMsg Demo`` application: it mainly demonstrates communication between VM
guests within the Xen hypervisor.
This program does not have a sophisticated fault-tolerance and
exception-recovery mechanism. If an exception occurs, in the extreme case it
may be necessary to restart the FVP for the next demonstration, especially
when running the demo manually.

In the current implementation, at least the following improvements can be
made at the application level:

* In ``rpmsg-host``, when ``send_message`` fails, the current behavior is to
  exit the loop directly. An improvement would be to add a retry mechanism;
  if sending continues to fail, it could fall back to restarting
  ``rpmsg_init_vdev`` to set up a new RPMsg channel. The relevant code is in
  :repo:`components/apps/rpmsg-demo/demos/zephyr/rpmsg-host/src/main.c`.
* Implement ``rpmsg-host`` on top of the Zephyr IPC subsystem's
  `RPMsg Service`_, which supports multiple endpoints and can therefore
  support multiple RPMsg channels. Reference code can be found `here`__.

.. _RPMsg Service: https://github.com/zephyrproject-rtos/zephyr/tree/v3.2.0/subsys/ipc/rpmsg_service
.. __: https://github.com/zephyrproject-rtos/zephyr/tree/v3.2.0/samples/subsys/ipc/rpmsg_service
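The retry-and-fallback policy suggested in the first improvement can be
sketched as follows. This is illustrative Python, not the demo's C code:
``send_message`` and ``reinit_channel`` are hypothetical stand-ins for the
demo's ``send_message`` and a wrapper around ``rpmsg_init_vdev``, and the
return convention (``0`` means success) is assumed for the sketch.

.. code-block:: python

   # Hypothetical retry policy: retry the send a bounded number of times and,
   # if it keeps failing, fall back to re-initializing the RPMsg channel
   # before giving up. All names and return conventions are illustrative.
   def send_with_retry(send_message, reinit_channel, payload,
                       max_retries=3, max_reinits=1):
       reinits = 0
       while True:
           for _ in range(max_retries):
               if send_message(payload) == 0:   # assumed: 0 means success
                   return True
           if reinits >= max_reinits:
               return False                     # out of options; give up
           reinit_channel()                     # set up a fresh RPMsg channel
           reinits += 1

   # Example: a sender that fails twice, then succeeds on the third attempt.
   attempts = {"n": 0}
   def flaky_send(payload):
       attempts["n"] += 1
       return 0 if attempts["n"] >= 3 else -1

   print(send_with_retry(flaky_send, lambda: None, b"status"))  # prints: True

A real implementation in the C host would likely also add a backoff delay
between retries and tear down per-channel state before calling
``rpmsg_init_vdev`` again.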