Commit 9924d9d7 authored by Robert Schmidt

Merge remote-tracking branch 'origin/update_aerial_docs' into integration_2024_w35

parents 450346c7 94050d5e
@@ -6,7 +6,6 @@ cuBB_Path="${cuBB_SDK:-/opt/nvidia/cuBB}"
cd "$cuBB_Path" || exit 1
# Add gdrcopy to LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/opt/mellanox/dpdk/lib/x86_64-linux-gnu:/opt/mellanox/doca/lib/x86_64-linux-gnu:/opt/nvidia/cuBB/cuPHY-CP/external/gdrcopy/build/x86_64/
export LD_LIBRARY_PATH=$LD_LIBRART_PATH:"$cuBB_Path/gpu-dpdk/build/install/lib/x86_64-linux-gnu:$cuBB_Path/build/cuPHY-CP/cuphydriver/src"
# Restart MPS
# Export variables
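For reference, the "Restart MPS" step in this script typically amounts to stopping any running MPS control daemon and starting a fresh one; a minimal sketch (the pipe and log directories below are assumptions, not values taken from this script):

```bash
# Stop a running MPS control daemon, ignoring the error if none is active
echo quit | nvidia-cuda-mps-control || true
# Choose pipe/log directories for MPS (locations here are assumed) and start the daemon again
export CUDA_MPS_PIPE_DIRECTORY=/tmp/nvidia-mps
export CUDA_MPS_LOG_DIRECTORY=/var/log/nvidia-mps
nvidia-cuda-mps-control -d
```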
@@ -50,6 +50,7 @@ The UEs that have been tested and confirmed working with Aerial are the followin
|-----------------|-------------------------------|
| Sierra Wireless | EM9191 |
| Quectel | RM500Q-GL |
| Quectel | RM520N-GL |
| Apaltec | Tributo 5G-Dongle |
| OnePlus | Nord (AC2003) |
| Apple iPhone | 14 Pro (MQ0G3RX/A) (iOS 17.3) |
@@ -59,9 +60,9 @@ The UEs that have been tested and confirmed working with Aerial are the followin
The first step is to obtain the NVIDIA Aerial SDK; you will need to request access [here](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/aerial-sdk).
After obtaining access to the SDK, you can refer to [this instructions page](https://docs.nvidia.com/aerial/aerial-ran-colab-ota/current/index.html) to set up the L1 components using NVIDIA's SDK Manager, or to [this instructions page](https://docs.nvidia.com/aerial/cuda-accelerated-ran/index.html) to set up and install the components manually.
The currently used Aerial version is 24-1, which is also the version used by the SDK Manager.
### CPU allocation
@@ -139,7 +140,7 @@ After=ptp4l.service
Restart=always
RestartSec=5s
Type=simple
ExecStart=/usr/sbin/phc2sys -a -r -n 24
[Install]
WantedBy=multi-user.target
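Once the unit file is in place, reload systemd and enable the service; a minimal sketch, assuming the unit is installed as `phc2sys.service`:

```bash
# Reload systemd so the edited unit is picked up, then enable and start it
sudo systemctl daemon-reload
sudo systemctl enable --now phc2sys.service
# Verify that phc2sys is running and synchronizing the system clock
systemctl status phc2sys.service
```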
@@ -154,7 +155,6 @@ Clone OAI code base in a suitable repository, here we are cloning in `~/openairi
```bash
git clone https://gitlab.eurecom.fr/oai/openairinterface5g.git ~/openairinterface5g
cd ~/openairinterface5g/
git checkout Aerial_Integration
```
## Get nvIPC sources from the L1 container
@@ -174,27 +174,44 @@ If it is not running, you may start it and log into the container by running
~$ docker start cuBB
cuBB
~$ docker exec -it cuBB bash
aerial@c_aerial_aerial:/opt/nvidia/cuBB#
```
After logging into the container, we need to pack the nvIPC sources and copy them to the host (the command creates a tar.gz file with the following name format: nvipc_src.YYYY.MM.DD.tar.gz).
```bash
~$ docker exec -it cuBB bash
aerial@c_aerial_aerial:/opt/nvidia/cuBB# cd cuPHY-CP/gt_common_libs
aerial@c_aerial_aerial:/opt/nvidia/cuBB/cuPHY-CP/gt_common_libs# ./pack_nvipc.sh
nvipc_src.YYYY.MM.DD/
...
---------------------------------------------
Pack nvipc source code finished:
/opt/nvidia/cuBB/cuPHY-CP/gt_common_libs/nvipc_src.YYYY.MM.DD.tar.gz
aerial@c_aerial_aerial:/opt/nvidia/cuBB/cuPHY-CP/gt_common_libs# sudo cp nvipc_src.YYYY.MM.DD.tar.gz /opt/cuBB/share/
aerial@c_aerial_aerial:/opt/nvidia/cuBB/cuPHY-CP/gt_common_libs# exit
```
The file should now be present in the `~/openairinterface5g/ci-scripts/yaml_files/sa_gnb_aerial/` directory, from where it is moved into `~/openairinterface5g`:
```bash
~$ mv ~/openairinterface5g/ci-scripts/yaml_files/sa_gnb_aerial/nvipc_src.YYYY.MM.DD.tar.gz ~/openairinterface5g/
```
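Optionally, confirm the archive now sits at the top of the OAI tree before building (a quick check, not part of the documented steps):

```bash
# The docker build expects to find the packed nvIPC sources here
ls -lh ~/openairinterface5g/nvipc_src.*.tar.gz
```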
Alternatively, after running `./pack_nvipc.sh`, you can exit the container and copy `nvipc_src.YYYY.MM.DD.tar.gz` to the host using `docker cp`:
```bash
~$ docker exec -it cuBB bash
aerial@c_aerial_aerial:/opt/nvidia/cuBB# cd cuPHY-CP/gt_common_libs
aerial@c_aerial_aerial:/opt/nvidia/cuBB/cuPHY-CP/gt_common_libs# ./pack_nvipc.sh
nvipc_src.YYYY.MM.DD/
...
---------------------------------------------
Pack nvipc source code finished:
/opt/nvidia/cuBB/cuPHY-CP/gt_common_libs/nvipc_src.YYYY.MM.DD.tar.gz
aerial@c_aerial_aerial:/opt/nvidia/cuBB/cuPHY-CP/gt_common_libs# exit
~$ docker cp nv-cubb:/opt/nvidia/cuBB/cuPHY-CP/gt_common_libs/nvipc_src.YYYY.MM.DD.tar.gz ~/openairinterface5g
```
*Important note:* When using `docker cp`, make sure to spell out the full name of the created nvipc_src tar.gz file; `docker cp` does not expand wildcards in the container path.
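If you are unsure of the exact date-stamped name, you can list it from the host first and then copy it by its full name. A small sketch (the container is referred to as both `cuBB` and `nv-cubb` above, so use whichever name `docker ps` shows; the copied name below is a placeholder):

```bash
# List the packed archive inside the container to get its exact name
docker exec cuBB ls /opt/nvidia/cuBB/cuPHY-CP/gt_common_libs/ | grep nvipc_src
# Then copy it by the full name printed above
docker cp cuBB:/opt/nvidia/cuBB/cuPHY-CP/gt_common_libs/nvipc_src.YYYY.MM.DD.tar.gz ~/openairinterface5g/
```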
With the nvIPC sources in the project directory, the docker image can be built.
## Building OAI gNB docker image
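As a rough sketch only, and not the exact documented commands of this section: the build is a standard two-step OAI docker build, first the shared base image and then the Aerial-specific gNB image on top of it. The Dockerfile names below are assumptions; check the `docker/` directory of your checkout for the exact ones.

```bash
cd ~/openairinterface5g
# Build the shared base image first (Dockerfile name assumed)
docker build -f docker/Dockerfile.base.ubuntu22 -t ran-base:latest .
# Then build the Aerial gNB image, which picks up the nvipc_src tarball placed above (Dockerfile name assumed)
docker build -f docker/Dockerfile.gNB.aerial.ubuntu22 -t oai-gnb-aerial:latest .
```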
@@ -213,11 +230,14 @@ The process of preparing the L1 is covered on NVIDIA's documentation, and falls
### Prepare the L1 image
After preparing the L1 software, the container needs to be committed to produce an image in which the L1 is ready to be executed and that can be referenced by a docker-compose.yaml file later:
*Note:* When preparing the L1 image, the default L1 configuration file is cuphycontroller_P5G_FXN.yaml, located in `/opt/nvidia/cuBB/cuPHY-CP/cuphycontroller/config/`.
This is the file where the RU MAC address needs to be set before committing the image.
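As an illustration of that step (the prompt and editor below are placeholders; the exact YAML field for the RU MAC address depends on your Aerial release), the edit is done inside the running container before the commit:

```bash
~$ docker exec -it cuBB bash
# Open the default L1 configuration, set the RU MAC address, then leave the container
aerial@c_aerial_aerial:/opt/nvidia/cuBB# vim cuPHY-CP/cuphycontroller/config/cuphycontroller_P5G_FXN.yaml
aerial@c_aerial_aerial:/opt/nvidia/cuBB# exit
```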
```bash
~$ docker commit nv-cubb cubb-build:24-1
~$ docker image ls
..
cubb-build 24-1 824156e0334c 2 weeks ago 40.1GB
..
```
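Since the committed image is what docker-compose.yaml will reference later, it is worth confirming that the tag matches what the compose file expects. A quick, optional check (the image name in the compose file is assumed to be `cubb-build`, matching the commit above):

```bash
# Confirm the committed tag appears in the compose file used later on
grep -n "cubb-build" ~/openairinterface5g/ci-scripts/yaml_files/sa_gnb_aerial/docker-compose.yaml
```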
@@ -237,8 +257,12 @@ Both 'GNB_IPV4_ADDRESS_FOR_NG_AMF' and 'GNB_IPV4_ADDRESS_FOR_NGU' need to be set
### Running docker compose
#### Aerial L1 entrypoint script
Before running docker-compose, we can check which L1 configuration file is to be used by cuphycontroller_scf.
This is set in the script [`aerial_l1_entrypoint.sh`](../ci-scripts/yaml_files/sa_gnb_aerial/aerial_l1_entrypoint.sh), which is used by the L1 container to start the L1 software.
The script begins by setting up some environment variables, restarting NVIDIA MPS, and finally running cuphycontroller_scf.
The L1 software is run with an argument that indicates which configuration file is to be used.
This argument may be changed by providing an argument to the [`aerial_l1_entrypoint.sh`](../ci-scripts/yaml_files/sa_gnb_aerial/aerial_l1_entrypoint.sh) call in [`docker-compose.yaml`](../ci-scripts/yaml_files/sa_gnb_aerial/docker-compose.yaml).
When no argument is provided (this is the default behaviour), it uses "P5G_FXN" as the cuphycontroller_scf argument.
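For illustration only (these lines are not taken from the shipped compose file): the argument is simply the `<NAME>` part of a `cuphycontroller_<NAME>.yaml` file in the config directory mentioned above, so calling the entrypoint by hand would look roughly like this, run from wherever the script is available inside the L1 container:

```bash
# No argument: the script falls back to P5G_FXN, i.e. cuphycontroller_P5G_FXN.yaml
./aerial_l1_entrypoint.sh
# Explicit argument selecting the same configuration file
./aerial_l1_entrypoint.sh P5G_FXN
```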
After building the gNB image and preparing the configuration file, the setup can be run with the following command: