The hardware on which we have tried this tutorial:
NICs we have tested so far:
|Vendor |Firmware Version |
|---------------|------------------------|
|Intel X710 |9.20 0x8000d95e 22.0.9 |
|Intel E810-XXV |4.00 0x8001184e 1.3236.0|
|Intel E810-C   |4.20 0x8001784e 22.0.9  |
|Intel XXV710 |6.02 0x80003888 |
PTP-enabled switches and the grandmaster clock we have in our lab:
...
...
|Fibrolan Falcon-RX/812/G|8.0.25.4 |
|Qulsar Qg2 (Grandmaster)|12.1.27 |
**S-Plane synchronization is mandatory.** S-plane support is done via `ptp4l`
and `phc2sys`.
| Software | Software Version |
|-----------|------------------|
| `ptp4l` | 3.1.1 |
| `phc2sys` | 3.1.1 |
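For reference, S-plane synchronization with these two linuxptp daemons is typically started as follows. This is only a sketch: the interface name `ens2f0` and the configuration file path are assumptions, and the actual PTP profile must match your grandmaster/switch setup.

```bash
# Sync the NIC's PTP hardware clock (PHC) to the grandmaster over L2 transport
# (interface name and config file are placeholders for your environment)
sudo ptp4l -i ens2f0 -f /etc/ptp4l.conf -2 -m

# In a second terminal: sync the system clock to the NIC's PHC,
# waiting for ptp4l to reach synchronization first (-w)
sudo phc2sys -s ens2f0 -w -m
```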
We have only verified LLS-C3 configuration in our lab, i.e. using an external
grandmaster, a switch as a boundary clock, and the gNB/DU and RU. We haven't
tested any RU without S-plane. Radio units we are testing/integrating:
|Vendor |Software Version |
|-----------|-----------------|
...
...
Tested libxran releases:
Your server could be:
* One NUMA node (see [one NUMA node example](#111-one-numa-node)): all the processors share a single memory system.
* Two NUMA nodes (see [two NUMA nodes example](#112-two-numa-node)): processors are grouped in 2 memory systems.
  - Usually the even CPUs (i.e. `0,2,4,...`) are on the first socket
  - and the odd CPUs (i.e. `1,3,5,...`) are on the second socket
DPDK, OAI and kernel threads must be properly allocated to extract maximum real-time performance for your use case.
1. **NOTE**: Currently the default OAI 7.2 configuration file requires isolated **CPUs 0,2,4** for DPDK/libXRAN, **CPU 6** for `ru_thread`, **CPU 8** for `L1_rx_thread` and **CPU 10** for `L1_tx_thread`. It is preferable to have all these threads on the same socket.
2. Allocating CPUs to the OAI `nr-softmodem` is done using the `--thread-pool` option. Allocating 4 CPUs is the minimal configuration, but we recommend allocating at least **8** CPUs. They can be on a different socket than the DPDK threads.
3. To avoid the kernel preempting these allocated CPUs, it is better to force the kernel to use the unallocated CPUs.
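Once the system is up, you can quickly verify the isolation at runtime. This is a read-only check; the output depends on the boot parameters configured on your server.

```bash
# CPUs isolated from the general kernel scheduler (set via isolcpus=)
cat /sys/devices/system/cpu/isolated

# Full kernel boot command line, to double-check all parameters took effect
cat /proc/cmdline
```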
...
...
Let's summarize, for example, on a `32-CPU` single NUMA node system:
|Usage              |CPUs              |
|-------------------|------------------|
|XRAN DPDK usage    |0,2,4             |
|OAI `ru_thread` |6 |
|OAI `L1_rx_thread` |8 |
|OAI `L1_tx_thread` |10 |
|OAI `nr-softmodem` |1,3,5,7,9,11,13,15|
|kernel |16-31 |
In the examples below we show the output of `/proc/cmdline` for two different servers, each with a different number of NUMA nodes. Be careful when isolating the CPUs in your environment. Apart from CPU allocation, there are additional parameters that need to be present in your boot command.
Modifying the Linux kernel command line usually requires editing `/etc/default/grub`, regenerating the `grub` configuration, and rebooting the server.
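For the single-NUMA allocation above, the relevant part of `/etc/default/grub` could look as follows. This is a sketch only: the exact CPU ranges must match your server's layout, and the regeneration command differs per distribution.

```bash
# /etc/default/grub (sketch): isolate CPUs 0-15 for DPDK/OAI,
# keep CPUs 16-31 for kernel threads
GRUB_CMDLINE_LINUX="isolcpus=0-15 nohz_full=0-15 rcu_nocbs=0-15 kthread_cpus=16-31"
```

After editing, regenerate the configuration (e.g. `sudo update-grub` on Debian/Ubuntu, or `sudo grub2-mkconfig -o /boot/grub2/grub.cfg` on RHEL-like systems) and reboot.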
### One NUMA Node
Below is the output of `/proc/cmdline` of a single NUMA node server,
```bash
NUMA:
...
...
Example taken from an AMD EPYC 9374F 32-Core processor.
### Two NUMA Nodes
Below is the output of `/proc/cmdline` of a two NUMA node server,
* `io_core`: absolute CPU core ID for the XRAN library; it should be an isolated core (in our environment we use CPU 4)
* `worker_cores`: array of absolute CPU core IDs for the XRAN library; they should be isolated cores (in our environment we use CPU 2)
* `du_addr`: DU C- and U-plane MAC addresses (format `UU:VV:WW:XX:YY:ZZ`, hexadecimal numbers)
* `ru_addr`: RU C- and U-plane MAC addresses (format `UU:VV:WW:XX:YY:ZZ`, hexadecimal numbers)
* `mtu`: Maximum Transmission Unit for the RU, specified by the RU vendor
* `fh_config`: parameters that need to match the RU parameters
  * timing parameters (starting with `T`) depend on the RU: `Tadv_cp_dl` is a single number, the rest are pairs of numbers `(x, y)` specifying minimum and maximum delays
* `ru_config`: RU-specific configuration:
  * `iq_width`: width of DL/UL IQ samples: if 16, no compression; if less than 16, compression is applied
  * `iq_width_prach`: width of PRACH IQ samples: if 16, no compression; if less than 16, compression is applied
  * `fft_size`: size of the FFT performed by the RU, set to 12 by default
* `prach_config`: PRACH-specific configuration
  * `eAxC_offset`: PRACH antenna offset
  * `kbar`: the PRACH guard interval, provided by the RU
Layer mapping (eAxC offsets) happens as follows:
- For PUSCH/PDSCH, the layers are mapped to `[0,1,...,N-1]` where `N` is the
respective RX/TX number of antennas.
- For PRACH, the layers are mapped to `[No,No+1,...,No+N-1]`, where `No` is the
  `fhi_72.fh_config.[0].prach_config.eAxC_offset` and `N` is the number of receive
  antennas.
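As an illustration of this mapping, the following sketch computes the eAxC IDs for assumed values of `N=4` receive antennas and a PRACH `eAxC_offset` of 4; your RU's values will differ.

```bash
# Hypothetical example values: 4 RX antennas, PRACH eAxC_offset of 4
N=4
PRACH_OFFSET=4

# PUSCH/PDSCH layers map to eAxC IDs 0..N-1
echo "PUSCH/PDSCH eAxC IDs: $(seq -s, 0 $((N-1)))"
# PRACH layers map to eAxC IDs No..No+N-1
echo "PRACH eAxC IDs: $(seq -s, $PRACH_OFFSET $((PRACH_OFFSET+N-1)))"
```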
xRAN SRS reception is not supported.
# Start and Operation of OAI gNB
Run the `nr-softmodem` from the build directory:
```bash
cd ~/openairinterface5g/ran_build/build
sudo ./nr-softmodem -O ../../../targets/PROJECTS/GENERIC-NR-5GC/CONF/oran.fh.band78.fr1.273PRB.conf --sa --reorder-thread-disable 1 --thread-pool <list of non-isolated CPUs>
```
You have to set the thread pool option to non-isolated CPUs, since the thread
pool is used for L1 processing which should not interfere with DPDK threads.
For example if you have two NUMA nodes in your system (for example 18 CPUs per
socket) and odd cores are non-isolated, then you can put the thread-pool on
`1,3,5,7,9,11,13,15`. On the other hand, if you have one NUMA node, you can use
either isolated or non-isolated cores; just make sure that the isolated cores
are not the ones defined earlier for DPDK/xran.
<details>
<summary>Once the gNB runs, you should see counters for PDSCH/PUSCH/PRACH per
antenna port, as follows (4x2 configuration):</summary>