commit 939aee9a
Author: Cedric Roux
This commit improves performance by changing the processing in libchannel_simulator.so. The ultimate goal is to run an X2 handover with a real UE and a single physical device. That requires running two eNBs, which in turn requires a remote computer. Before this commit, the processing used the network link between the two machines poorly, killing real time.

This commit introduces a TX thread to break the "read, write, read, write" loop used before. That loop capped throughput at 90 Mb/s (the sum of RX and TX, each at 45 Mb/s) on a test environment, while for 25 RBs we need around 500 Mb/s for real time.

The problem, I think, is latency when sending and receiving data to the network device. It is not a problem with TCP (apart from the NODELAY thing): doing "read, write, read, write" with a UDP socket led to the same poor throughput. Also, we do not send a full subframe at a time, but only 512 samples, which may add more latency (sending full subframes was not tested).

On the test environment, just running the eNB and the channel_simulator, we reach 740 Mb/s with this commit. The link is a direct connection between two machines over 1 Gb/s Ethernet; one machine uses a 'native' Ethernet port, the other a USB 3 <-> Ethernet adapter. Running uplink and downlink iperf, with bwm-ng to monitor the network, shows the link can do 2 Gb/s cumulated.

The UE does not work anymore with this commit, because we now require some TX all the time. TDD mode has also not been tested, only FDD. Later commits may solve these problems.
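As a minimal sketch of the decoupling idea described above: the processing loop only enqueues samples into a buffer, and a dedicated TX thread drains the buffer and writes to the socket, so reads are never blocked behind writes. All names here (tx_enqueue, tx_thread, the queue layout) are illustrative assumptions, not the actual libchannel_simulator internals.

    /* hypothetical sketch, not the actual libchannel_simulator code */
    #include <pthread.h>
    #include <unistd.h>

    #define QSIZE (1 << 20)          /* ring buffer size in bytes */

    static unsigned char q[QSIZE];
    static int q_head, q_tail;       /* producer / consumer indices */
    static pthread_mutex_t q_mutex = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t q_cond = PTHREAD_COND_INITIALIZER;
    static int sockfd;               /* assumed already connected */

    /* producer side: called from the RX/processing loop */
    void tx_enqueue(const void *buf, int len)
    {
        const unsigned char *p = buf;
        pthread_mutex_lock(&q_mutex);
        for (int i = 0; i < len; i++) {
            q[q_head] = p[i];
            q_head = (q_head + 1) % QSIZE;
            /* a real implementation must handle queue overflow here */
        }
        pthread_cond_signal(&q_cond);
        pthread_mutex_unlock(&q_mutex);
    }

    /* consumer side: the dedicated TX thread */
    void *tx_thread(void *arg)
    {
        unsigned char local[4096];
        while (1) {
            int len = 0;
            pthread_mutex_lock(&q_mutex);
            while (q_tail == q_head)
                pthread_cond_wait(&q_cond, &q_mutex);
            while (q_tail != q_head && len < (int)sizeof(local)) {
                local[len++] = q[q_tail];
                q_tail = (q_tail + 1) % QSIZE;
            }
            pthread_mutex_unlock(&q_mutex);
            /* write outside the lock so the producer is never blocked
             * behind a slow network write; short writes and error
             * handling are elided for brevity */
            if (write(sockfd, local, len) != len)
                break;
        }
        return NULL;
    }

The point of the design is that the socket write happens on its own thread, outside the lock, so network latency no longer serializes the "read, write, read, write" cycle.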
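The "NODELAY thing" mentioned above refers to disabling Nagle's algorithm on the TCP socket, so that small writes (such as 512-sample chunks) are sent immediately instead of being coalesced. This is the standard setsockopt call; only its use here, in this hypothetical helper, is my illustration.

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* disable Nagle's algorithm so small writes go out immediately */
    int set_tcp_nodelay(int sockfd)
    {
        int one = 1;
        return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY,
                          &one, sizeof(one));
    }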