Emulated RAN
gNBsim emulates a 5G RAN, generating (mostly) Control Plane traffic
that can be directed at SD-Core. This section describes how to
configure gNBsim to customize and scale the workload it generates. We
assume gNBsim runs on one or more servers, independent of the
server(s) that host SD-Core. These servers are specified in the
hosts.ini file, as described in the Scale Cluster section. This
blueprint assumes you start with a variant of vars/main.yml customized
for running gNBsim, which is easy to do:

$ cd vars
$ cp main-gnbsim.yml main.yml
Configure gNBsim
Two sets of parameters control gNBsim. The first set, found in the
gnbsim section of vars/main.yml, controls how gNBsim is deployed:
(1) the number of servers it runs on; (2) the number of Docker
containers running on each server; (3) what configuration to run in
each of those containers; and (4) how those containers connect to
SD-Core. For example, consider the following variable definitions:
gnbsim:
    docker:
        container:
            image: omecproject/5gc-gnbsim:main-PR_88-cc0d21b
            prefix: gnbsim
            count: 2
        network:
            macvlan:
                name: gnbnet
    router:
        data_iface: ens18
        macvlan:
            iface: gnbaccess
            subnet_prefix: "172.20"
    servers:
        0:
            - "config/gnbsim-s1-p1.yaml"
            - "config/gnbsim-s1-p2.yaml"
        1:
            - "config/gnbsim-s2-p1.yaml"
            - "config/gnbsim-s2-p2.yaml"
The container.count variable in the docker block specifies how many
containers run on each server (2 in this example). The router block
then gives the network specification needed for these containers to
connect to the SD-Core; all of these variables are described in the
Verify Network section. Finally, the servers block names the
configuration files that parameterize each container. In this example,
there are two servers with two containers running on each, with
config/gnbsim-s2-p1.yaml parameterizing the first container on the
second server.
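Scaling the emulated workload up is then a matter of raising the
container count and listing one config file per container. As a
sketch, the following (showing only the values that change) would run
three containers on the first server; the third file name,
config/gnbsim-s1-p3.yaml, is hypothetical and would need to be created
alongside the other two:

gnbsim:
    docker:
        container:
            count: 3                          # three containers per server
    servers:
        0:
            - "config/gnbsim-s1-p1.yaml"
            - "config/gnbsim-s1-p2.yaml"
            - "config/gnbsim-s1-p3.yaml"      # hypothetical third config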
These config files then specify the second set of gNBsim parameters.
A detailed description of these parameters is outside the scope of
this guide (see https://github.com/omec-project/gnbsim for details),
but at a high level, gNBsim defines a set of profiles, each of which
exercises a common usage scenario that the Core has to deal with. Each
of these scenarios is represented by a profileType in the config file.
gNBsim supports seven profiles, which we list here:
- profileType: register # UE Registration
- profileType: pdusessest # UE Initiated Session
- profileType: anrelease # Access Network (AN) Release
- profileType: uetriggservicereq # UE Initiated Service Request
- profileType: deregister # UE Initiated De-registration
- profileType: nwtriggeruedereg # Network Initiated De-registration
- profileType: uereqpdusessrelease # UE Initiated Session Release
The second profile (pdusessest) is selected by default. It causes
the specified number of UEs to register with the Core, initiate a user
plane session, and then send a minimal data packet over that session.
Note that the rest of the per-profile parameters are highly redundant.
For example, they specify the IMSI- and PLMN-related information UEs
need to connect to the Core.
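To make this concrete, the following is a minimal sketch of a single
profile entry in one of these config files. The field names follow the
sample configs in the gnbsim repository, but the values shown (IMSI,
PLMN, packet counts) are illustrative only:

profiles:
    - profileType: pdusessest       # UE Initiated Session (the default)
      profileName: profile2
      enable: true
      gnbName: gnb1                 # gNB defined elsewhere in the same file
      startImsi: 208930100007487    # first IMSI; later UEs increment from here
      ueCount: 5                    # number of emulated UEs
      plmnId:                       # PLMN the UEs present when registering
          mcc: 208
          mnc: 93
      dataPktCount: 5               # data packets each UE sends on its session
      defaultAs: "192.168.250.1"    # destination for those packets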
Finally, it is necessary to edit the core section of vars/main.yml to
indicate the address at which gNBsim can find the AMF. For our running
example, this would look like the following:
core:
    amf: "10.76.28.113"
Install/Uninstall gNBsim
Once you have edited the parameters (and assuming you already have SD-Core running), you are ready to install gNBsim. This includes starting up all the containers and configuring the network so they can reach the Core. This is done from the main OnRamp server you've been using, where you type:
$ make gnbsim-docker-install
$ make aether-gnbsim-install
Note that the first step may not be necessary, depending on whether Docker is already installed on the server(s) you've designated to host gNBsim.
When you are finished, the following uninstalls everything:
$ make aether-gnbsim-uninstall
Run gNBsim
Once gNBsim is installed and the Docker containers instantiated, you can run the simulation by typing:
$ make aether-gnbsim-run
This can be done multiple times without reinstalling. For each run, you can use Docker to view the results, which have been saved in each of the containers. To do so, ssh into one of the servers designated to run gNBsim, and then type:
$ docker exec -it gnbsim-1 cat summary.log
Note that container name gnbsim-1 is constructed from the
gnbsim.docker.prefix variable defined in vars/main.yml, with -1
indicating the first container, -2 indicating the second container,
and so on.
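If you want to collect the results from every container on a server in
one pass, a simple shell loop over the container names does the job.
This sketch assumes the default gnbsim prefix shown above:

$ for c in $(docker ps --format '{{.Names}}' | grep '^gnbsim-'); do
>     echo "== $c =="; docker exec "$c" cat summary.log
> done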
In addition to scaling up the workload you put on the Core, you can
also experiment with the emulation settings defined in any or all of
the config files in deps/gnbsim/config/. Focusing on profile2 in
particular (because it sends data packets after registering each UE),
the defaultAs: "192.168.250.1" variable specifies the target of ICMP
Echo Request packets. Changing the value to the IP address of a
real-world server (e.g., 8.8.8.8) causes the emulated UE to actually
ping that server. Success is a good indication that your Aether
cluster is properly configured to support end-to-end connectivity.
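The change amounts to editing one line in the profile's entry in the
relevant config file; for example:

defaultAs: "8.8.8.8"    # was "192.168.250.1"; UEs now ping a real server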
Yet another option is to configure a single instance of gNBsim to
direct multiple workloads at SD-Core. For example, editing
vars/main.yml to use config/gnbsim-all.yaml in place of
config/gnbsim-default.yaml causes gNBsim to activate all the profiles.
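Assuming a single server running a single container, the servers block
in vars/main.yml might then read as follows (a sketch; your server
indices and container count may differ):

gnbsim:
    servers:
        0:
            - "config/gnbsim-all.yaml"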