Commit c04a2ff4 authored by Remi Hardy

Merge remote-tracking branch 'origin/NR_UL_scheduler' into rh_wk50_debug

parents 39d1774a 589ff862
...@@ -70,7 +70,7 @@ if the input is a UE RACH detection
* nr_schedule_msg2()
{: .func4}
* handle_nr_uci()
  handles uplink control information, i.e., for the moment HARQ feedback.
{: .func4}
* handle_nr_ulsch()
  handles ulsch data prepared by nr_fill_indication()
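For orientation, the sketch below shows the call order these handlers follow (handlers first, then the scheduler entry point). It is illustrative only: the indication type and the signatures are simplified stand-ins, not the real nr_ul_indication()/handler prototypes.

```c
#include <stdio.h>

/* simplified stand-in for the UL indication handed up by the PHY */
typedef struct { int module_id; int frame; int slot; } ul_ind_sketch_t;

/* cf. handle_nr_uci(): digest HARQ feedback for this slot */
static void handle_uci_sketch(const ul_ind_sketch_t *ind)   { printf("UCI   %d.%d\n", ind->frame, ind->slot); }
/* cf. handle_nr_ulsch(): digest ULSCH data prepared by nr_fill_indication() */
static void handle_ulsch_sketch(const ul_ind_sketch_t *ind) { printf("ULSCH %d.%d\n", ind->frame, ind->slot); }
/* cf. gNB_dlsch_ulsch_scheduler(): schedule the upcoming TX opportunity */
static void scheduler_sketch(int module_id, int frame, int slot) { printf("sched %d @ %d.%d\n", module_id, frame, slot); }

/* cf. nr_ul_indication(): handlers run first, then the scheduler */
void ul_indication_sketch(const ul_ind_sketch_t *ind)
{
  handle_uci_sketch(ind);
  handle_ulsch_sketch(ind);
  scheduler_sketch(ind->module_id, ind->frame, ind->slot);
}
```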
...@@ -143,7 +143,8 @@ the samples numbers are the future time for these samples emission on-air
{: .func3}
# Scheduler
The main scheduler function is called by the chain: nr_ul_indication()=>gNB_dlsch_ulsch_scheduler()
It calls sub-functions to process each physical channel (RACH, ...)
The scheduler uses an internal map of used RBs: vrb_map and vrb_map_UL, so each specific channel scheduler can see the RBs already filled in each subframe (the function gNB_dlsch_ulsch_scheduler() clears these two arrays when it starts)
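The sketch below (not a function from the tree) illustrates how a channel-specific scheduler can consult and mark vrb_map_UL before placing an allocation. It assumes the layout used by this merge: 275 uint16_t entries per slot, each entry a bitmask of the symbols already occupied in that RB.

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_BWP_SIZE 275 /* RB entries per slot in vrb_map/vrb_map_UL */

/* Try to reserve nb_rb RBs starting at start_rb for the symbols in symb_mask.
 * vrb_map_UL points at the current slot's entries (i.e., &map[slot * 275]).
 * Returns false without touching the map if any RB already overlaps. */
static bool reserve_ul_rbs(uint16_t *vrb_map_UL, int start_rb, int nb_rb, uint16_t symb_mask)
{
  for (int rb = start_rb; rb < start_rb + nb_rb; rb++)
    if (vrb_map_UL[rb] & symb_mask)
      return false; /* collision with an allocation made by another channel */
  for (int rb = start_rb; rb < start_rb + nb_rb; rb++)
    vrb_map_UL[rb] |= symb_mask;
  return true;
}
```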
...@@ -153,16 +154,71 @@ it sends an itti message to activate the thread for RRC, the answer will be async
Calls schedule_nr_mib() that calls mac_rrc_nr_data_req() to fill MIB.
Calls schedule_nr_prach() which schedules the (fixed) PRACH region one frame in
advance.

Calls nr_csi_meas_reporting() to check when to schedule CSI in PUCCH.

Calls nr_schedule_RA(): checks RA process 0's state. Schedules Msg.2 via
nr_generate_Msg2() if an RA process is ongoing, and pre-allocates the Msg. 3
for PUSCH as well.

Calls nr_schedule_ulsch(): It is divided into the "preprocessor" and the
"postprocessor": the first makes the scheduling decisions, the second fills
nFAPI structures to indicate to the PHY what it is supposed to do. To signal
which users have how many resources, the preprocessor populates the
NR_sched_pusch_t (for values changing every TTI, e.g., frequency domain
allocation) and NR_sched_pusch_save_t (for values changing less frequently, at
least in FR1 [to my understanding], e.g., DMRS fields when the time domain
allocation stays between TTIs) structures. Furthermore, the preprocessor is an
exchangeable module that might schedule differently, e.g., one user for
phytest, multiple users in FR1, or maybe FR2: phytest is in
nr_ul_preprocessor_phytest(), FR1 is handled by nr_simple_ulsch_preprocessor()
[under development], and an FR2 preprocessor does not exist yet.
* calls preprocessor via pre_processor_ul(): the preprocessor is responsible
for allocating CCEs (using allocate_nr_CCEs()). Note that we do not yet have
scheduling requests or buffer status reports, and only one UE. E.g.,
nr_simple_ulsch_preprocessor():
1) check whether the current frame/slot plus K2 is a UL slot, and return if
not.
2) Find first free start RB in vrb_map_UL, and as many free consecutive RBs
as possible.
3) allocate a CCE for the UE (and return if it is not possible)
4) Calculate DMRS stuff (nr_save_pusch_fields()) and the TBS.
5) Mark used resources in vrb_map_UL.
* loop through all users: get a free HARQ PID using select_ul_harq_pid() and
update statistics. Fill nFAPI structures directly for PUSCH, and call
config_uldci() and fill_dci_pdu_rel15() for DCI filling and PDCCH messages.
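As an illustration of the preprocessor side, the sketch below isolates step 2) of the list above: find the first free start RB in vrb_map_UL and then as many consecutive free RBs as possible. The function name and parameters are simplified stand-ins for what nr_simple_ulsch_preprocessor() does internally.

```c
#include <stdint.h>

/* vrb_map_UL points at one slot's 275 entries; a non-zero entry means the RB
 * is (at least partially) taken by another channel (PRACH, Msg3, ...). */
static void find_free_ul_rbs(const uint16_t *vrb_map_UL, int bwpSize, int *rbStart, int *rbSize)
{
  int start = 0;
  /* 2a) skip over RBs that are already reserved */
  while (start < bwpSize && vrb_map_UL[start])
    start++;
  /* 2b) extend the allocation while the map stays free */
  int size = 0;
  while (start + size < bwpSize && !vrb_map_UL[start + size])
    size++;
  *rbStart = start;
  *rbSize = size;
}
```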
Calls nr_schedule_ue_spec(). It is divided into the "preprocessor" and the
"postprocessor": the first makes the scheduling decisions, the second fills
nFAPI structures to indicate to the PHY what it is supposed to do. To signal
which users have how many resources, the preprocessor populates the
NR_UE_sched_ctrl_t structure of affected users. In particular, the field rbSize
decides whether a user is to be allocated. Furthermore, the preprocessor is an
exchangeable module that might schedule differently, e.g., one user for
phytest, multiple users in FR1, or maybe FR2: phytest is in
nr_preprocessor_phytest(), FR1 is handled by nr_simple_dlsch_preprocessor()
[under development], and an FR2 preprocessor does not exist yet.
* calls preprocessor via pre_processor_dl(): the preprocessor is responsible
for allocating CCEs and PUCCH (using allocate_nr_CCEs() and
nr_acknack_scheduling()) and deciding on the frequency/time domain
allocation. E.g., nr_simple_dlsch_preprocessor():
1) mac_rlc_status_ind() locks the RLC entity and checks directly inside its
data how much data is waiting to be sent.
2) return from the preprocessor if there is no data and no timing advance to
send,
3) otherwise, allocate a CCE for the UE (and return if it is not possible)
4) find a PUCCH occasion for HARQ
5a) check if there is a retransmission: if yes, find free resources to
transmit using the same resources, else
5b) calculate the necessary RBs needed to get a TBS large enough to hold all
data, or until no more resources are available
6) Mark taken resources in the vrb_map
* loop through all users: check if a new TA is necessary. Then, if a user has
allocated resources, compute its TBS, and fill nFAPI structures
(nr_fill_nfapi_dl_pdu() to populate what should be done by the lower layers
to make the Tx subframe). Update statistics (round, sent bytes).
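The sketch below illustrates step 5b) of the DL preprocessor: grow the allocation until the TBS can hold all waiting data, the next RB is taken, or the BWP ends. approx_tbs_bytes() is a crude stand-in for the real TBS computation (nr_compute_tbs() in the tree), so read this as the shape of the loop, not the actual implementation.

```c
#include <stdint.h>

/* crude TBS stand-in: 12 subcarriers * symbols * bits per RE, in bytes */
static uint32_t approx_tbs_bytes(uint8_t bits_per_re, uint16_t rbSize, uint8_t nrOfSymbols)
{
  return (12u * nrOfSymbols * bits_per_re * rbSize) / 8u;
}

/* grow rbSize starting at rbStart until the TBS fits bytes_to_send, the next
 * RB is already marked in vrb_map, or the end of the BWP is reached */
static uint16_t grow_dl_allocation(const uint16_t *vrb_map, /* one slot's 275 entries */
                                   uint16_t bwpSize,
                                   uint16_t rbStart,
                                   uint8_t bits_per_re,
                                   uint8_t nrOfSymbols,
                                   uint32_t bytes_to_send)
{
  uint16_t rbSize = 1;
  while (rbStart + rbSize < bwpSize
         && !vrb_map[rbStart + rbSize]
         && approx_tbs_bytes(bits_per_re, rbSize, nrOfSymbols) < bytes_to_send)
    rbSize++;
  return rbSize;
}
```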
# RRC
RRC is a regular thread with an itti loop on the queue TASK_RRC_GNB
...
...@@ -407,8 +407,16 @@ int nr_rate_matching_ldpc(uint8_t Ilbrm,
#ifdef RM_DEBUG
printf("nr_rate_matching_ldpc: E %d, F %d, Foffset %d, k0 %d, Ncb %d, rvidx %d\n", E, F, Foffset,ind, Ncb, rvidx);
#endif
AssertFatal(Foffset <= E,
            "Foffset %d > E %d "
            "(Ilbrm %d, Tbslbrm %d, Z %d, BG %d, C %d)\n",
            Foffset, E,
            Ilbrm, Tbslbrm, Z, BG, C);
AssertFatal(Foffset <= Ncb,
            "Foffset %d > Ncb %d "
            "(Ilbrm %d, Tbslbrm %d, Z %d, BG %d, C %d)\n",
            Foffset, Ncb,
            Ilbrm, Tbslbrm, Z, BG, C);
if (ind >= Foffset && ind < (F+Foffset)) ind = F+Foffset;
...
...@@ -860,7 +860,6 @@ int main(int argc, char **argv)
UE_info->UE_sched_ctrl[0].harq_processes[harq_pid].round = round;
gNB->dlsch[0][0]->harq_processes[harq_pid]->round = round;
for (int i=0; i<MAX_NUM_CORESET; i++)
gNB_mac->UE_info.num_pdcch_cand[0][i] = 0;
...@@ -878,7 +877,7 @@ int main(int argc, char **argv)
Sched_INFO.frame = frame;
Sched_INFO.slot = slot;
Sched_INFO.DL_req = &gNB_mac->DL_req[0];
Sched_INFO.UL_tti_req = gNB_mac->UL_tti_req_ahead[slot];
Sched_INFO.UL_dci_req = NULL;
Sched_INFO.TX_req = &gNB_mac->TX_req[0];
nr_schedule_response(&Sched_INFO);
...
...@@ -309,6 +309,7 @@ void config_common(int Mod_idP, int pdsch_AntennaPorts, NR_ServingCellConfigComm
extern uint16_t sl_ahead;
int rrc_mac_config_req_gNB(module_id_t Mod_idP,
int ssb_SubcarrierOffset,
int pdsch_AntennaPorts,
...@@ -322,6 +323,30 @@ int rrc_mac_config_req_gNB(module_id_t Mod_idP,
if (scc != NULL ) {
AssertFatal((scc->ssb_PositionsInBurst->present > 0) && (scc->ssb_PositionsInBurst->present < 4), "SSB Bitmap type %d is not valid\n",scc->ssb_PositionsInBurst->present);
/* dimension UL_tti_req_ahead for number of slots in frame */
const uint8_t slots_per_frame[5] = {10, 20, 40, 80, 160};
const int n = slots_per_frame[*scc->ssbSubcarrierSpacing];
RC.nrmac[Mod_idP]->UL_tti_req_ahead[0] = calloc(n, sizeof(nfapi_nr_ul_tti_request_t));
AssertFatal(RC.nrmac[Mod_idP]->UL_tti_req_ahead[0],
"could not allocate memory for RC.nrmac[]->UL_tti_req_ahead[]\n");
/* fill in slot/frame numbers: slot is fixed, frame will be updated by
* scheduler */
for (int i = 0; i < n; ++i) {
nfapi_nr_ul_tti_request_t *req = &RC.nrmac[Mod_idP]->UL_tti_req_ahead[0][i];
/* consider that scheduler runs sl_ahead: the first sl_ahead slots are
* already "in the past" and thus we put frame 1 instead of 0! Note that
* variable sl_ahead seems to not be correctly initialized, but I leave
* it for information purposes here (the fix would always put 0, what
* happens now, too) */
req->SFN = i < sl_ahead;
req->Slot = i;
}
RC.nrmac[Mod_idP]->common_channels[0].vrb_map_UL =
calloc(n * 275, sizeof(uint16_t));
AssertFatal(RC.nrmac[Mod_idP]->common_channels[0].vrb_map_UL,
"could not allocate memory for RC.nrmac[]->common_channels[0].vrb_map_UL\n");
LOG_I(MAC,"Configuring common parameters from NR ServingCellConfig\n"); LOG_I(MAC,"Configuring common parameters from NR ServingCellConfig\n");
config_common(Mod_idP, config_common(Mod_idP,
...@@ -369,6 +394,12 @@ int rrc_mac_config_req_gNB(module_id_t Mod_idP, ...@@ -369,6 +394,12 @@ int rrc_mac_config_req_gNB(module_id_t Mod_idP,
bwpList->list.count); bwpList->list.count);
const int bwp_id = 1; const int bwp_id = 1;
UE_info->UE_sched_ctrl[UE_id].active_bwp = bwpList->list.array[bwp_id - 1]; UE_info->UE_sched_ctrl[UE_id].active_bwp = bwpList->list.array[bwp_id - 1];
struct NR_UplinkConfig__uplinkBWP_ToAddModList *ubwpList =
secondaryCellGroup->spCellConfig->spCellConfigDedicated->uplinkConfig->uplinkBWP_ToAddModList;
AssertFatal(ubwpList->list.count == 1,
"uplinkBWP_ToAddModList has %d BWP!\n",
ubwpList->list.count);
UE_info->UE_sched_ctrl[UE_id].active_ubwp = ubwpList->list.array[bwp_id - 1];
LOG_I(PHY,"Added new UE_id %d/%x with initial secondaryCellGroup\n",UE_id,rnti); LOG_I(PHY,"Added new UE_id %d/%x with initial secondaryCellGroup\n",UE_id,rnti);
} else if (add_ue == 1 && !get_softmodem_params()->phy_test) { } else if (add_ue == 1 && !get_softmodem_params()->phy_test) {
/* TODO: should check for free RA process */ /* TODO: should check for free RA process */
......
...@@ -92,9 +92,12 @@ void clear_nr_nfapi_information(gNB_MAC_INST * gNB,
int CC_idP,
frame_t frameP,
sub_frame_t slotP){
NR_ServingCellConfigCommon_t *scc = gNB->common_channels->ServingCellConfigCommon;
const int num_slots = slots_per_frame[*scc->ssbSubcarrierSpacing];
nfapi_nr_dl_tti_request_t *DL_req = &gNB->DL_req[0];
nfapi_nr_ul_tti_request_t *future_ul_tti_req =
    &gNB->UL_tti_req_ahead[CC_idP][(slotP + num_slots - 1) % num_slots];
nfapi_nr_ul_dci_request_t *UL_dci_req = &gNB->UL_dci_req[0];
nfapi_nr_tx_data_request_t *TX_req = &gNB->TX_req[0];
...@@ -112,12 +115,17 @@ void clear_nr_nfapi_information(gNB_MAC_INST * gNB,
UL_dci_req[CC_idP].Slot = slotP;
UL_dci_req[CC_idP].numPdus = 0;
/* advance last round's future UL_tti_req to be ahead of current frame/slot */
future_ul_tti_req->SFN = (slotP == 0 ? frameP : frameP + 1) % 1024;
/* future_ul_tti_req->Slot is fixed! */
future_ul_tti_req->n_pdus = 0;
future_ul_tti_req->n_ulsch = 0;
future_ul_tti_req->n_ulcch = 0;
future_ul_tti_req->n_group = 0;
/* UL_tti_req is a simple pointer into the current UL_tti_req_ahead, i.e.,
 * it walks over UL_tti_req_ahead in a circular fashion */
gNB->UL_tti_req[CC_idP] = &gNB->UL_tti_req_ahead[CC_idP][slotP];
TX_req[CC_idP].Number_of_PDUs = 0;
...@@ -285,58 +293,6 @@ void schedule_nr_SRS(module_id_t module_idP, frame_t frameP, sub_frame_t subfram
*/
bool is_xlsch_in_slot(uint64_t bitmap, sub_frame_t slot) {
return (bitmap >> slot) & 0x01;
}
...@@ -355,7 +311,6 @@ void gNB_dlsch_ulsch_scheduler(module_id_t module_idP,
gNB_MAC_INST *gNB = RC.nrmac[module_idP];
NR_UE_info_t *UE_info = &gNB->UE_info;
NR_COMMON_channels_t *cc = gNB->common_channels;
NR_ServingCellConfigCommon_t *scc = cc->ServingCellConfigCommon;
NR_TDD_UL_DL_Pattern_t *tdd_pattern = &scc->tdd_UL_DL_ConfigurationCommon->pattern1;
...@@ -424,7 +379,11 @@ void gNB_dlsch_ulsch_scheduler(module_id_t module_idP,
// clear vrb_maps
memset(cc[CC_id].vrb_map, 0, sizeof(uint16_t) * 275);
// clear last scheduled slot's content (only)!
const int num_slots = slots_per_frame[*scc->ssbSubcarrierSpacing];
const int last_slot = (slot + num_slots - 1) % num_slots;
uint16_t *vrb_map_UL = cc[CC_id].vrb_map_UL;
memset(&vrb_map_UL[last_slot * 275], 0, sizeof(uint16_t) * 275);
clear_nr_nfapi_information(RC.nrmac[module_idP], CC_id, frame, slot);
}
...@@ -437,8 +396,18 @@ void gNB_dlsch_ulsch_scheduler(module_id_t module_idP,
schedule_nr_mib(module_idP, frame, slot, slots_per_frame[*scc->ssbSubcarrierSpacing]);
// This schedules PRACH if we are not in phy_test mode
if (get_softmodem_params()->phy_test == 0) {
  /* we need to make sure that resources for PRACH are free. To avoid that
     e.g. PUSCH has already been scheduled, make sure we schedule before
     anything else: below, we simply assume an advance of one frame (minus one
     slot, because otherwise we would allocate the current slot in
     UL_tti_req_ahead), but be aware that, e.g., K2 is allowed to be larger
     (schedule_nr_prach will assert if resources are not free). */
  const sub_frame_t n_slots_ahead = slots_per_frame[*scc->ssbSubcarrierSpacing] - 1;
  const frame_t f = (frame + (slot + n_slots_ahead) / slots_per_frame[*scc->ssbSubcarrierSpacing]) % 1024;
  const sub_frame_t s = (slot + n_slots_ahead) % slots_per_frame[*scc->ssbSubcarrierSpacing];
  schedule_nr_prach(module_idP, f, s);
}
// This schedules SR
// TODO
...@@ -449,30 +418,18 @@ void gNB_dlsch_ulsch_scheduler(module_id_t module_idP,
// This schedules the RA procedure if not in phy_test mode
// Otherwise the UE is already considered connected
if (get_softmodem_params()->phy_test == 0) {
  nr_schedule_RA(module_idP, frame, slot);
}

// This schedules the DCI for Uplink and subsequently PUSCH
if (slot < 10) {
  // The decision about whether to schedule is done for each UE independently
  // inside
  nr_schedule_ulsch(module_idP, frame, slot, num_slots_per_tdd, nr_ulmix_slots, ulsch_in_slot_bitmap);
}

// This schedules the DCI for Downlink and PDSCH
if (is_xlsch_in_slot(dlsch_in_slot_bitmap, slot % num_slots_per_tdd)
    && slot < 10) {
  nr_schedule_ue_spec(module_idP, frame, slot, num_slots_per_tdd);
}
...
...@@ -195,15 +195,17 @@ void find_SSB_and_RO_available(module_id_t module_idP) {
}

void schedule_nr_prach(module_id_t module_idP, frame_t frameP, sub_frame_t slotP)
{
  gNB_MAC_INST *gNB = RC.nrmac[module_idP];
  NR_COMMON_channels_t *cc = gNB->common_channels;
  NR_ServingCellConfigCommon_t *scc = cc->ServingCellConfigCommon;
  nfapi_nr_ul_tti_request_t *UL_tti_req = &RC.nrmac[module_idP]->UL_tti_req_ahead[0][slotP];
  nfapi_nr_config_request_scf_t *cfg = &RC.nrmac[module_idP]->config[0];

  if (!is_nr_UL_slot(scc,slotP))
    return;

  uint8_t config_index = scc->uplinkConfigCommon->initialUplinkBWP->rach_ConfigCommon->choice.setup->rach_ConfigGeneric.prach_ConfigurationIndex;
  uint8_t mu,N_dur,N_t_slot,start_symbol = 0,N_RA_slot;
  uint16_t RA_sfn_index = -1;
...@@ -219,7 +221,7 @@ void schedule_nr_prach(module_id_t module_idP, frame_t frameP, sub_frame_t slotP
  uint8_t fdm = cfg->prach_config.num_prach_fd_occasions.value;
  // prach is scheduled according to configuration index and tables 6.3.3.2.2 to 6.3.3.2.4
  if (!get_nr_prach_info_from_index(config_index,
                                    (int)frameP,
                                    (int)slotP,
                                    scc->downlinkConfigCommon->frequencyInfoDL->absoluteFrequencyPointA,
...@@ -231,54 +233,68 @@ void schedule_nr_prach(module_id_t module_idP, frame_t frameP, sub_frame_t slotP
                                    &N_dur,
                                    &RA_sfn_index,
                                    &N_RA_slot,
                                    &config_period) )
    return;

  uint16_t format0 = format&0xff;      // first column of format from table
  uint16_t format1 = (format>>8)&0xff; // second column of format from table

  if (N_RA_slot > 1) { // more than 1 PRACH slot in a subframe
    slot_index = slotP % 2 == 1;
  } else if (N_RA_slot <= 1) { // 1 PRACH slot in a subframe
    slot_index = 0;
  }

  AssertFatal(UL_tti_req->SFN == frameP && UL_tti_req->Slot == slotP,
              "%d.%d UL_tti_req frame.slot %d.%d does not match PRACH %d.%d\n",
              frameP, slotP,
              UL_tti_req->SFN,
              UL_tti_req->Slot,
              frameP, slotP);

  for (int fdm_index=0; fdm_index < fdm; fdm_index++) { // one structure per frequency domain occasion
    for (int td_index = 0; td_index < N_t_slot; td_index++) {
      prach_occasion_id = (((frameP % (cc->max_association_period * config_period)) / config_period)
                           * cc->total_prach_occasions_per_config_period)
                          + (RA_sfn_index + slot_index) * N_t_slot * fdm + td_index * fdm + fdm_index;
      if (!((prach_occasion_id < cc->total_prach_occasions) && (td_index == 0)))
        continue;

      UL_tti_req->pdus_list[UL_tti_req->n_pdus].pdu_type =
          NFAPI_NR_UL_CONFIG_PRACH_PDU_TYPE;
      UL_tti_req->pdus_list[UL_tti_req->n_pdus].pdu_size =
          sizeof(nfapi_nr_prach_pdu_t);
      nfapi_nr_prach_pdu_t *prach_pdu =
          &UL_tti_req->pdus_list[UL_tti_req->n_pdus].prach_pdu;
      memset(prach_pdu, 0, sizeof(nfapi_nr_prach_pdu_t));
      UL_tti_req->n_pdus += 1;

      // filling the prach fapi structure
      prach_pdu->phys_cell_id = *scc->physCellId;
      prach_pdu->num_prach_ocas = N_t_slot;
      prach_pdu->prach_start_symbol = start_symbol;
      prach_pdu->num_ra = fdm_index;
      prach_pdu->num_cs = get_NCS(
          scc->uplinkConfigCommon->initialUplinkBWP->rach_ConfigCommon->choice
              .setup->rach_ConfigGeneric.zeroCorrelationZoneConfig,
          format0,
          scc->uplinkConfigCommon->initialUplinkBWP->rach_ConfigCommon->choice
              .setup->restrictedSetConfig);

      LOG_D(MAC,
            "Frame %d, Slot %d: Prach Occasion id = %u fdm index = %u start "
            "symbol = %u slot index = %u subframe index = %u \n",
            frameP,
            slotP,
            prach_occasion_id,
            prach_pdu->num_ra,
            prach_pdu->prach_start_symbol,
            slot_index,
            RA_sfn_index);

      // SCF PRACH PDU format field does not consider A1/B1 etc. possibilities
      // We added 9 = A1/B1 10 = A2/B2 11 A3/B3
      if (format1 != 0xff) {
        switch (format0) {
          case 0xa1:
            prach_pdu->prach_format = 11;
            break;
...@@ -288,12 +304,13 @@ void schedule_nr_prach(module_id_t module_idP, frame_t frameP, sub_frame_t slotP
          case 0xa3:
            prach_pdu->prach_format = 13;
            break;
          default:
            AssertFatal(
                1 == 0,
                "Only formats A1/B1 A2/B2 A3/B3 are valid for dual format");
        }
      } else {
        switch (format0) {
          case 0:
            prach_pdu->prach_format = 0;
            break;
...@@ -327,14 +344,26 @@ void schedule_nr_prach(module_id_t module_idP, frame_t frameP, sub_frame_t slotP
          case 0xc2:
            prach_pdu->prach_format = 10;
            break;
          default:
            AssertFatal(1 == 0, "Invalid PRACH format");
        }
      }

      const int start_rb = cfg->prach_config.num_prach_fd_occasions_list[fdm_index].k1.value;
      const int pusch_mu = scc->uplinkConfigCommon->frequencyInfoUL->scs_SpecificCarrierList.list.array[0]->subcarrierSpacing;
      const int num_rb = get_N_RA_RB(cfg->prach_config.prach_sub_c_spacing.value, pusch_mu);
      uint16_t *vrb_map_UL =
          &RC.nrmac[module_idP]->common_channels[0].vrb_map_UL[slotP * 275];
      const uint16_t symb_mask = ((1 << N_dur) - 1) << start_symbol;
      for (int i = start_rb; i < start_rb + num_rb; ++i) {
        AssertFatal((vrb_map_UL[i] & symb_mask) == 0,
                    "cannot reserve resources for PRACH: at RB %d, "
                    "vrb_map_UL %x for symbols %x!\n",
                    i,
                    vrb_map_UL[i],
                    symb_mask);
        vrb_map_UL[i] |= symb_mask;
      }
    }
  }
}
...@@ -562,7 +591,9 @@ void nr_schedule_RA(module_id_t module_idP, frame_t frameP, sub_frame_t slotP){
  stop_meas(&mac->schedule_ra);
}

void nr_get_Msg3alloc(module_id_t module_id,
                      int CC_id,
                      NR_ServingCellConfigCommon_t *scc,
                      NR_BWP_Uplink_t *ubwp,
                      sub_frame_t current_slot,
                      frame_t current_frame,
...@@ -596,27 +627,25 @@ void nr_get_Msg3alloc(NR_ServingCellConfigCommon_t *scc,
  ra->Msg3_frame = current_frame + (temp_slot/nr_slots_per_frame[mu]);

  LOG_I(MAC, "[RAPROC] Msg3 slot %d: current slot %u Msg3 frame %u k2 %u Msg3_tda_id %u start symbol index %u\n", ra->Msg3_slot, current_slot, ra->Msg3_frame, k2,ra->Msg3_tda_id, StartSymbolIndex);

  uint16_t *vrb_map_UL =
      &RC.nrmac[module_id]->common_channels[CC_id].vrb_map_UL[ra->Msg3_slot * 275];
  const uint16_t bwpSize = NRRIV2BW(ubwp->bwp_Common->genericParameters.locationAndBandwidth, 275);
  /* search 18 free RBs */
  int rbSize = 0;
  int rbStart = 0;
  while (rbSize < 18) {
    rbStart += rbSize; /* last iteration rbSize was not enough, skip it */
    rbSize = 0;
    while (rbStart < bwpSize && vrb_map_UL[rbStart])
      rbStart++;
    AssertFatal(rbStart < bwpSize - 18, "no space to allocate Msg 3 for RA!\n");
    while (rbStart + rbSize < bwpSize
           && !vrb_map_UL[rbStart + rbSize]
           && rbSize < 18)
      rbSize++;
  }
  ra->msg3_nb_rb = 18;
  ra->msg3_first_rb = rbStart;
}
void nr_add_msg3(module_id_t module_idP, int CC_id, frame_t frameP, sub_frame_t slotP){
...@@ -631,10 +660,32 @@ void nr_add_msg3(module_id_t module_idP, int CC_id, frame_t frameP, sub_frame_t
return;
}
uint16_t *vrb_map_UL =
&RC.nrmac[module_idP]->common_channels[CC_id].vrb_map_UL[ra->Msg3_slot * 275];
for (int i = 0; i < ra->msg3_nb_rb; ++i) {
AssertFatal(!vrb_map_UL[i + ra->msg3_first_rb],
"RB %d in %4d.%2d is already taken, cannot allocate Msg3!\n",
i + ra->msg3_first_rb,
ra->Msg3_frame,
ra->Msg3_slot);
vrb_map_UL[i + ra->msg3_first_rb] = 1;
}
LOG_I(MAC, "[gNB %d][RAPROC] Frame %d, Subframe %d : CC_id %d RA is active, Msg3 in (%d,%d)\n", module_idP, frameP, slotP, CC_id, ra->Msg3_frame, ra->Msg3_slot); LOG_I(MAC, "[gNB %d][RAPROC] Frame %d, Subframe %d : CC_id %d RA is active, Msg3 in (%d,%d)\n", module_idP, frameP, slotP, CC_id, ra->Msg3_frame, ra->Msg3_slot);
nfapi_nr_pusch_pdu_t *pusch_pdu = &ra->pusch_pdu; nfapi_nr_ul_tti_request_t *future_ul_tti_req = &RC.nrmac[module_idP]->UL_tti_req_ahead[CC_id][ra->Msg3_slot];
AssertFatal(future_ul_tti_req->SFN == ra->Msg3_frame
&& future_ul_tti_req->Slot == ra->Msg3_slot,
"future UL_tti_req's frame.slot %d.%d does not match PUSCH %d.%d\n",
future_ul_tti_req->SFN,
future_ul_tti_req->Slot,
ra->Msg3_frame,
ra->Msg3_slot);
future_ul_tti_req->pdus_list[future_ul_tti_req->n_pdus].pdu_type = NFAPI_NR_UL_CONFIG_PUSCH_PDU_TYPE;
future_ul_tti_req->pdus_list[future_ul_tti_req->n_pdus].pdu_size = sizeof(nfapi_nr_pusch_pdu_t);
nfapi_nr_pusch_pdu_t *pusch_pdu = &future_ul_tti_req->pdus_list[future_ul_tti_req->n_pdus].pusch_pdu;
memset(pusch_pdu, 0, sizeof(nfapi_nr_pusch_pdu_t));
future_ul_tti_req->n_pdus += 1;
AssertFatal(ra->secondaryCellGroup,
"no secondaryCellGroup for RNTI %04x\n",
...@@ -913,7 +964,7 @@ void nr_generate_Msg2(module_id_t module_idP,
dl_req->nPDUs+=2;
// Program UL processing for Msg3
nr_get_Msg3alloc(module_idP, CC_id, scc, ubwp, slotP, frameP, ra);
LOG_I(MAC, "Frame %d, Subframe %d: Setting Msg3 reception for Frame %d Subframe %d\n", frameP, slotP, ra->Msg3_frame, ra->Msg3_slot);
nr_add_msg3(module_idP, CC_id, frameP, slotP);
ra->state = WAIT_Msg3;
...
...@@ -454,7 +454,8 @@ void nr_simple_dlsch_preprocessor(module_id_t module_id,
AssertFatal(sched_ctrl->pucch_sched_idx >= 0, "no uplink slot for PUCCH found!\n");
uint16_t *vrb_map = RC.nrmac[module_id]->common_channels[CC_id].vrb_map;
// for now HARQ PID is fixed and should be the same as in post-processor
const int current_harq_pid = slot % num_slots_per_tdd;
NR_UE_harq_t *harq = &sched_ctrl->harq_processes[current_harq_pid];
NR_UE_ret_info_t *retInfo = &sched_ctrl->retInfo[current_harq_pid];
const uint16_t bwpSize = NRRIV2BW(sched_ctrl->active_bwp->bwp_Common->genericParameters.locationAndBandwidth, 275);
...@@ -586,7 +587,7 @@ void nr_schedule_ue_spec(module_id_t module_id,
1 /* nrOfLayers */)
>> 3;
const int current_harq_pid = slot % num_slots_per_tdd;
NR_UE_harq_t *harq = &sched_ctrl->harq_processes[current_harq_pid];
NR_sched_pucch *pucch = &sched_ctrl->sched_pucch[sched_ctrl->pucch_sched_idx][sched_ctrl->pucch_occ_idx];
harq->feedback_slot = pucch->ul_slot;
...@@ -639,7 +640,15 @@ void nr_schedule_ue_spec(module_id_t module_id,
retInfo->numDmrsCdmGrpsNoData);
/* we do not have to do anything, since we do not require to get data
 * from RLC, encode MAC CEs, or copy data to FAPI structures */
LOG_W(MAC,
      "%d.%2d DL retransmission UE %d/RNTI %04x HARQ PID %d round %d NDI %d\n",
      frame,
      slot,
      UE_id,
      rnti,
      current_harq_pid,
      harq->round,
      harq->ndi);
} else { /* initial transmission */
/* reserve space for timing advance of UE if necessary,
...
...@@ -258,8 +258,6 @@ void nr_preprocessor_phytest(module_id_t module_id,
sub_frame_t slot,
int num_slots_per_tdd)
{
NR_UE_info_t *UE_info = &RC.nrmac[module_id]->UE_info;
const int UE_id = 0;
const int CC_id = 0;
...@@ -317,7 +315,7 @@ void nr_preprocessor_phytest(module_id_t module_id,
sched_ctrl->coreset = get_coreset(
    sched_ctrl->active_bwp, sched_ctrl->search_space, 1 /* dedicated */);
const int cid = sched_ctrl->coreset->controlResourceSetId;
const uint16_t Y = UE_info->Y[UE_id][cid][slot];
const int m = UE_info->num_pdcch_cand[UE_id][cid];
sched_ctrl->cce_index = allocate_nr_CCEs(RC.nrmac[module_id],
                                         sched_ctrl->active_bwp,
...@@ -352,457 +350,131 @@ void nr_preprocessor_phytest(module_id_t module_id,
    vrb_map[rb + sched_ctrl->rbStart] = 1;
}
void config_uldci(NR_BWP_Uplink_t *ubwp, void nr_ul_preprocessor_phytest(module_id_t module_id,
nfapi_nr_pusch_pdu_t *pusch_pdu, frame_t frame,
nfapi_nr_dl_tti_pdcch_pdu_rel15_t *pdcch_pdu_rel15, sub_frame_t slot,
dci_pdu_rel15_t *dci_pdu_rel15, int num_slots_per_tdd,
int *dci_formats, int *rnti_types, uint64_t ulsch_in_slot_bitmap) {
int time_domain_assignment, uint8_t tpc, gNB_MAC_INST *nr_mac = RC.nrmac[module_id];
int n_ubwp, int bwp_id) { NR_COMMON_channels_t *cc = nr_mac->common_channels;
NR_ServingCellConfigCommon_t *scc = cc->ServingCellConfigCommon;
switch(dci_formats[(pdcch_pdu_rel15->numDlDci)-1]) { const int mu = scc->uplinkConfigCommon->initialUplinkBWP->genericParameters.subcarrierSpacing;
case NR_UL_DCI_FORMAT_0_0: NR_UE_info_t *UE_info = &nr_mac->UE_info;
dci_pdu_rel15->frequency_domain_assignment.val = PRBalloc_to_locationandbandwidth0(pusch_pdu->rb_size,
pusch_pdu->rb_start, AssertFatal(UE_info->num_UEs <= 1,
NRRIV2BW(ubwp->bwp_Common->genericParameters.locationAndBandwidth,275)); "%s() cannot handle more than one UE, but found %d\n",
__func__,
dci_pdu_rel15->time_domain_assignment.val = time_domain_assignment; UE_info->num_UEs);
dci_pdu_rel15->frequency_hopping_flag.val = pusch_pdu->frequency_hopping; if (UE_info->num_UEs == 0)
dci_pdu_rel15->mcs = 9; return;
dci_pdu_rel15->format_indicator = 0;
dci_pdu_rel15->ndi = 1;
dci_pdu_rel15->rv = 0;
dci_pdu_rel15->harq_pid = 0;
dci_pdu_rel15->tpc = 1;
break;
case NR_UL_DCI_FORMAT_0_1:
dci_pdu_rel15->ndi = pusch_pdu->pusch_data.new_data_indicator;
dci_pdu_rel15->rv = pusch_pdu->pusch_data.rv_index;
dci_pdu_rel15->harq_pid = pusch_pdu->pusch_data.harq_process_id;
dci_pdu_rel15->frequency_hopping_flag.val = pusch_pdu->frequency_hopping;
dci_pdu_rel15->dai[0].val = 0; //TODO
// bwp indicator
if (n_ubwp < 4)
dci_pdu_rel15->bwp_indicator.val = bwp_id;
else
dci_pdu_rel15->bwp_indicator.val = bwp_id - 1; // as per table 7.3.1.1.2-1 in 38.212
// frequency domain assignment
if (ubwp->bwp_Dedicated->pusch_Config->choice.setup->resourceAllocation==NR_PUSCH_Config__resourceAllocation_resourceAllocationType1)
dci_pdu_rel15->frequency_domain_assignment.val = PRBalloc_to_locationandbandwidth0(pusch_pdu->rb_size,
pusch_pdu->rb_start,
NRRIV2BW(ubwp->bwp_Common->genericParameters.locationAndBandwidth,275));
else
AssertFatal(1==0,"Only frequency resource allocation type 1 is currently supported\n");
// time domain assignment
dci_pdu_rel15->time_domain_assignment.val = time_domain_assignment;
// mcs
dci_pdu_rel15->mcs = pusch_pdu->mcs_index;
// tpc command for pusch
dci_pdu_rel15->tpc = tpc;
// SRS resource indicator
if (ubwp->bwp_Dedicated->pusch_Config->choice.setup->txConfig != NULL) {
if (*ubwp->bwp_Dedicated->pusch_Config->choice.setup->txConfig == NR_PUSCH_Config__txConfig_codebook)
dci_pdu_rel15->srs_resource_indicator.val = 0; // taking resource 0 for SRS
else
AssertFatal(1==0,"Non Codebook configuration non supported\n");
}
// Antenna Ports
dci_pdu_rel15->antenna_ports.val = 0; // TODO for now it is hardcoded, it should depends on cdm group no data and rank
// DMRS sequence initialization
dci_pdu_rel15->dmrs_sequence_initialization.val = pusch_pdu->scid;
break;
default :
AssertFatal(1==0,"Valid UL formats are 0_0 and 0_1 \n");
}
LOG_D(MAC, "[gNB scheduler phytest] ULDCI type 0 payload: PDCCH CCEIndex %d, freq_alloc %d, time_alloc %d, freq_hop_flag %d, mcs %d tpc %d ndi %d rv %d\n",
pdcch_pdu_rel15->dci_pdu.CceIndex[pdcch_pdu_rel15->numDlDci],
dci_pdu_rel15->frequency_domain_assignment.val,
dci_pdu_rel15->time_domain_assignment.val,
dci_pdu_rel15->frequency_hopping_flag.val,
dci_pdu_rel15->mcs,
dci_pdu_rel15->tpc,
dci_pdu_rel15->ndi,
dci_pdu_rel15->rv);
}
int8_t select_ul_harq_pid(NR_UE_sched_ctrl_t *sched_ctrl) {
uint8_t hrq_id;
uint8_t max_ul_harq_pids = 3; // temp: for testing
// schedule active harq processes
NR_UE_ul_harq_t cur_harq;
for (hrq_id=0; hrq_id < max_ul_harq_pids; hrq_id++) {
cur_harq = sched_ctrl->ul_harq_processes[hrq_id];
if (cur_harq.state==ACTIVE_NOT_SCHED) {
#ifdef UL_HARQ_PRINT
printf("[SCHED] Found ulharq id %d, scheduling it for retransmission\n",hrq_id);
#endif
return hrq_id;
}
}
// schedule new harq processes const int UE_id = 0;
for (hrq_id=0; hrq_id < max_ul_harq_pids; hrq_id++) { const int CC_id = 0;
cur_harq = sched_ctrl->ul_harq_processes[hrq_id];
if (cur_harq.state==INACTIVE) {
#ifdef UL_HARQ_PRINT
printf("[SCHED] Found new ulharq id %d, scheduling it\n",hrq_id);
#endif
return hrq_id;
}
}
LOG_E(MAC,"All UL HARQ processes are busy. Cannot schedule ULSCH\n");
return -1;
}
long get_K2(NR_BWP_Uplink_t *ubwp, int time_domain_assignment, int mu) { NR_UE_sched_ctrl_t *sched_ctrl = &UE_info->UE_sched_ctrl[UE_id];
DevAssert(ubwp);
const NR_PUSCH_TimeDomainResourceAllocation_t *tda_list = ubwp->bwp_Common->pusch_ConfigCommon->choice.setup->pusch_TimeDomainAllocationList->list.array[time_domain_assignment];
if (tda_list->k2)
return *tda_list->k2;
else if (mu < 2)
return 1;
else if (mu == 2)
return 2;
else
return 3;
}
void schedule_fapi_ul_pdu(int Mod_idP, const int tda = 1;
frame_t frameP, const struct NR_PUSCH_TimeDomainResourceAllocationList *tdaList =
sub_frame_t slotP, sched_ctrl->active_ubwp->bwp_Common->pusch_ConfigCommon->choice.setup->pusch_TimeDomainAllocationList;
int num_slots_per_tdd, AssertFatal(tda < tdaList->list.count,
int ul_slots, "time domain assignment %d >= %d\n",
int time_domain_assignment, tda,
uint64_t ulsch_in_slot_bitmap) { tdaList->list.count);
int K2 = get_K2(sched_ctrl->active_ubwp, tda, mu);
gNB_MAC_INST *nr_mac = RC.nrmac[Mod_idP]; const int sched_frame = frame + (slot + K2 >= num_slots_per_tdd);
NR_COMMON_channels_t *cc = nr_mac->common_channels; const int sched_slot = (slot + K2) % num_slots_per_tdd;
NR_ServingCellConfigCommon_t *scc = cc->ServingCellConfigCommon; /* check if slot is UL, and that slot is 8 (assuming K2=6 because of UE
int bwp_id=1;
int mu = scc->uplinkConfigCommon->initialUplinkBWP->genericParameters.subcarrierSpacing;
int UE_id = 0;
NR_UE_info_t *UE_info = &RC.nrmac[Mod_idP]->UE_info;
AssertFatal(UE_info->active[UE_id],"Cannot find UE_id %d is not active\n",UE_id);
NR_CellGroupConfig_t *secondaryCellGroup = UE_info->secondaryCellGroup[UE_id];
AssertFatal(secondaryCellGroup->spCellConfig->spCellConfigDedicated->downlinkBWP_ToAddModList->list.count == 1,
"downlinkBWP_ToAddModList has %d BWP!\n",
secondaryCellGroup->spCellConfig->spCellConfigDedicated->downlinkBWP_ToAddModList->list.count);
NR_BWP_Uplink_t *ubwp=secondaryCellGroup->spCellConfig->spCellConfigDedicated->uplinkConfig->uplinkBWP_ToAddModList->list.array[bwp_id-1];
int n_ubwp = secondaryCellGroup->spCellConfig->spCellConfigDedicated->uplinkConfig->uplinkBWP_ToAddModList->list.count;
NR_BWP_Downlink_t *bwp=secondaryCellGroup->spCellConfig->spCellConfigDedicated->downlinkBWP_ToAddModList->list.array[bwp_id-1];
NR_PUSCH_Config_t *pusch_Config = ubwp->bwp_Dedicated->pusch_Config->choice.setup;
AssertFatal(time_domain_assignment<ubwp->bwp_Common->pusch_ConfigCommon->choice.setup->pusch_TimeDomainAllocationList->list.count,
"time_domain_assignment %d>=%d\n",time_domain_assignment,ubwp->bwp_Common->pusch_ConfigCommon->choice.setup->pusch_TimeDomainAllocationList->list.count);
int K2 = get_K2(ubwp, time_domain_assignment,mu);
/* check if slot is UL, and for phy test verify that it is in first TDD
* period, slot 8 (for K2=2, this is at slot 6 in the gNB; because of UE
* limitations). Note that if K2 or the TDD configuration is changed, below * limitations). Note that if K2 or the TDD configuration is changed, below
* conditions might exclude each other and never be true */ * conditions might exclude each other and never be true */
const int slot_idx = (slotP + K2) % num_slots_per_tdd; if (!(is_xlsch_in_slot(ulsch_in_slot_bitmap, sched_slot) && sched_slot == 8))
if (is_xlsch_in_slot(ulsch_in_slot_bitmap, slot_idx) return;
&& (!get_softmodem_params()->phy_test || slot_idx == 8)) {
const uint16_t rbStart = 0;
nfapi_nr_ul_dci_request_t *UL_dci_req = &RC.nrmac[Mod_idP]->UL_dci_req[0]; const uint16_t rbSize = 50; /* due to OAI UE limitations */
UL_dci_req->SFN = frameP; uint16_t *vrb_map_UL =
UL_dci_req->Slot = slotP; &RC.nrmac[module_id]->common_channels[CC_id].vrb_map_UL[sched_slot * 275];
nfapi_nr_ul_dci_request_pdus_t *ul_dci_request_pdu; for (int i = rbStart; i < rbStart + rbSize; ++i) {
if (vrb_map_UL[i]) {
AssertFatal(bwp->bwp_Dedicated->pdcch_Config->choice.setup->searchSpacesToAddModList!=NULL,"searchPsacesToAddModList is null\n"); LOG_E(MAC,
AssertFatal(bwp->bwp_Dedicated->pdcch_Config->choice.setup->searchSpacesToAddModList->list.count>0, "%s(): %4d.%2d RB %d is already reserved, cannot schedule UE\n",
"searchPsacesToAddModList is empty\n"); __func__,
frame,
uint16_t rnti = UE_info->rnti[UE_id]; slot,
i);
int first_ul_slot = num_slots_per_tdd - ul_slots; return;
NR_sched_pusch *pusch_sched = &UE_info->UE_sched_ctrl[UE_id].sched_pusch[slotP+K2-first_ul_slot];
pusch_sched->frame = frameP;
pusch_sched->slot = slotP + K2;
pusch_sched->active = true;
nfapi_nr_pusch_pdu_t *pusch_pdu = &pusch_sched->pusch_pdu;
memset(pusch_pdu,0,sizeof(nfapi_nr_pusch_pdu_t));
LOG_D(MAC, "Scheduling UE specific PUSCH\n");
//UL_tti_req = &nr_mac->UL_tti_req[CC_id];
int dci_formats[2];
int rnti_types[2];
NR_SearchSpace_t *ss;
int target_ss = NR_SearchSpace__searchSpaceType_PR_ue_Specific;
AssertFatal(bwp->bwp_Dedicated->pdcch_Config->choice.setup->searchSpacesToAddModList!=NULL,"searchPsacesToAddModList is null\n");
AssertFatal(bwp->bwp_Dedicated->pdcch_Config->choice.setup->searchSpacesToAddModList->list.count>0,
"searchPsacesToAddModList is empty\n");
int found=0;
for (int i=0;i<bwp->bwp_Dedicated->pdcch_Config->choice.setup->searchSpacesToAddModList->list.count;i++) {
ss=bwp->bwp_Dedicated->pdcch_Config->choice.setup->searchSpacesToAddModList->list.array[i];
AssertFatal(ss->controlResourceSetId != NULL,"ss->controlResourceSetId is null\n");
AssertFatal(ss->searchSpaceType != NULL,"ss->searchSpaceType is null\n");
if (ss->searchSpaceType->present == target_ss) {
found=1;
break;
}
}
AssertFatal(found==1,"Couldn't find an adequate searchspace\n");
if (ss->searchSpaceType->choice.ue_Specific->dci_Formats)
dci_formats[0] = NR_UL_DCI_FORMAT_0_1;
else
dci_formats[0] = NR_UL_DCI_FORMAT_0_0;
rnti_types[0] = NR_RNTI_C;
//Resource Allocation in time domain
int startSymbolAndLength=0;
int StartSymbolIndex,NrOfSymbols,mapping_type;
startSymbolAndLength = ubwp->bwp_Common->pusch_ConfigCommon->choice.setup->pusch_TimeDomainAllocationList->list.array[time_domain_assignment]->startSymbolAndLength;
SLIV2SL(startSymbolAndLength,&StartSymbolIndex,&NrOfSymbols);
pusch_pdu->start_symbol_index = StartSymbolIndex;
pusch_pdu->nr_of_symbols = NrOfSymbols;
mapping_type = ubwp->bwp_Common->pusch_ConfigCommon->choice.setup->pusch_TimeDomainAllocationList->list.array[time_domain_assignment]->mappingType;
pusch_pdu->pdu_bit_map = PUSCH_PDU_BITMAP_PUSCH_DATA;
pusch_pdu->rnti = rnti;
pusch_pdu->handle = 0; //not yet used
pusch_pdu->bwp_size = NRRIV2BW(ubwp->bwp_Common->genericParameters.locationAndBandwidth,275);
pusch_pdu->bwp_start = NRRIV2PRBOFFSET(ubwp->bwp_Common->genericParameters.locationAndBandwidth,275);
pusch_pdu->subcarrier_spacing = ubwp->bwp_Common->genericParameters.subcarrierSpacing;
pusch_pdu->cyclic_prefix = 0;
if (pusch_Config->transformPrecoder == NULL) {
if (scc->uplinkConfigCommon->initialUplinkBWP->rach_ConfigCommon->choice.setup->msg3_transformPrecoder == NULL)
pusch_pdu->transform_precoding = 1;
else
pusch_pdu->transform_precoding = 0;
}
else
pusch_pdu->transform_precoding = *pusch_Config->transformPrecoder;
if (pusch_Config->dataScramblingIdentityPUSCH != NULL)
pusch_pdu->data_scrambling_id = *pusch_Config->dataScramblingIdentityPUSCH;
else
pusch_pdu->data_scrambling_id = *scc->physCellId;
pusch_pdu->mcs_index = 9;
if (pusch_pdu->transform_precoding)
pusch_pdu->mcs_table = get_pusch_mcs_table(pusch_Config->mcs_Table, 0,
dci_formats[0], rnti_types[0], target_ss, false);
else
pusch_pdu->mcs_table = get_pusch_mcs_table(pusch_Config->mcs_TableTransformPrecoder, 1,
dci_formats[0], rnti_types[0], target_ss, false);
pusch_pdu->target_code_rate = nr_get_code_rate_ul(pusch_pdu->mcs_index,pusch_pdu->mcs_table);
pusch_pdu->qam_mod_order = nr_get_Qm_ul(pusch_pdu->mcs_index,pusch_pdu->mcs_table);
if (pusch_Config->tp_pi2BPSK!=NULL) {
if(((pusch_pdu->mcs_table==3)&&(pusch_pdu->mcs_index<2)) ||
((pusch_pdu->mcs_table==4)&&(pusch_pdu->mcs_index<6))) {
pusch_pdu->target_code_rate = pusch_pdu->target_code_rate>>1;
pusch_pdu->qam_mod_order = pusch_pdu->qam_mod_order<<1;
}
}
pusch_pdu->nrOfLayers = 1;
//Pusch Allocation in frequency domain [TS38.214, sec 6.1.2.2]
if (pusch_Config->resourceAllocation==NR_PUSCH_Config__resourceAllocation_resourceAllocationType1) {
pusch_pdu->resource_alloc = 1; //type 1
pusch_pdu->rb_start = 0;
if (get_softmodem_params()->phy_test==1)
pusch_pdu->rb_size = min(pusch_pdu->bwp_size,50);
else
pusch_pdu->rb_size = pusch_pdu->bwp_size;
}
else
AssertFatal(1==0,"Only frequency resource allocation type 1 is currently supported\n");
pusch_pdu->vrb_to_prb_mapping = 0;
if (pusch_Config->frequencyHopping==NULL)
pusch_pdu->frequency_hopping = 0;
else
pusch_pdu->frequency_hopping = 1;
//pusch_pdu->tx_direct_current_location;//The uplink Tx Direct Current location for the carrier. Only values in the value range of this field between 0 and 3299, which indicate the subcarrier index within the carrier corresponding 1o the numerology of the corresponding uplink BWP and value 3300, which indicates "Outside the carrier" and value 3301, which indicates "Undetermined position within the carrier" are used. [TS38.331, UplinkTxDirectCurrentBWP IE]
//pusch_pdu->uplink_frequency_shift_7p5khz = 0;
// --------------------
// ------- DMRS -------
// --------------------
NR_DMRS_UplinkConfig_t *NR_DMRS_UplinkConfig;
if (mapping_type == NR_PUSCH_TimeDomainResourceAllocation__mappingType_typeA)
NR_DMRS_UplinkConfig = pusch_Config->dmrs_UplinkForPUSCH_MappingTypeA->choice.setup;
else
NR_DMRS_UplinkConfig = pusch_Config->dmrs_UplinkForPUSCH_MappingTypeB->choice.setup;
if (NR_DMRS_UplinkConfig->dmrs_Type == NULL)
pusch_pdu->dmrs_config_type = 0;
else
pusch_pdu->dmrs_config_type = 1;
pusch_pdu->scid = 0; // DMRS sequence initialization [TS38.211, sec 6.4.1.1.1]
if (pusch_pdu->transform_precoding) { // transform precoding disabled
long *scramblingid;
if (pusch_pdu->scid == 0)
scramblingid = NR_DMRS_UplinkConfig->transformPrecodingDisabled->scramblingID0;
else
scramblingid = NR_DMRS_UplinkConfig->transformPrecodingDisabled->scramblingID1;
if (scramblingid == NULL)
pusch_pdu->ul_dmrs_scrambling_id = *scc->physCellId;
else
pusch_pdu->ul_dmrs_scrambling_id = *scramblingid;
}
else {
pusch_pdu->ul_dmrs_scrambling_id = *scc->physCellId;
if (NR_DMRS_UplinkConfig->transformPrecodingEnabled->nPUSCH_Identity != NULL)
pusch_pdu->pusch_identity = *NR_DMRS_UplinkConfig->transformPrecodingEnabled->nPUSCH_Identity;
else
pusch_pdu->pusch_identity = *scc->physCellId;
}
pusch_dmrs_AdditionalPosition_t additional_pos;
if (NR_DMRS_UplinkConfig->dmrs_AdditionalPosition == NULL)
additional_pos = 2;
else {
if (*NR_DMRS_UplinkConfig->dmrs_AdditionalPosition == NR_DMRS_UplinkConfig__dmrs_AdditionalPosition_pos3)
additional_pos = 3;
else
additional_pos = *NR_DMRS_UplinkConfig->dmrs_AdditionalPosition;
}
pusch_maxLength_t pusch_maxLength;
if (NR_DMRS_UplinkConfig->maxLength == NULL)
pusch_maxLength = 1;
else
pusch_maxLength = 2;
uint16_t l_prime_mask = get_l_prime(pusch_pdu->nr_of_symbols, mapping_type, additional_pos, pusch_maxLength);
pusch_pdu->ul_dmrs_symb_pos = l_prime_mask << pusch_pdu->start_symbol_index;
pusch_pdu->num_dmrs_cdm_grps_no_data = 1;
pusch_pdu->dmrs_ports = 1;
// --------------------------------------------------------------------------------------------------------------------------------------------
// --------------------
// ------- PTRS -------
// --------------------
if (NR_DMRS_UplinkConfig->phaseTrackingRS != NULL) {
// TODO to be fixed from RRC config
uint8_t ptrs_mcs1 = 2; // higher layer parameter in PTRS-UplinkConfig
uint8_t ptrs_mcs2 = 4; // higher layer parameter in PTRS-UplinkConfig
uint8_t ptrs_mcs3 = 10; // higher layer parameter in PTRS-UplinkConfig
uint16_t n_rb0 = 25; // higher layer parameter in PTRS-UplinkConfig
uint16_t n_rb1 = 75; // higher layer parameter in PTRS-UplinkConfig
pusch_pdu->pusch_ptrs.ptrs_time_density = get_L_ptrs(ptrs_mcs1, ptrs_mcs2, ptrs_mcs3, pusch_pdu->mcs_index, pusch_pdu->mcs_table);
pusch_pdu->pusch_ptrs.ptrs_freq_density = get_K_ptrs(n_rb0, n_rb1, pusch_pdu->rb_size);
pusch_pdu->pusch_ptrs.ptrs_ports_list = (nfapi_nr_ptrs_ports_t *) malloc(2*sizeof(nfapi_nr_ptrs_ports_t));
pusch_pdu->pusch_ptrs.ptrs_ports_list[0].ptrs_re_offset = 0;
pusch_pdu->pdu_bit_map |= PUSCH_PDU_BITMAP_PUSCH_PTRS; // enable PUSCH PTRS
}
else{
pusch_pdu->pdu_bit_map &= ~PUSCH_PDU_BITMAP_PUSCH_PTRS; // disable PUSCH PTRS
} }
}
// -------------------------------------------------------------------------------------------------------------------------------------------- sched_ctrl->sched_pusch.slot = sched_slot;
sched_ctrl->sched_pusch.frame = sched_frame;
//Pusch Allocation in frequency domain [TS38.214, sec 6.1.2.2]
//Optional Data only included if indicated in pduBitmap
int8_t harq_id = select_ul_harq_pid(&UE_info->UE_sched_ctrl[UE_id]);
if (harq_id < 0) return;
NR_UE_ul_harq_t *cur_harq = &UE_info->UE_sched_ctrl[UE_id].ul_harq_processes[harq_id];
pusch_pdu->pusch_data.harq_process_id = harq_id;
pusch_pdu->pusch_data.new_data_indicator = cur_harq->ndi;
pusch_pdu->pusch_data.rv_index = nr_rv_round_map[cur_harq->round];
cur_harq->state = ACTIVE_SCHED;
cur_harq->last_tx_slot = pusch_sched->slot;
uint8_t num_dmrs_symb = 0;
for(int dmrs_counter = pusch_pdu->start_symbol_index; dmrs_counter < pusch_pdu->start_symbol_index + pusch_pdu->nr_of_symbols; dmrs_counter++)
num_dmrs_symb += ((pusch_pdu->ul_dmrs_symb_pos >> dmrs_counter) & 1);
uint8_t N_PRB_DMRS;
if (pusch_pdu->dmrs_config_type == 0) {
N_PRB_DMRS = pusch_pdu->num_dmrs_cdm_grps_no_data*6;
}
else {
N_PRB_DMRS = pusch_pdu->num_dmrs_cdm_grps_no_data*4;
}
pusch_pdu->pusch_data.tb_size = nr_compute_tbs(pusch_pdu->qam_mod_order,
                                               pusch_pdu->target_code_rate,
                                               pusch_pdu->rb_size,
                                               pusch_pdu->nr_of_symbols,
                                               N_PRB_DMRS * num_dmrs_symb,
                                               0, //nb_rb_oh
                                               0,
                                               pusch_pdu->nrOfLayers)>>3;
UE_info->mac_stats[UE_id].ulsch_rounds[cur_harq->round]++;
if (cur_harq->round == 0) UE_info->mac_stats[UE_id].ulsch_total_bytes_scheduled+=pusch_pdu->pusch_data.tb_size;
pusch_pdu->pusch_data.num_cb = 0; //CBG not supported
//pusch_pdu->pusch_data.cb_present_and_position;
//pusch_pdu->pusch_uci;
//pusch_pdu->pusch_ptrs;
//pusch_pdu->dfts_ofdm;
//beamforming
//pusch_pdu->beamforming; //not used for now
const int target_ss = NR_SearchSpace__searchSpaceType_PR_ue_Specific;
sched_ctrl->search_space = get_searchspace(sched_ctrl->active_bwp, target_ss);
uint8_t nr_of_candidates;
find_aggregation_candidates(&sched_ctrl->aggregation_level,
                            &nr_of_candidates,
                            sched_ctrl->search_space);
sched_ctrl->coreset = get_coreset(
    sched_ctrl->active_bwp, sched_ctrl->search_space, 1 /* dedicated */);
const int cid = sched_ctrl->coreset->controlResourceSetId;
const uint16_t Y = UE_info->Y[UE_id][cid][slot];
const int m = UE_info->num_pdcch_cand[UE_id][cid];
sched_ctrl->cce_index = allocate_nr_CCEs(RC.nrmac[module_id],
                                         sched_ctrl->active_bwp,
                                         sched_ctrl->coreset,
                                         sched_ctrl->aggregation_level,
                                         Y,
                                         m,
                                         nr_of_candidates);
if (sched_ctrl->cce_index < 0) {
  LOG_E(MAC, "%s(): CCE list not empty, couldn't schedule PUSCH\n", __func__);
  return;
ul_dci_request_pdu = &UL_dci_req->ul_dci_pdu_list[UL_dci_req->numPdus];
memset((void*)ul_dci_request_pdu,0,sizeof(nfapi_nr_ul_dci_request_pdus_t));
ul_dci_request_pdu->PDUType = NFAPI_NR_DL_TTI_PDCCH_PDU_TYPE;
ul_dci_request_pdu->PDUSize = (uint8_t)(2+sizeof(nfapi_nr_dl_tti_pdcch_pdu));
nfapi_nr_dl_tti_pdcch_pdu_rel15_t *pdcch_pdu_rel15 = &ul_dci_request_pdu->pdcch_pdu.pdcch_pdu_rel15;
UL_dci_req->numPdus+=1;
LOG_D(MAC,"Configuring ULDCI/PDCCH in %d.%d\n", frameP,slotP);
uint8_t nr_of_candidates, aggregation_level;
find_aggregation_candidates(&aggregation_level, &nr_of_candidates, ss);
NR_ControlResourceSet_t *coreset = get_coreset(bwp, ss, 1 /* dedicated */);
const int cid = coreset->controlResourceSetId;
const uint16_t Y = UE_info->Y[UE_id][cid][nr_mac->current_slot];
const int m = UE_info->num_pdcch_cand[UE_id][cid];
int CCEIndex = allocate_nr_CCEs(nr_mac,
bwp,
coreset,
aggregation_level,
Y,
m,
nr_of_candidates);
if (CCEIndex < 0) {
LOG_E(MAC, "%s(): CCE list not empty, couldn't schedule PUSCH\n", __func__);
pusch_sched->active = false;
return;
}
else {
UE_info->num_pdcch_cand[UE_id][cid]++;
nr_configure_pdcch(nr_mac,
pdcch_pdu_rel15,
UE_info->rnti[UE_id],
ss,
coreset,
scc,
bwp,
aggregation_level,
CCEIndex);
dci_pdu_rel15_t *dci_pdu_rel15 = calloc(MAX_DCI_CORESET,sizeof(dci_pdu_rel15_t));
config_uldci(ubwp,pusch_pdu,pdcch_pdu_rel15,&dci_pdu_rel15[0],dci_formats,rnti_types,time_domain_assignment,UE_info->UE_sched_ctrl[UE_id].tpc0,n_ubwp,bwp_id);
fill_dci_pdu_rel15(scc,secondaryCellGroup,pdcch_pdu_rel15,dci_pdu_rel15,dci_formats,rnti_types,pusch_pdu->bwp_size,bwp_id);
free(dci_pdu_rel15);
}
} }
}
UE_info->num_pdcch_cand[UE_id][cid]++;
sched_ctrl->sched_pusch.time_domain_allocation = tda;
const long f = sched_ctrl->search_space->searchSpaceType->choice.ue_Specific->dci_Formats;
const int dci_format = f ? NR_UL_DCI_FORMAT_0_1 : NR_UL_DCI_FORMAT_0_0;
const uint8_t num_dmrs_cdm_grps_no_data = 1;
/* we want to avoid a lengthy deduction of DMRS and other parameters in
* every TTI if we can save it, so check whether dci_format, TDA, or
* num_dmrs_cdm_grps_no_data has changed and only then recompute */
NR_sched_pusch_save_t *ps = &sched_ctrl->pusch_save;
if (ps->time_domain_allocation != tda
|| ps->dci_format != dci_format
|| ps->num_dmrs_cdm_grps_no_data != num_dmrs_cdm_grps_no_data)
nr_save_pusch_fields(scc,
sched_ctrl->active_ubwp,
dci_format,
tda,
num_dmrs_cdm_grps_no_data,
ps);
const int mcs = 9;
NR_sched_pusch_t *sched_pusch = &sched_ctrl->sched_pusch;
sched_pusch->mcs = mcs;
sched_pusch->rbStart = rbStart;
sched_pusch->rbSize = rbSize;
/* Calculate TBS from MCS */
sched_pusch->R = nr_get_code_rate_ul(mcs, ps->mcs_table);
sched_pusch->Qm = nr_get_Qm_ul(mcs, ps->mcs_table);
if (ps->pusch_Config->tp_pi2BPSK
&& ((ps->mcs_table == 3 && mcs < 2) || (ps->mcs_table == 4 && mcs < 6))) {
sched_pusch->R >>= 1;
sched_pusch->Qm <<= 1;
}
sched_pusch->tb_size = nr_compute_tbs(sched_pusch->Qm,
sched_pusch->R,
sched_pusch->rbSize,
ps->nrOfSymbols,
ps->N_PRB_DMRS * ps->num_dmrs_symb,
0, // nb_rb_oh
0,
1 /* NrOfLayers */)
>> 3;
/* mark the corresponding RBs as used */
for (int rb = rbStart; rb < rbStart + rbSize; rb++)
vrb_map_UL[rb] = 1;
}
...@@ -213,6 +213,75 @@ int allocate_nr_CCEs(gNB_MAC_INST *nr_mac, ...@@ -213,6 +213,75 @@ int allocate_nr_CCEs(gNB_MAC_INST *nr_mac,
} }
void nr_save_pusch_fields(const NR_ServingCellConfigCommon_t *scc,
const NR_BWP_Uplink_t *ubwp,
long dci_format,
int tda,
uint8_t num_dmrs_cdm_grps_no_data,
NR_sched_pusch_save_t *ps)
{
ps->dci_format = dci_format;
ps->time_domain_allocation = tda;
ps->num_dmrs_cdm_grps_no_data = num_dmrs_cdm_grps_no_data;
const struct NR_PUSCH_TimeDomainResourceAllocationList *tdaList =
ubwp->bwp_Common->pusch_ConfigCommon->choice.setup->pusch_TimeDomainAllocationList;
const int startSymbolAndLength = tdaList->list.array[tda]->startSymbolAndLength;
SLIV2SL(startSymbolAndLength,
&ps->startSymbolIndex,
&ps->nrOfSymbols);
ps->pusch_Config = ubwp->bwp_Dedicated->pusch_Config->choice.setup;
if (!ps->pusch_Config->transformPrecoder)
ps->transform_precoding = !scc->uplinkConfigCommon->initialUplinkBWP->rach_ConfigCommon->choice.setup->msg3_transformPrecoder;
else
ps->transform_precoding = *ps->pusch_Config->transformPrecoder;
const int target_ss = NR_SearchSpace__searchSpaceType_PR_ue_Specific;
if (ps->transform_precoding)
ps->mcs_table = get_pusch_mcs_table(ps->pusch_Config->mcs_Table,
0,
ps->dci_format,
NR_RNTI_C,
target_ss,
false);
else
ps->mcs_table = get_pusch_mcs_table(ps->pusch_Config->mcs_TableTransformPrecoder,
1,
ps->dci_format,
NR_RNTI_C,
target_ss,
false);
/* DMRS calculations */
ps->mapping_type = tdaList->list.array[tda]->mappingType;
ps->NR_DMRS_UplinkConfig =
ps->mapping_type == NR_PUSCH_TimeDomainResourceAllocation__mappingType_typeA
? ps->pusch_Config->dmrs_UplinkForPUSCH_MappingTypeA->choice.setup
: ps->pusch_Config->dmrs_UplinkForPUSCH_MappingTypeB->choice.setup;
ps->dmrs_config_type = ps->NR_DMRS_UplinkConfig->dmrs_Type == NULL ? 0 : 1;
const pusch_dmrs_AdditionalPosition_t additional_pos =
ps->NR_DMRS_UplinkConfig->dmrs_AdditionalPosition == NULL
? 2
: (*ps->NR_DMRS_UplinkConfig->dmrs_AdditionalPosition ==
NR_DMRS_UplinkConfig__dmrs_AdditionalPosition_pos3
? 3
: *ps->NR_DMRS_UplinkConfig->dmrs_AdditionalPosition);
const pusch_maxLength_t pusch_maxLength =
ps->NR_DMRS_UplinkConfig->maxLength == NULL ? 1 : 2;
const uint16_t l_prime_mask = get_l_prime(ps->nrOfSymbols,
ps->mapping_type,
additional_pos,
pusch_maxLength);
ps->ul_dmrs_symb_pos = l_prime_mask << ps->startSymbolIndex;
uint8_t num_dmrs_symb = 0;
for(int i = ps->startSymbolIndex; i < ps->startSymbolIndex + ps->nrOfSymbols; i++)
num_dmrs_symb += (ps->ul_dmrs_symb_pos >> i) & 1;
ps->num_dmrs_symb = num_dmrs_symb;
ps->N_PRB_DMRS = ps->dmrs_config_type == 0
? num_dmrs_cdm_grps_no_data * 6
: num_dmrs_cdm_grps_no_data * 4;
}
void nr_configure_css_dci_initial(nfapi_nr_dl_tti_pdcch_pdu_rel15_t* pdcch_pdu, void nr_configure_css_dci_initial(nfapi_nr_dl_tti_pdcch_pdu_rel15_t* pdcch_pdu,
nr_scs_e scs_common, nr_scs_e scs_common,
nr_scs_e pdcch_scs, nr_scs_e pdcch_scs,
...@@ -637,6 +706,81 @@ void nr_fill_nfapi_dl_pdu(int Mod_idP, ...@@ -637,6 +706,81 @@ void nr_fill_nfapi_dl_pdu(int Mod_idP,
dl_req->nPDUs += 2; dl_req->nPDUs += 2;
} }
void config_uldci(NR_BWP_Uplink_t *ubwp,
nfapi_nr_pusch_pdu_t *pusch_pdu,
nfapi_nr_dl_tti_pdcch_pdu_rel15_t *pdcch_pdu_rel15,
dci_pdu_rel15_t *dci_pdu_rel15,
int *dci_formats,
int time_domain_assignment, uint8_t tpc,
int n_ubwp, int bwp_id) {
const int bw = NRRIV2BW(ubwp->bwp_Common->genericParameters.locationAndBandwidth, 275);
switch (dci_formats[(pdcch_pdu_rel15->numDlDci) - 1]) {
case NR_UL_DCI_FORMAT_0_0:
dci_pdu_rel15->frequency_domain_assignment.val =
PRBalloc_to_locationandbandwidth0(pusch_pdu->rb_size, pusch_pdu->rb_start, bw);
dci_pdu_rel15->time_domain_assignment.val = time_domain_assignment;
dci_pdu_rel15->frequency_hopping_flag.val = pusch_pdu->frequency_hopping;
dci_pdu_rel15->mcs = pusch_pdu->mcs_index;
dci_pdu_rel15->format_indicator = 0;
dci_pdu_rel15->ndi = pusch_pdu->pusch_data.new_data_indicator;
dci_pdu_rel15->rv = pusch_pdu->pusch_data.rv_index;
dci_pdu_rel15->harq_pid = pusch_pdu->pusch_data.harq_process_id;
dci_pdu_rel15->tpc = tpc;
break;
case NR_UL_DCI_FORMAT_0_1:
dci_pdu_rel15->ndi = pusch_pdu->pusch_data.new_data_indicator;
dci_pdu_rel15->rv = pusch_pdu->pusch_data.rv_index;
dci_pdu_rel15->harq_pid = pusch_pdu->pusch_data.harq_process_id;
dci_pdu_rel15->frequency_hopping_flag.val = pusch_pdu->frequency_hopping;
dci_pdu_rel15->dai[0].val = 0; //TODO
// bwp indicator
if (n_ubwp < 4)
dci_pdu_rel15->bwp_indicator.val = bwp_id;
else
dci_pdu_rel15->bwp_indicator.val = bwp_id - 1; // as per table 7.3.1.1.2-1 in 38.212
// frequency domain assignment
AssertFatal(ubwp->bwp_Dedicated->pusch_Config->choice.setup->resourceAllocation
== NR_PUSCH_Config__resourceAllocation_resourceAllocationType1,
"Only frequency resource allocation type 1 is currently supported\n");
dci_pdu_rel15->frequency_domain_assignment.val =
PRBalloc_to_locationandbandwidth0(pusch_pdu->rb_size, pusch_pdu->rb_start, bw);
// time domain assignment
dci_pdu_rel15->time_domain_assignment.val = time_domain_assignment;
// mcs
dci_pdu_rel15->mcs = pusch_pdu->mcs_index;
// tpc command for pusch
dci_pdu_rel15->tpc = tpc;
// SRS resource indicator
if (ubwp->bwp_Dedicated->pusch_Config->choice.setup->txConfig != NULL) {
AssertFatal(*ubwp->bwp_Dedicated->pusch_Config->choice.setup->txConfig == NR_PUSCH_Config__txConfig_codebook,
"Non Codebook configuration non supported\n");
dci_pdu_rel15->srs_resource_indicator.val = 0; // taking resource 0 for SRS
}
// Antenna Ports
dci_pdu_rel15->antenna_ports.val = 0; // TODO for now it is hardcoded, it should depends on cdm group no data and rank
// DMRS sequence initialization
dci_pdu_rel15->dmrs_sequence_initialization.val = pusch_pdu->scid;
break;
default :
AssertFatal(0, "Valid UL formats are 0_0 and 0_1\n");
}
LOG_D(MAC,
"%s() ULDCI type 0 payload: PDCCH CCEIndex %d, freq_alloc %d, "
"time_alloc %d, freq_hop_flag %d, mcs %d tpc %d ndi %d rv %d\n",
__func__,
pdcch_pdu_rel15->dci_pdu.CceIndex[pdcch_pdu_rel15->numDlDci],
dci_pdu_rel15->frequency_domain_assignment.val,
dci_pdu_rel15->time_domain_assignment.val,
dci_pdu_rel15->frequency_hopping_flag.val,
dci_pdu_rel15->mcs,
dci_pdu_rel15->tpc,
dci_pdu_rel15->ndi,
dci_pdu_rel15->rv);
}
void nr_configure_pdcch(gNB_MAC_INST *nr_mac, void nr_configure_pdcch(gNB_MAC_INST *nr_mac,
nfapi_nr_dl_tti_pdcch_pdu_rel15_t *pdcch_pdu, nfapi_nr_dl_tti_pdcch_pdu_rel15_t *pdcch_pdu,
uint16_t rnti, uint16_t rnti,
...@@ -1721,20 +1865,17 @@ int add_new_nr_ue(module_id_t mod_idP, rnti_t rntiP){ ...@@ -1721,20 +1865,17 @@ int add_new_nr_ue(module_id_t mod_idP, rnti_t rntiP){
UE_info->UE_sched_ctrl[UE_id].ta_update = 31; UE_info->UE_sched_ctrl[UE_id].ta_update = 31;
UE_info->UE_sched_ctrl[UE_id].ta_apply = false; UE_info->UE_sched_ctrl[UE_id].ta_apply = false;
UE_info->UE_sched_ctrl[UE_id].ul_rssi = 0; UE_info->UE_sched_ctrl[UE_id].ul_rssi = 0;
/* set illegal time domain allocation to force recomputation of all fields */
UE_info->UE_sched_ctrl[UE_id].pusch_save.time_domain_allocation = -1;
UE_info->UE_sched_ctrl[UE_id].sched_pucch = (NR_sched_pucch **)malloc(num_slots_ul*sizeof(NR_sched_pucch *)); UE_info->UE_sched_ctrl[UE_id].sched_pucch = (NR_sched_pucch **)malloc(num_slots_ul*sizeof(NR_sched_pucch *));
for (int s=0; s<num_slots_ul;s++) for (int s=0; s<num_slots_ul;s++)
UE_info->UE_sched_ctrl[UE_id].sched_pucch[s] = (NR_sched_pucch *)malloc(2*sizeof(NR_sched_pucch)); UE_info->UE_sched_ctrl[UE_id].sched_pucch[s] = (NR_sched_pucch *)malloc(2*sizeof(NR_sched_pucch));
UE_info->UE_sched_ctrl[UE_id].sched_pusch = (NR_sched_pusch *)malloc(num_slots_ul*sizeof(NR_sched_pusch));
for (int k=0; k<num_slots_ul; k++) { for (int k=0; k<num_slots_ul; k++) {
for (int l=0; l<2; l++) for (int l=0; l<2; l++)
memset((void *) &UE_info->UE_sched_ctrl[UE_id].sched_pucch[k][l], memset((void *) &UE_info->UE_sched_ctrl[UE_id].sched_pucch[k][l],
0, 0,
sizeof(NR_sched_pucch)); sizeof(NR_sched_pucch));
memset((void *) &UE_info->UE_sched_ctrl[UE_id].sched_pusch[k],
0,
sizeof(NR_sched_pusch));
} }
LOG_I(MAC, "gNB %d] Add NR UE_id %d : rnti %x\n", LOG_I(MAC, "gNB %d] Add NR UE_id %d : rnti %x\n",
mod_idP, mod_idP,
...@@ -1776,7 +1917,6 @@ void mac_remove_nr_ue(module_id_t mod_id, rnti_t rnti) ...@@ -1776,7 +1917,6 @@ void mac_remove_nr_ue(module_id_t mod_id, rnti_t rnti)
UE_info->rnti[UE_id] = 0; UE_info->rnti[UE_id] = 0;
remove_nr_ue_list(&UE_info->list, UE_id); remove_nr_ue_list(&UE_info->list, UE_id);
free(UE_info->UE_sched_ctrl[UE_id].sched_pucch); free(UE_info->UE_sched_ctrl[UE_id].sched_pucch);
free(UE_info->UE_sched_ctrl[UE_id].sched_pusch);
memset((void *) &UE_info->UE_sched_ctrl[UE_id], memset((void *) &UE_info->UE_sched_ctrl[UE_id],
0, 0,
sizeof(NR_UE_sched_ctrl_t)); sizeof(NR_UE_sched_ctrl_t));
......
...@@ -39,51 +39,59 @@ void nr_schedule_pucch(int Mod_idP, ...@@ -39,51 +39,59 @@ void nr_schedule_pucch(int Mod_idP,
int nr_ulmix_slots, int nr_ulmix_slots,
frame_t frameP, frame_t frameP,
sub_frame_t slotP) { sub_frame_t slotP) {
uint16_t O_csi, O_ack, O_uci;
uint8_t O_sr = 0; // no SR in PUCCH implemented for now
NR_ServingCellConfigCommon_t *scc = RC.nrmac[Mod_idP]->common_channels->ServingCellConfigCommon;
NR_UE_info_t *UE_info = &RC.nrmac[Mod_idP]->UE_info; NR_UE_info_t *UE_info = &RC.nrmac[Mod_idP]->UE_info;
AssertFatal(UE_info->active[UE_id],"Cannot find UE_id %d is not active\n",UE_id); AssertFatal(UE_info->active[UE_id],"Cannot find UE_id %d is not active\n",UE_id);
NR_CellGroupConfig_t *secondaryCellGroup = UE_info->secondaryCellGroup[UE_id];
int bwp_id=1;
NR_BWP_Uplink_t *ubwp=secondaryCellGroup->spCellConfig->spCellConfigDedicated->uplinkConfig->uplinkBWP_ToAddModList->list.array[bwp_id-1];
nfapi_nr_ul_tti_request_t *UL_tti_req = &RC.nrmac[Mod_idP]->UL_tti_req[0];
NR_sched_pucch *curr_pucch;
for (int k=0; k<nr_ulmix_slots; k++) { for (int k=0; k<nr_ulmix_slots; k++) {
for (int l=0; l<2; l++) { for (int l=0; l<2; l++) {
curr_pucch = &UE_info->UE_sched_ctrl[UE_id].sched_pucch[k][l];
O_ack = curr_pucch->dai_c;
O_csi = curr_pucch->csi_bits;
O_uci = O_ack + O_csi + O_sr;
if ((O_uci>0) && (frameP == curr_pucch->frame) && (slotP == curr_pucch->ul_slot)) {
  UL_tti_req->SFN = curr_pucch->frame;
  UL_tti_req->Slot = curr_pucch->ul_slot;
  UL_tti_req->pdus_list[UL_tti_req->n_pdus].pdu_type = NFAPI_NR_UL_CONFIG_PUCCH_PDU_TYPE;
  UL_tti_req->pdus_list[UL_tti_req->n_pdus].pdu_size = sizeof(nfapi_nr_pucch_pdu_t);
  nfapi_nr_pucch_pdu_t *pucch_pdu = &UL_tti_req->pdus_list[UL_tti_req->n_pdus].pucch_pdu;
  memset(pucch_pdu,0,sizeof(nfapi_nr_pucch_pdu_t));
  UL_tti_req->n_pdus+=1;
  LOG_D(MAC,"Scheduling pucch reception for frame %d slot %d with (%d, %d, %d) (SR ACK, CSI) bits\n",
        frameP,slotP,O_sr,O_ack,curr_pucch->csi_bits);
  nr_configure_pucch(pucch_pdu,
                     scc,
                     ubwp,
                     UE_info->rnti[UE_id],
                     curr_pucch->resource_indicator,
                     O_csi,
                     O_ack,
                     O_sr);
  memset((void *) &UE_info->UE_sched_ctrl[UE_id].sched_pucch[k][l],
         0,
         sizeof(NR_sched_pucch));
}
NR_sched_pucch *curr_pucch = &UE_info->UE_sched_ctrl[UE_id].sched_pucch[k][l];
const uint16_t O_ack = curr_pucch->dai_c;
const uint16_t O_csi = curr_pucch->csi_bits;
const uint8_t O_sr = 0; // no SR in PUCCH implemented for now
if (O_ack + O_csi + O_sr == 0
    || frameP != curr_pucch->frame
    || slotP != curr_pucch->ul_slot)
  continue;
nfapi_nr_ul_tti_request_t *future_ul_tti_req =
    &RC.nrmac[Mod_idP]->UL_tti_req_ahead[0][curr_pucch->ul_slot];
AssertFatal(future_ul_tti_req->SFN == curr_pucch->frame
            && future_ul_tti_req->Slot == curr_pucch->ul_slot,
            "future UL_tti_req's frame.slot %d.%d does not match PUCCH %d.%d\n",
            future_ul_tti_req->SFN,
            future_ul_tti_req->Slot,
            curr_pucch->frame,
            curr_pucch->ul_slot);
future_ul_tti_req->pdus_list[future_ul_tti_req->n_pdus].pdu_type = NFAPI_NR_UL_CONFIG_PUCCH_PDU_TYPE;
future_ul_tti_req->pdus_list[future_ul_tti_req->n_pdus].pdu_size = sizeof(nfapi_nr_pucch_pdu_t);
nfapi_nr_pucch_pdu_t *pucch_pdu = &future_ul_tti_req->pdus_list[future_ul_tti_req->n_pdus].pucch_pdu;
memset(pucch_pdu, 0, sizeof(nfapi_nr_pucch_pdu_t));
future_ul_tti_req->n_pdus += 1;
LOG_D(MAC,
      "%4d.%2d Scheduling pucch reception in %4d.%2d: bits SR %d, ACK %d, CSI %d, k %d l %d\n",
      frameP,
      slotP,
      curr_pucch->frame,
      curr_pucch->ul_slot,
      O_sr,
      O_ack,
      O_csi,
      k, l);
NR_ServingCellConfigCommon_t *scc = RC.nrmac[Mod_idP]->common_channels->ServingCellConfigCommon;
nr_configure_pucch(pucch_pdu,
                   scc,
                   UE_info->UE_sched_ctrl[UE_id].active_ubwp,
                   UE_info->rnti[UE_id],
                   curr_pucch->resource_indicator,
                   O_csi,
                   O_ack,
                   O_sr);
memset(&UE_info->UE_sched_ctrl[UE_id].sched_pucch[k][l],
       0,
       sizeof(NR_sched_pucch));
} }
} }
} }
......
...@@ -31,7 +31,7 @@ ...@@ -31,7 +31,7 @@
#include "LAYER2/NR_MAC_gNB/mac_proto.h" #include "LAYER2/NR_MAC_gNB/mac_proto.h"
#include "executables/softmodem-common.h" #include "executables/softmodem-common.h"
//#define ENABLE_MAC_PAYLOAD_DEBUG 1
#include "common/utils/nr/nr_common.h"
void nr_process_mac_pdu( void nr_process_mac_pdu(
...@@ -399,6 +399,11 @@ void nr_rx_sdu(const module_id_t gnb_mod_idP, ...@@ -399,6 +399,11 @@ void nr_rx_sdu(const module_id_t gnb_mod_idP,
bwpList->list.count); bwpList->list.count);
const int bwp_id = 1; const int bwp_id = 1;
UE_info->UE_sched_ctrl[UE_id].active_bwp = bwpList->list.array[bwp_id - 1]; UE_info->UE_sched_ctrl[UE_id].active_bwp = bwpList->list.array[bwp_id - 1];
struct NR_UplinkConfig__uplinkBWP_ToAddModList *ubwpList = ra->secondaryCellGroup->spCellConfig->spCellConfigDedicated->uplinkConfig->uplinkBWP_ToAddModList;
AssertFatal(ubwpList->list.count == 1,
"uplinkBWP_ToAddModList has %d BWP!\n",
ubwpList->list.count);
UE_info->UE_sched_ctrl[UE_id].active_ubwp = ubwpList->list.array[bwp_id - 1];
LOG_I(MAC, LOG_I(MAC,
"[gNB %d][RAPROC] PUSCH with TC_RNTI %x received correctly, " "[gNB %d][RAPROC] PUSCH with TC_RNTI %x received correctly, "
"adding UE MAC Context UE_id %d/RNTI %04x\n", "adding UE MAC Context UE_id %d/RNTI %04x\n",
...@@ -418,5 +423,385 @@ void nr_rx_sdu(const module_id_t gnb_mod_idP, ...@@ -418,5 +423,385 @@ void nr_rx_sdu(const module_id_t gnb_mod_idP,
return; return;
} }
} }
}
long get_K2(NR_BWP_Uplink_t *ubwp, int time_domain_assignment, int mu) {
DevAssert(ubwp);
const NR_PUSCH_TimeDomainResourceAllocation_t *tda_list = ubwp->bwp_Common->pusch_ConfigCommon->choice.setup->pusch_TimeDomainAllocationList->list.array[time_domain_assignment];
if (tda_list->k2)
return *tda_list->k2;
else if (mu < 2)
return 1;
else if (mu == 2)
return 2;
else
return 3;
}
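A minimal worked example (editor's sketch, not part of this commit) of how the K2 value returned above selects the PUSCH slot; the numerology and slot values are assumed, and the computation mirrors nr_simple_ulsch_preprocessor() below:
/* assumed inputs: mu = 1 (30 kHz SCS), slot = 7, num_slots_per_tdd = 10,
 * and a TDA entry without an explicit k2, so get_K2() returns 1 */
const int K2 = get_K2(sched_ctrl->active_ubwp, tda, mu);          // -> 1
const int sched_slot  = (slot + K2) % num_slots_per_tdd;          // -> 8: PUSCH one slot after the DCI
const int sched_frame = frame + (slot + K2 >= num_slots_per_tdd); // -> same frame in this example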
int8_t select_ul_harq_pid(NR_UE_sched_ctrl_t *sched_ctrl) {
const uint8_t max_ul_harq_pids = 3; // temp: for testing
// schedule active harq processes
for (uint8_t hrq_id = 0; hrq_id < max_ul_harq_pids; hrq_id++) {
NR_UE_ul_harq_t *cur_harq = &sched_ctrl->ul_harq_processes[hrq_id];
if (cur_harq->state == ACTIVE_NOT_SCHED) {
LOG_D(MAC, "Found ulharq id %d, scheduling it for retransmission\n", hrq_id);
return hrq_id;
}
}
// schedule new harq processes
for (uint8_t hrq_id=0; hrq_id < max_ul_harq_pids; hrq_id++) {
NR_UE_ul_harq_t *cur_harq = &sched_ctrl->ul_harq_processes[hrq_id];
if (cur_harq->state == INACTIVE) {
LOG_D(MAC, "Found new ulharq id %d, scheduling it\n", hrq_id);
return hrq_id;
}
}
LOG_E(MAC, "All UL HARQ processes are busy. Cannot schedule ULSCH\n");
return -1;
}
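A short usage sketch (editor's addition, not part of this commit): the selected HARQ PID drives the NDI/RV fields of the nFAPI PUSCH PDU exactly as nr_schedule_ulsch() does further down; pusch_pdu stands for the PDU being filled and error handling is reduced to a bare return:
int8_t harq_id = select_ul_harq_pid(sched_ctrl);
if (harq_id < 0)
  return; // all UL HARQ processes busy, nothing can be scheduled this TTI
NR_UE_ul_harq_t *cur_harq = &sched_ctrl->ul_harq_processes[harq_id];
cur_harq->state = ACTIVE_SCHED; // the process has now been scheduled
pusch_pdu->pusch_data.harq_process_id = harq_id;
pusch_pdu->pusch_data.new_data_indicator = cur_harq->ndi;
pusch_pdu->pusch_data.rv_index = nr_rv_round_map[cur_harq->round];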
void nr_simple_ulsch_preprocessor(module_id_t module_id,
frame_t frame,
sub_frame_t slot,
int num_slots_per_tdd,
uint64_t ulsch_in_slot_bitmap) {
gNB_MAC_INST *nr_mac = RC.nrmac[module_id];
NR_COMMON_channels_t *cc = nr_mac->common_channels;
NR_ServingCellConfigCommon_t *scc = cc->ServingCellConfigCommon;
const int mu = scc->uplinkConfigCommon->initialUplinkBWP->genericParameters.subcarrierSpacing;
NR_UE_info_t *UE_info = &nr_mac->UE_info;
AssertFatal(UE_info->num_UEs <= 1,
"%s() cannot handle more than one UE, but found %d\n",
__func__,
UE_info->num_UEs);
if (UE_info->num_UEs == 0)
return;
const int UE_id = 0;
const int CC_id = 0;
NR_UE_sched_ctrl_t *sched_ctrl = &UE_info->UE_sched_ctrl[UE_id];
const int tda = 1;
const struct NR_PUSCH_TimeDomainResourceAllocationList *tdaList =
sched_ctrl->active_ubwp->bwp_Common->pusch_ConfigCommon->choice.setup->pusch_TimeDomainAllocationList;
AssertFatal(tda < tdaList->list.count,
"time domain assignment %d >= %d\n",
tda,
tdaList->list.count);
int K2 = get_K2(sched_ctrl->active_ubwp, tda, mu);
const int sched_frame = frame + (slot + K2 >= num_slots_per_tdd);
const int sched_slot = (slot + K2) % num_slots_per_tdd;
if (!is_xlsch_in_slot(ulsch_in_slot_bitmap, sched_slot))
return;
/* get first, largest unallocated region */
uint16_t *vrb_map_UL =
&RC.nrmac[module_id]->common_channels[CC_id].vrb_map_UL[sched_slot * 275];
uint16_t rbStart = 0;
while (vrb_map_UL[rbStart]) rbStart++;
const uint16_t bwpSize = NRRIV2BW(sched_ctrl->active_ubwp->bwp_Common->genericParameters.locationAndBandwidth,275);
uint16_t rbSize = 1;
while (rbStart + rbSize < bwpSize && !vrb_map_UL[rbStart+rbSize])
rbSize++;
sched_ctrl->sched_pusch.slot = sched_slot;
sched_ctrl->sched_pusch.frame = sched_frame;
const int target_ss = NR_SearchSpace__searchSpaceType_PR_ue_Specific;
sched_ctrl->search_space = get_searchspace(sched_ctrl->active_bwp, target_ss);
uint8_t nr_of_candidates;
find_aggregation_candidates(&sched_ctrl->aggregation_level,
&nr_of_candidates,
sched_ctrl->search_space);
sched_ctrl->coreset = get_coreset(
sched_ctrl->active_bwp, sched_ctrl->search_space, 1 /* dedicated */);
const int cid = sched_ctrl->coreset->controlResourceSetId;
const uint16_t Y = UE_info->Y[UE_id][cid][slot];
const int m = UE_info->num_pdcch_cand[UE_id][cid];
sched_ctrl->cce_index = allocate_nr_CCEs(RC.nrmac[module_id],
sched_ctrl->active_bwp,
sched_ctrl->coreset,
sched_ctrl->aggregation_level,
Y,
m,
nr_of_candidates);
if (sched_ctrl->cce_index < 0) {
LOG_E(MAC, "%s(): CCE list not empty, couldn't schedule PUSCH\n", __func__);
return;
}
UE_info->num_pdcch_cand[UE_id][cid]++;
sched_ctrl->sched_pusch.time_domain_allocation = tda;
const long f = sched_ctrl->search_space->searchSpaceType->choice.ue_Specific->dci_Formats;
const int dci_format = f ? NR_UL_DCI_FORMAT_0_1 : NR_UL_DCI_FORMAT_0_0;
const uint8_t num_dmrs_cdm_grps_no_data = 1;
/* we want to avoid a lengthy deduction of DMRS and other parameters in
* every TTI if we can save it, so check whether dci_format, TDA, or
* num_dmrs_cdm_grps_no_data has changed and only then recompute */
NR_sched_pusch_save_t *ps = &sched_ctrl->pusch_save;
if (ps->time_domain_allocation != tda
|| ps->dci_format != dci_format
|| ps->num_dmrs_cdm_grps_no_data != num_dmrs_cdm_grps_no_data)
nr_save_pusch_fields(scc,
sched_ctrl->active_ubwp,
dci_format,
tda,
num_dmrs_cdm_grps_no_data,
ps);
const int mcs = 9;
NR_sched_pusch_t *sched_pusch = &sched_ctrl->sched_pusch;
sched_pusch->mcs = mcs;
sched_pusch->rbStart = rbStart;
sched_pusch->rbSize = rbSize;
/* Calculate TBS from MCS */
sched_pusch->R = nr_get_code_rate_ul(mcs, ps->mcs_table);
sched_pusch->Qm = nr_get_Qm_ul(mcs, ps->mcs_table);
if (ps->pusch_Config->tp_pi2BPSK
&& ((ps->mcs_table == 3 && mcs < 2) || (ps->mcs_table == 4 && mcs < 6))) {
sched_pusch->R >>= 1;
sched_pusch->Qm <<= 1;
}
sched_pusch->tb_size = nr_compute_tbs(sched_pusch->Qm,
sched_pusch->R,
sched_pusch->rbSize,
ps->nrOfSymbols,
ps->N_PRB_DMRS * ps->num_dmrs_symb,
0, // nb_rb_oh
0,
1 /* NrOfLayers */)
>> 3;
/* mark the corresponding RBs as used */
for (int rb = 0; rb < sched_ctrl->sched_pusch.rbSize; rb++)
vrb_map_UL[rb + sched_ctrl->sched_pusch.rbStart] = 1;
}
void nr_schedule_ulsch(module_id_t module_id,
frame_t frame,
sub_frame_t slot,
int num_slots_per_tdd,
int ul_slots,
uint64_t ulsch_in_slot_bitmap) {
RC.nrmac[module_id]->pre_processor_ul(
module_id, frame, slot, num_slots_per_tdd, ulsch_in_slot_bitmap);
NR_ServingCellConfigCommon_t *scc = RC.nrmac[module_id]->common_channels[0].ServingCellConfigCommon;
NR_UE_info_t *UE_info = &RC.nrmac[module_id]->UE_info;
const NR_UE_list_t *UE_list = &UE_info->list;
for (int UE_id = UE_list->head; UE_id >= 0; UE_id = UE_list->next[UE_id]) {
NR_UE_sched_ctrl_t *sched_ctrl = &UE_info->UE_sched_ctrl[UE_id];
/* dynamic PUSCH values (RB alloc, MCS, hence R, Qm, TBS) that change in
* every TTI are pre-populated by the preprocessor and used below */
NR_sched_pusch_t *sched_pusch = &sched_ctrl->sched_pusch;
if (sched_pusch->rbSize <= 0)
continue;
uint16_t rnti = UE_info->rnti[UE_id];
int8_t harq_id = select_ul_harq_pid(sched_ctrl);
if (harq_id < 0) return;
NR_UE_ul_harq_t *cur_harq = &sched_ctrl->ul_harq_processes[harq_id];
cur_harq->state = ACTIVE_SCHED;
cur_harq->last_tx_slot = sched_pusch->slot;
int rnti_types[2] = { NR_RNTI_C, 0 };
/* pre-computed PUSCH values that only change if time domain allocation,
* DCI format, or DMRS parameters change. Updated in the preprocessor
* through nr_save_pusch_fields() */
NR_sched_pusch_save_t *ps = &sched_ctrl->pusch_save;
/* Statistics */
UE_info->mac_stats[UE_id].ulsch_rounds[cur_harq->round]++;
if (cur_harq->round == 0) {
UE_info->mac_stats[UE_id].ulsch_total_bytes_scheduled += sched_pusch->tb_size;
} else {
LOG_W(MAC,
"%d.%2d UL retransmission RNTI %04x sched %d.%2d HARQ PID %d round %d NDI %d\n",
frame,
slot,
rnti,
sched_pusch->frame,
sched_pusch->slot,
harq_id,
cur_harq->round,
cur_harq->ndi);
}
LOG_D(MAC,
"%4d.%2d RNTI %04x UL sched %4d.%2d start %d RBS %d MCS %d TBS %d HARQ PID %d round %d NDI %d\n",
frame,
slot,
rnti,
sched_pusch->frame,
sched_pusch->slot,
sched_pusch->rbStart,
sched_pusch->rbSize,
sched_pusch->mcs,
sched_pusch->tb_size,
harq_id,
cur_harq->round,
cur_harq->ndi);
/* PUSCH in a later slot, but corresponding DCI now! */
nfapi_nr_ul_tti_request_t *future_ul_tti_req = &RC.nrmac[module_id]->UL_tti_req_ahead[0][sched_pusch->slot];
AssertFatal(future_ul_tti_req->SFN == sched_pusch->frame
&& future_ul_tti_req->Slot == sched_pusch->slot,
"%d.%d future UL_tti_req's frame.slot %d.%d does not match PUSCH %d.%d\n",
frame, slot,
future_ul_tti_req->SFN,
future_ul_tti_req->Slot,
sched_pusch->frame,
sched_pusch->slot);
future_ul_tti_req->pdus_list[future_ul_tti_req->n_pdus].pdu_type = NFAPI_NR_UL_CONFIG_PUSCH_PDU_TYPE;
future_ul_tti_req->pdus_list[future_ul_tti_req->n_pdus].pdu_size = sizeof(nfapi_nr_pusch_pdu_t);
nfapi_nr_pusch_pdu_t *pusch_pdu = &future_ul_tti_req->pdus_list[future_ul_tti_req->n_pdus].pusch_pdu;
memset(pusch_pdu, 0, sizeof(nfapi_nr_pusch_pdu_t));
future_ul_tti_req->n_pdus += 1;
LOG_D(MAC, "%4d.%2d Scheduling UE specific PUSCH\n", frame, slot);
pusch_pdu->pdu_bit_map = PUSCH_PDU_BITMAP_PUSCH_DATA;
pusch_pdu->rnti = rnti;
pusch_pdu->handle = 0; //not yet used
/* FAPI: BWP */
pusch_pdu->bwp_size = NRRIV2BW(sched_ctrl->active_ubwp->bwp_Common->genericParameters.locationAndBandwidth,275);
pusch_pdu->bwp_start = NRRIV2PRBOFFSET(sched_ctrl->active_ubwp->bwp_Common->genericParameters.locationAndBandwidth,275);
pusch_pdu->subcarrier_spacing = sched_ctrl->active_ubwp->bwp_Common->genericParameters.subcarrierSpacing;
pusch_pdu->cyclic_prefix = 0;
/* FAPI: PUSCH information always included */
pusch_pdu->target_code_rate = sched_pusch->R;
pusch_pdu->qam_mod_order = sched_pusch->Qm;
pusch_pdu->mcs_index = sched_pusch->mcs;
pusch_pdu->mcs_table = ps->mcs_table;
pusch_pdu->transform_precoding = ps->transform_precoding;
if (ps->pusch_Config->dataScramblingIdentityPUSCH)
pusch_pdu->data_scrambling_id = *ps->pusch_Config->dataScramblingIdentityPUSCH;
else
pusch_pdu->data_scrambling_id = *scc->physCellId;
pusch_pdu->nrOfLayers = 1;
/* FAPI: DMRS */
pusch_pdu->ul_dmrs_symb_pos = ps->ul_dmrs_symb_pos;
pusch_pdu->dmrs_config_type = ps->dmrs_config_type;
if (pusch_pdu->transform_precoding) { // transform precoding disabled
long *scramblingid;
if (pusch_pdu->scid == 0)
scramblingid = ps->NR_DMRS_UplinkConfig->transformPrecodingDisabled->scramblingID0;
else
scramblingid = ps->NR_DMRS_UplinkConfig->transformPrecodingDisabled->scramblingID1;
if (scramblingid == NULL)
pusch_pdu->ul_dmrs_scrambling_id = *scc->physCellId;
else
pusch_pdu->ul_dmrs_scrambling_id = *scramblingid;
}
else {
pusch_pdu->ul_dmrs_scrambling_id = *scc->physCellId;
if (ps->NR_DMRS_UplinkConfig->transformPrecodingEnabled->nPUSCH_Identity != NULL)
pusch_pdu->pusch_identity = *ps->NR_DMRS_UplinkConfig->transformPrecodingEnabled->nPUSCH_Identity;
else
pusch_pdu->pusch_identity = *scc->physCellId;
}
pusch_pdu->scid = 0; // DMRS sequence initialization [TS38.211, sec 6.4.1.1.1]
pusch_pdu->num_dmrs_cdm_grps_no_data = ps->num_dmrs_cdm_grps_no_data;
pusch_pdu->dmrs_ports = 1;
/* FAPI: Pusch Allocation in frequency domain */
AssertFatal(ps->pusch_Config->resourceAllocation == NR_PUSCH_Config__resourceAllocation_resourceAllocationType1,
"Only frequency resource allocation type 1 is currently supported\n");
pusch_pdu->resource_alloc = 1; //type 1
pusch_pdu->rb_start = sched_pusch->rbStart;
pusch_pdu->rb_size = sched_pusch->rbSize;
pusch_pdu->vrb_to_prb_mapping = 0;
if (ps->pusch_Config->frequencyHopping==NULL)
pusch_pdu->frequency_hopping = 0;
else
pusch_pdu->frequency_hopping = 1;
/* FAPI: Resource Allocation in time domain */
pusch_pdu->start_symbol_index = ps->startSymbolIndex;
pusch_pdu->nr_of_symbols = ps->nrOfSymbols;
/* PUSCH PDU */
pusch_pdu->pusch_data.rv_index = nr_rv_round_map[cur_harq->round];
pusch_pdu->pusch_data.harq_process_id = harq_id;
pusch_pdu->pusch_data.new_data_indicator = cur_harq->ndi;
pusch_pdu->pusch_data.tb_size = sched_pusch->tb_size;
pusch_pdu->pusch_data.num_cb = 0; //CBG not supported
/* PUSCH PTRS */
if (ps->NR_DMRS_UplinkConfig->phaseTrackingRS != NULL) {
// TODO to be fixed from RRC config
uint8_t ptrs_mcs1 = 2; // higher layer parameter in PTRS-UplinkConfig
uint8_t ptrs_mcs2 = 4; // higher layer parameter in PTRS-UplinkConfig
uint8_t ptrs_mcs3 = 10; // higher layer parameter in PTRS-UplinkConfig
uint16_t n_rb0 = 25; // higher layer parameter in PTRS-UplinkConfig
uint16_t n_rb1 = 75; // higher layer parameter in PTRS-UplinkConfig
pusch_pdu->pusch_ptrs.ptrs_time_density = get_L_ptrs(ptrs_mcs1, ptrs_mcs2, ptrs_mcs3, pusch_pdu->mcs_index, pusch_pdu->mcs_table);
pusch_pdu->pusch_ptrs.ptrs_freq_density = get_K_ptrs(n_rb0, n_rb1, pusch_pdu->rb_size);
pusch_pdu->pusch_ptrs.ptrs_ports_list = (nfapi_nr_ptrs_ports_t *) malloc(2*sizeof(nfapi_nr_ptrs_ports_t));
pusch_pdu->pusch_ptrs.ptrs_ports_list[0].ptrs_re_offset = 0;
pusch_pdu->pdu_bit_map |= PUSCH_PDU_BITMAP_PUSCH_PTRS; // enable PUSCH PTRS
}
else{
pusch_pdu->pdu_bit_map &= ~PUSCH_PDU_BITMAP_PUSCH_PTRS; // disable PUSCH PTRS
}
nfapi_nr_ul_dci_request_t *ul_dci_req = &RC.nrmac[module_id]->UL_dci_req[0];
ul_dci_req->SFN = frame;
ul_dci_req->Slot = slot;
nfapi_nr_ul_dci_request_pdus_t *ul_dci_request_pdu = &ul_dci_req->ul_dci_pdu_list[ul_dci_req->numPdus];
memset(ul_dci_request_pdu, 0, sizeof(nfapi_nr_ul_dci_request_pdus_t));
ul_dci_request_pdu->PDUType = NFAPI_NR_DL_TTI_PDCCH_PDU_TYPE;
ul_dci_request_pdu->PDUSize = (uint8_t)(2+sizeof(nfapi_nr_dl_tti_pdcch_pdu));
nfapi_nr_dl_tti_pdcch_pdu_rel15_t *pdcch_pdu_rel15 = &ul_dci_request_pdu->pdcch_pdu.pdcch_pdu_rel15;
ul_dci_req->numPdus += 1;
LOG_D(MAC,"Configuring ULDCI/PDCCH in %d.%d\n", frame,slot);
nr_configure_pdcch(RC.nrmac[0],
pdcch_pdu_rel15,
rnti,
sched_ctrl->search_space,
sched_ctrl->coreset,
scc,
sched_ctrl->active_bwp,
sched_ctrl->aggregation_level,
sched_ctrl->cce_index);
dci_pdu_rel15_t dci_pdu_rel15[MAX_DCI_CORESET];
memset(dci_pdu_rel15, 0, sizeof(dci_pdu_rel15));
NR_CellGroupConfig_t *secondaryCellGroup = UE_info->secondaryCellGroup[UE_id];
const int n_ubwp = secondaryCellGroup->spCellConfig->spCellConfigDedicated->uplinkConfig->uplinkBWP_ToAddModList->list.count;
// NOTE: below functions assume that dci_formats is an array corresponding
// to all UL DCIs in the PDCCH, but for us it is a simple int. So before
// having multiple UEs, the below need to be changed (IMO the functions
// should fill for one DCI only and not handle all of them).
config_uldci(sched_ctrl->active_ubwp,
pusch_pdu,
pdcch_pdu_rel15,
&dci_pdu_rel15[0],
&ps->dci_format,
ps->time_domain_allocation,
UE_info->UE_sched_ctrl[UE_id].tpc0,
n_ubwp,
sched_ctrl->active_bwp->bwp_Id);
fill_dci_pdu_rel15(scc,
secondaryCellGroup,
pdcch_pdu_rel15,
dci_pdu_rel15,
&ps->dci_format,
rnti_types,
pusch_pdu->bwp_size,
sched_ctrl->active_bwp->bwp_Id);
memset(sched_pusch, 0, sizeof(*sched_pusch));
}
} }
...@@ -88,6 +88,20 @@ void nr_simple_dlsch_preprocessor(module_id_t module_id, ...@@ -88,6 +88,20 @@ void nr_simple_dlsch_preprocessor(module_id_t module_id,
void schedule_nr_mib(module_id_t module_idP, frame_t frameP, sub_frame_t subframeP, uint8_t slots_per_frame); void schedule_nr_mib(module_id_t module_idP, frame_t frameP, sub_frame_t subframeP, uint8_t slots_per_frame);
/// uplink scheduler
void nr_schedule_ulsch(module_id_t module_id,
frame_t frame,
sub_frame_t slot,
int num_slots_per_tdd,
int ul_slots,
uint64_t ulsch_in_slot_bitmap);
void nr_simple_ulsch_preprocessor(module_id_t module_id,
frame_t frame,
sub_frame_t slot,
int num_slots_per_tdd,
uint64_t ulsch_in_slot_bitmap);
/////// Random Access MAC-PHY interface functions and primitives /////// /////// Random Access MAC-PHY interface functions and primitives ///////
void nr_schedule_RA(module_id_t module_idP, frame_t frameP, sub_frame_t slotP); void nr_schedule_RA(module_id_t module_idP, frame_t frameP, sub_frame_t slotP);
...@@ -107,7 +121,9 @@ void nr_clear_ra_proc(module_id_t module_idP, int CC_id, frame_t frameP); ...@@ -107,7 +121,9 @@ void nr_clear_ra_proc(module_id_t module_idP, int CC_id, frame_t frameP);
int nr_allocate_CCEs(int module_idP, int CC_idP, frame_t frameP, sub_frame_t slotP, int test_only); int nr_allocate_CCEs(int module_idP, int CC_idP, frame_t frameP, sub_frame_t slotP, int test_only);
void nr_get_Msg3alloc(NR_ServingCellConfigCommon_t *scc,
void nr_get_Msg3alloc(module_id_t module_id,
int CC_id,
NR_ServingCellConfigCommon_t *scc,
NR_BWP_Uplink_t *ubwp, NR_BWP_Uplink_t *ubwp,
sub_frame_t current_subframe, sub_frame_t current_subframe,
frame_t current_frame, frame_t current_frame,
...@@ -136,6 +152,13 @@ void nr_preprocessor_phytest(module_id_t module_id, ...@@ -136,6 +152,13 @@ void nr_preprocessor_phytest(module_id_t module_id,
frame_t frame, frame_t frame,
sub_frame_t slot, sub_frame_t slot,
int num_slots_per_tdd); int num_slots_per_tdd);
/* \brief UL preprocessor for phytest: schedules UE_id 0 with fixed MCS on a
* fixed set of resources */
void nr_ul_preprocessor_phytest(module_id_t module_id,
frame_t frame,
sub_frame_t slot,
int num_slots_per_tdd,
uint64_t ulsch_in_slot_bitmap);
void nr_schedule_css_dlsch_phytest(module_id_t module_idP, void nr_schedule_css_dlsch_phytest(module_id_t module_idP,
frame_t frameP, frame_t frameP,
...@@ -166,17 +189,10 @@ void config_uldci(NR_BWP_Uplink_t *ubwp, ...@@ -166,17 +189,10 @@ void config_uldci(NR_BWP_Uplink_t *ubwp,
nfapi_nr_pusch_pdu_t *pusch_pdu, nfapi_nr_pusch_pdu_t *pusch_pdu,
nfapi_nr_dl_tti_pdcch_pdu_rel15_t *pdcch_pdu_rel15, nfapi_nr_dl_tti_pdcch_pdu_rel15_t *pdcch_pdu_rel15,
dci_pdu_rel15_t *dci_pdu_rel15, dci_pdu_rel15_t *dci_pdu_rel15,
int *dci_formats, int *rnti_types,
int *dci_formats,
int time_domain_assignment, uint8_t tpc, int time_domain_assignment, uint8_t tpc,
int n_ubwp, int bwp_id); int n_ubwp, int bwp_id);
void nr_schedule_pusch(int Mod_idP,
int UE_id,
int num_slots_per_tdd,
int ul_slots,
frame_t frameP,
sub_frame_t slotP);
void nr_schedule_pucch(int Mod_idP, void nr_schedule_pucch(int Mod_idP,
int UE_id, int UE_id,
int nr_ulmix_slots, int nr_ulmix_slots,
...@@ -279,6 +295,15 @@ void find_aggregation_candidates(uint8_t *aggregation_level, ...@@ -279,6 +295,15 @@ void find_aggregation_candidates(uint8_t *aggregation_level,
uint8_t *nr_of_candidates, uint8_t *nr_of_candidates,
NR_SearchSpace_t *ss); NR_SearchSpace_t *ss);
long get_K2(NR_BWP_Uplink_t *ubwp, int time_domain_assignment, int mu);
void nr_save_pusch_fields(const NR_ServingCellConfigCommon_t *scc,
const NR_BWP_Uplink_t *ubwp,
long dci_format,
int tda,
uint8_t num_dmrs_cdm_grps_no_data,
NR_sched_pusch_save_t *ps);
uint8_t nr_get_tpc(int target, uint8_t cqi, int incr); uint8_t nr_get_tpc(int target, uint8_t cqi, int incr);
int get_spf(nfapi_nr_config_request_scf_t *cfg); int get_spf(nfapi_nr_config_request_scf_t *cfg);
...@@ -353,16 +378,6 @@ void nr_generate_Msg2(module_id_t module_idP, ...@@ -353,16 +378,6 @@ void nr_generate_Msg2(module_id_t module_idP,
frame_t frameP, frame_t frameP,
sub_frame_t slotP); sub_frame_t slotP);
void nr_schedule_reception_msg3(module_id_t module_idP, int CC_id, frame_t frameP, sub_frame_t slotP);
void schedule_fapi_ul_pdu(int Mod_idP,
frame_t frameP,
sub_frame_t slotP,
int num_slots_per_tdd,
int ul_slots,
int time_domain_assignment,
uint64_t ulsch_in_slot_bitmap);
void nr_process_mac_pdu( void nr_process_mac_pdu(
module_id_t module_idP, module_id_t module_idP,
rnti_t rnti, rnti_t rnti,
......
...@@ -82,10 +82,13 @@ void mac_top_init_gNB(void) ...@@ -82,10 +82,13 @@ void mac_top_init_gNB(void)
RC.nrmac[i]->ul_handle = 0; RC.nrmac[i]->ul_handle = 0;
if (get_softmodem_params()->phy_test)
  RC.nrmac[i]->pre_processor_dl = nr_preprocessor_phytest;
else
  RC.nrmac[i]->pre_processor_dl = nr_simple_dlsch_preprocessor;
if (get_softmodem_params()->phy_test) {
  RC.nrmac[i]->pre_processor_dl = nr_preprocessor_phytest;
  RC.nrmac[i]->pre_processor_ul = nr_ul_preprocessor_phytest;
} else {
  RC.nrmac[i]->pre_processor_dl = nr_simple_dlsch_preprocessor;
  RC.nrmac[i]->pre_processor_ul = nr_simple_ulsch_preprocessor;
}
}//END for (i = 0; i < RC.nb_nr_macrlc_inst; i++) }//END for (i = 0; i < RC.nb_nr_macrlc_inst; i++)
......
...@@ -142,8 +142,6 @@ typedef struct { ...@@ -142,8 +142,6 @@ typedef struct {
uint8_t msg3_cqireq; uint8_t msg3_cqireq;
/// Round of Msg3 HARQ /// Round of Msg3 HARQ
uint8_t msg3_round; uint8_t msg3_round;
/// Msg3 pusch pdu
nfapi_nr_pusch_pdu_t pusch_pdu;
/// TBS used for Msg4 /// TBS used for Msg4
int msg4_TBsize; int msg4_TBsize;
/// MCS used for Msg4 /// MCS used for Msg4
...@@ -192,8 +190,9 @@ typedef struct { ...@@ -192,8 +190,9 @@ typedef struct {
NR_RA_t ra[NR_NB_RA_PROC_MAX]; NR_RA_t ra[NR_NB_RA_PROC_MAX];
/// VRB map for common channels /// VRB map for common channels
uint16_t vrb_map[275]; uint16_t vrb_map[275];
/// VRB map for common channels and retransmissions by PHICH
uint16_t vrb_map_UL[275];
/// VRB map for common channels and PUSCH, dynamically allocated because
/// length depends on number of slots and RBs
uint16_t *vrb_map_UL;
/// number of subframe allocation pattern available for MBSFN sync area /// number of subframe allocation pattern available for MBSFN sync area
uint8_t num_sf_allocation_pattern; uint8_t num_sf_allocation_pattern;
///Number of active SSBs ///Number of active SSBs
...@@ -285,12 +284,49 @@ typedef struct NR_sched_pucch { ...@@ -285,12 +284,49 @@ typedef struct NR_sched_pucch {
uint8_t resource_indicator; uint8_t resource_indicator;
} NR_sched_pucch; } NR_sched_pucch;
/* this struct is a helper: as long as the TDA and DCI format remain the same
* over the same uBWP and search space, there is no need to recalculate all
* S/L, MCS table, or DMRS-related parameters over and over again. Hence, we
* store them in this struct for easy reference. */
typedef struct NR_sched_pusch_save {
int dci_format;
int time_domain_allocation;
uint8_t num_dmrs_cdm_grps_no_data;
int startSymbolIndex;
int nrOfSymbols;
NR_PUSCH_Config_t *pusch_Config;
uint8_t transform_precoding;
uint8_t mcs_table;
long mapping_type;
NR_DMRS_UplinkConfig_t *NR_DMRS_UplinkConfig;
uint16_t dmrs_config_type;
uint16_t ul_dmrs_symb_pos;
uint8_t num_dmrs_symb;
uint8_t N_PRB_DMRS;
} NR_sched_pusch_save_t;
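To make the caching idea concrete, a sketch (editor's addition, not part of this commit) of how the preprocessor is expected to use this struct, combining the staleness check from nr_simple_ulsch_preprocessor() with the -1 initialization done in add_new_nr_ue():
/* at UE creation: force a recomputation in the first TTI */
UE_info->UE_sched_ctrl[UE_id].pusch_save.time_domain_allocation = -1;
/* in the UL preprocessor: recompute the saved fields only when a key changes */
NR_sched_pusch_save_t *ps = &sched_ctrl->pusch_save;
if (ps->time_domain_allocation != tda
    || ps->dci_format != dci_format
    || ps->num_dmrs_cdm_grps_no_data != num_dmrs_cdm_grps_no_data)
  nr_save_pusch_fields(scc, sched_ctrl->active_ubwp, dci_format, tda,
                       num_dmrs_cdm_grps_no_data, ps);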
typedef struct NR_sched_pusch { typedef struct NR_sched_pusch {
int frame; int frame;
int slot; int slot;
bool active;
nfapi_nr_pusch_pdu_t pusch_pdu;
} NR_sched_pusch;
/// RB allocation within active uBWP
uint16_t rbSize;
uint16_t rbStart;
// time-domain allocation for scheduled RBs
int time_domain_allocation;
/// MCS
uint8_t mcs;
/// TBS-related info
uint16_t R;
uint8_t Qm;
uint32_t tb_size;
} NR_sched_pusch_t;
typedef struct NR_UE_harq { typedef struct NR_UE_harq {
uint8_t is_waiting; uint8_t is_waiting;
...@@ -348,11 +384,15 @@ typedef struct { ...@@ -348,11 +384,15 @@ typedef struct {
/// the currently active BWP in DL /// the currently active BWP in DL
NR_BWP_Downlink_t *active_bwp; NR_BWP_Downlink_t *active_bwp;
/// the currently active BWP in UL
NR_BWP_Uplink_t *active_ubwp;
NR_sched_pucch **sched_pucch; NR_sched_pucch **sched_pucch;
/// selected PUCCH index, if scheduled /// selected PUCCH index, if scheduled
int pucch_sched_idx; int pucch_sched_idx;
int pucch_occ_idx; int pucch_occ_idx;
NR_sched_pusch *sched_pusch;
NR_sched_pusch_save_t pusch_save;
NR_sched_pusch_t sched_pusch;
/// CCE index and aggregation, should be coherent with cce_list /// CCE index and aggregation, should be coherent with cce_list
NR_SearchSpace_t *search_space; NR_SearchSpace_t *search_space;
...@@ -381,7 +421,6 @@ typedef struct { ...@@ -381,7 +421,6 @@ typedef struct {
uint8_t tpc0; uint8_t tpc0;
uint8_t tpc1; uint8_t tpc1;
uint16_t ul_rssi; uint16_t ul_rssi;
uint8_t current_harq_pid;
NR_UE_harq_t harq_processes[NR_MAX_NB_HARQ_PROCESSES]; NR_UE_harq_t harq_processes[NR_MAX_NB_HARQ_PROCESSES];
NR_UE_ul_harq_t ul_harq_processes[NR_MAX_NB_HARQ_PROCESSES]; NR_UE_ul_harq_t ul_harq_processes[NR_MAX_NB_HARQ_PROCESSES];
int dummy; int dummy;
...@@ -441,6 +480,11 @@ typedef void (*nr_pp_impl_dl)(module_id_t mod_id, ...@@ -441,6 +480,11 @@ typedef void (*nr_pp_impl_dl)(module_id_t mod_id,
frame_t frame, frame_t frame,
sub_frame_t slot, sub_frame_t slot,
int num_slots_per_tdd); int num_slots_per_tdd);
typedef void (*nr_pp_impl_ul)(module_id_t mod_id,
frame_t frame,
sub_frame_t slot,
int num_slots_per_tdd,
uint64_t ulsch_in_slot_bitmap);
/*! \brief top level eNB MAC structure */ /*! \brief top level eNB MAC structure */
typedef struct gNB_MAC_INST_s { typedef struct gNB_MAC_INST_s {
...@@ -467,8 +511,12 @@ typedef struct gNB_MAC_INST_s { ...@@ -467,8 +511,12 @@ typedef struct gNB_MAC_INST_s {
nfapi_nr_config_request_scf_t config[NFAPI_CC_MAX]; nfapi_nr_config_request_scf_t config[NFAPI_CC_MAX];
/// NFAPI DL Config Request Structure /// NFAPI DL Config Request Structure
nfapi_nr_dl_tti_request_t DL_req[NFAPI_CC_MAX]; nfapi_nr_dl_tti_request_t DL_req[NFAPI_CC_MAX];
/// NFAPI UL TTI Request Structure (this is from the new SCF specs)
nfapi_nr_ul_tti_request_t UL_tti_req[NFAPI_CC_MAX];
/// NFAPI UL TTI Request Structure, simple pointer into structure
/// UL_tti_req_ahead for current frame/slot
nfapi_nr_ul_tti_request_t *UL_tti_req[NFAPI_CC_MAX];
/// NFAPI UL TTI Request Structure for future TTIs, dynamically allocated
/// because length depends on number of slots
nfapi_nr_ul_tti_request_t *UL_tti_req_ahead[NFAPI_CC_MAX];
/// NFAPI HI/DCI0 Config Request Structure /// NFAPI HI/DCI0 Config Request Structure
nfapi_nr_ul_dci_request_t UL_dci_req[NFAPI_CC_MAX]; nfapi_nr_ul_dci_request_t UL_dci_req[NFAPI_CC_MAX];
/// NFAPI DL PDU structure /// NFAPI DL PDU structure
...@@ -502,11 +550,11 @@ typedef struct gNB_MAC_INST_s { ...@@ -502,11 +550,11 @@ typedef struct gNB_MAC_INST_s {
time_stats_t schedule_pch; time_stats_t schedule_pch;
/// CCE lists /// CCE lists
int cce_list[MAX_NUM_BWP][MAX_NUM_CORESET][MAX_NUM_CCE]; int cce_list[MAX_NUM_BWP][MAX_NUM_CORESET][MAX_NUM_CCE];
/// current slot
int current_slot;
/// DL preprocessor for differentiated scheduling /// DL preprocessor for differentiated scheduling
nr_pp_impl_dl pre_processor_dl; nr_pp_impl_dl pre_processor_dl;
/// UL preprocessor for differentiated scheduling
nr_pp_impl_ul pre_processor_ul;
} gNB_MAC_INST; } gNB_MAC_INST;
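For reference, a sketch (editor's addition, not part of this commit) of how the UL_tti_req/UL_tti_req_ahead pair declared above is used by the scheduler code in this patch; the per-slot sizing of UL_tti_req_ahead is an assumption, as its allocation is not shown in this diff:
/* fill the UL TTI request of the (future) slot in which the PUSCH/PUCCH happens */
nfapi_nr_ul_tti_request_t *future_ul_tti_req =
    &RC.nrmac[module_id]->UL_tti_req_ahead[0][sched_pusch->slot];
future_ul_tti_req->pdus_list[future_ul_tti_req->n_pdus].pdu_type = NFAPI_NR_UL_CONFIG_PUSCH_PDU_TYPE;
future_ul_tti_req->n_pdus += 1;
/* UL_tti_req[CC_id] itself is only a pointer handed to the PHY for the current
 * slot, cf. sched_info->UL_tti_req = mac->UL_tti_req[CC_id] in NR_UL_indication() */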
#endif /*__LAYER2_NR_MAC_GNB_H__ */ #endif /*__LAYER2_NR_MAC_GNB_H__ */
...@@ -216,8 +216,6 @@ void NR_UL_indication(NR_UL_IND_t *UL_info) { ...@@ -216,8 +216,6 @@ void NR_UL_indication(NR_UL_IND_t *UL_info) {
ifi->CC_mask |= (1<<CC_id); ifi->CC_mask |= (1<<CC_id);
} }
// clear DL/UL info for new scheduling round
clear_nr_nfapi_information(mac,CC_id,UL_info->frame,UL_info->slot);
handle_nr_rach(UL_info); handle_nr_rach(UL_info);
handle_nr_uci(UL_info,&mac->UE_info.UE_sched_ctrl[0],&mac->UE_info.mac_stats[0],mac->pucch_target_snrx10); handle_nr_uci(UL_info,&mac->UE_info.UE_sched_ctrl[0],&mac->UE_info.mac_stats[0],mac->pucch_target_snrx10);
...@@ -246,7 +244,7 @@ void NR_UL_indication(NR_UL_IND_t *UL_info) { ...@@ -246,7 +244,7 @@ void NR_UL_indication(NR_UL_IND_t *UL_info) {
sched_info->DL_req = &mac->DL_req[CC_id]; sched_info->DL_req = &mac->DL_req[CC_id];
sched_info->UL_dci_req = &mac->UL_dci_req[CC_id]; sched_info->UL_dci_req = &mac->UL_dci_req[CC_id];
sched_info->UL_tti_req = &mac->UL_tti_req[CC_id];
sched_info->UL_tti_req = mac->UL_tti_req[CC_id];
sched_info->TX_req = &mac->TX_req[CC_id]; sched_info->TX_req = &mac->TX_req[CC_id];
#ifdef DUMP_FAPI #ifdef DUMP_FAPI
......