Commit 35276a7a authored by sujathabanoth-xlnx

QDMA DPDK and Linux Driver 2022.1 Release

parent 78599578
RELEASE: 2020.2.1
=================
RELEASE: 2022.1
===============
This release is based on DPDK v20.11 and contains QDMA poll mode driver and
QDMA test application.
This release is validated for VCU1525 and U200 devices on QDMA4.0 2020.2 based example design
and QDMA3.1 2020.2 based example design.
This release is validated:
- On VCU1525 with the QDMA4.0 2020.2 example design
- On XCVP1202 with the CPM5 2022.1 example design
This release includes a patch file for dpdk-pktgen v20.12.0 that extends the
dpdk-pktgen application to handle packet sizes larger than 1518 bytes and
disables the packet size classification logic in dpdk-pktgen to remove
application overhead in performance measurement. This patch is used for
performance testing with the dpdk-pktgen application.
The driver is validated against dpdk-pktgen and testpmd applications for API compliance.
SUPPORTED FEATURES:
......@@ -89,12 +90,63 @@ SUPPORTED FEATURES:
----------------
- Migrated qdma dpdk driver to use DPDK framework v20.11
KNOWN ISSUES:
=============
- Function Level Reset (FLR) of a PF device while VFs are attached to it results in mailbox communication failure
- DPDK C2H and forwarding performance for 8 queues is lower than for 4 queues, for both PF and VF.
2022.1 Updates
--------------
CPM5
- FMAP context dump
- Debug register dump for ST and MM Errors
- Dual Instance support
KNOWN ISSUES:
=============
- CPM5 Only
  - Sufficient host memory is required to accommodate 4K queues. The driver supports 4K queues, but only up to 2048 queues have been tested in our test environment.
  - Tandem Boot support is not yet complete
- All Designs
  - Function Level Reset (FLR) of a PF device while VFs are attached to it results in mailbox communication failure
  - DPDK C2H and forwarding performance for 8 queues is lower than for 4 queues, for both PF and VF.
DRIVER LIMITATIONS:
===================
- Big endian systems are not supported
- For optimal QDMA streaming performance, packet buffers of the descriptor ring should be aligned to at least 256 bytes.
\ No newline at end of file
- CPM5 Only
  - VF functionality is verified with 240 VFs, per the CPM5 HW limitation
- All Designs
  - Big endian systems are not supported
  - For optimal QDMA streaming performance, packet buffers of the descriptor ring should be aligned to at least 256 bytes.
/*-
* BSD LICENSE
*
* Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
* * Neither the name of the copyright holder nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
# BSD LICENSE
#
# Copyright(c) 2021 Xilinx, Inc. All rights reserved.
# Copyright(c) 2021-2022 Xilinx, Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
......@@ -35,9 +35,10 @@ includes += include_directories('.')
includes += include_directories('qdma_access')
includes += include_directories('qdma_access/qdma_soft_access')
includes += include_directories('qdma_access/eqdma_soft_access')
includes += include_directories('qdma_access/qdma_s80_hard_access')
includes += include_directories('qdma_access/qdma_cpm4_access')
includes += include_directories('qdma_access/eqdma_cpm5_access')
headers = files('rte_pmd_qdma.h')
headers += files('rte_pmd_qdma.h')
deps += ['mempool_ring']
......@@ -51,8 +52,10 @@ sources = files(
'qdma_user.c',
'qdma_access/eqdma_soft_access/eqdma_soft_access.c',
'qdma_access/eqdma_soft_access/eqdma_soft_reg_dump.c',
'qdma_access/qdma_s80_hard_access/qdma_s80_hard_access.c',
'qdma_access/qdma_s80_hard_access/qdma_s80_hard_reg_dump.c',
'qdma_access/qdma_cpm4_access/qdma_cpm4_access.c',
'qdma_access/qdma_cpm4_access/qdma_cpm4_reg_dump.c',
'qdma_access/eqdma_cpm5_access/eqdma_cpm5_access.c',
'qdma_access/eqdma_cpm5_access/eqdma_cpm5_reg_dump.c',
'qdma_access/qdma_soft_access/qdma_soft_access.c',
'qdma_access/qdma_list.c',
'qdma_access/qdma_resource_mgmt.c',
......
/*-
* BSD LICENSE
*
* Copyright(c) 2017-2021 Xilinx, Inc. All rights reserved.
* Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......@@ -220,7 +220,7 @@ struct qdma_rx_queue {
/**< pend_pkt_avg_thr_lo: lower average threshold */
unsigned int pend_pkt_avg_thr_lo;
/**< sorted_c2h_cntr_idx: sorted c2h counter index */
unsigned char sorted_c2h_cntr_idx;
int8_t sorted_c2h_cntr_idx;
/**< c2h_cntr_monitor_cnt: c2h counter stagnant monitor count */
unsigned char c2h_cntr_monitor_cnt;
#endif //QDMA_LATENCY_OPTIMIZED
......@@ -285,7 +285,7 @@ struct qdma_pci_dev {
/* Driver Attributes */
uint32_t qsets_en; /* no. of queue pairs enabled */
uint32_t queue_base;
uint8_t func_id; /* Function id */
uint16_t func_id; /* Function id */
/* DMA identifier used by the resource manager
* for the DMA instances used by this driver
......@@ -298,6 +298,9 @@ struct qdma_pci_dev {
uint8_t cmpt_desc_len;
uint8_t c2h_bypass_mode;
uint8_t h2c_bypass_mode;
#ifdef TANDEM_BOOT_SUPPORTED
uint8_t en_st_mode;
#endif
uint8_t trigger_mode;
uint8_t timer_count;
......
/*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* BSD LICENSE
*
......@@ -59,6 +59,7 @@ enum eqdma_error_idx {
EQDMA_DSC_ERR_RQ_CANCEL,
EQDMA_DSC_ERR_DBE,
EQDMA_DSC_ERR_SBE,
EQDMA_DSC_ERR_PORT_ID,
EQDMA_DSC_ERR_ALL,
/* TRQ Errors */
......@@ -87,6 +88,7 @@ enum eqdma_error_idx {
EQDMA_ST_C2H_ERR_AVL_RING_DSC,
EQDMA_ST_C2H_ERR_HDR_ECC_UNC,
EQDMA_ST_C2H_ERR_HDR_ECC_COR,
EQDMA_ST_C2H_ERR_WRB_PORT_ID_ERR,
EQDMA_ST_C2H_ERR_ALL,
/* Fatal Errors */
......@@ -197,6 +199,27 @@ enum eqdma_error_idx {
EQDMA_DBE_ERR_RC_RRQ_ODD_RAM,
EQDMA_DBE_ERR_ALL,
/* MM C2H Errors */
EQDMA_MM_C2H_WR_SLR_ERR,
EQDMA_MM_C2H_RD_SLR_ERR,
EQDMA_MM_C2H_WR_FLR_ERR,
EQDMA_MM_C2H_UR_ERR,
EQDMA_MM_C2H_WR_UC_RAM_ERR,
EQDMA_MM_C2H_ERR_ALL,
/* MM H2C Engine0 Errors */
EQDMA_MM_H2C0_RD_HDR_POISON_ERR,
EQDMA_MM_H2C0_RD_UR_CA_ERR,
EQDMA_MM_H2C0_RD_HDR_BYTE_ERR,
EQDMA_MM_H2C0_RD_HDR_PARAM_ERR,
EQDMA_MM_H2C0_RD_HDR_ADR_ERR,
EQDMA_MM_H2C0_RD_FLR_ERR,
EQDMA_MM_H2C0_RD_DAT_POISON_ERR,
EQDMA_MM_H2C0_RD_RQ_DIS_ERR,
EQDMA_MM_H2C0_WR_DEC_ERR,
EQDMA_MM_H2C0_WR_SLV_ERR,
EQDMA_MM_H2C0_ERR_ALL,
EQDMA_ERRS_ALL
};
......@@ -245,6 +268,10 @@ int eqdma_cmpt_ctx_conf(void *dev_hndl, uint16_t hw_qid,
struct qdma_descq_cmpt_ctxt *ctxt,
enum qdma_hw_access_type access_type);
int eqdma_fmap_conf(void *dev_hndl, uint16_t func_id,
struct qdma_fmap_cfg *config,
enum qdma_hw_access_type access_type);
int eqdma_indirect_intr_ctx_conf(void *dev_hndl, uint16_t ring_index,
struct qdma_indirect_intr_ctxt *ctxt,
enum qdma_hw_access_type access_type);
......@@ -273,6 +300,7 @@ const char *eqdma_hw_get_error_name(uint32_t err_idx);
int eqdma_hw_error_enable(void *dev_hndl, uint32_t err_idx);
int eqdma_read_dump_queue_context(void *dev_hndl,
uint16_t func_id,
uint16_t qid_hw,
uint8_t st,
enum qdma_dev_q_type q_type,
......@@ -282,7 +310,7 @@ int eqdma_get_device_attributes(void *dev_hndl,
struct qdma_dev_attributes *dev_info);
int eqdma_get_user_bar(void *dev_hndl, uint8_t is_vf,
uint8_t func_id, uint8_t *user_bar);
uint16_t func_id, uint8_t *user_bar);
int eqdma_dump_config_reg_list(void *dev_hndl,
uint32_t total_regs,
......
/*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* BSD LICENSE
*
......
/*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* BSD LICENSE
*
......
/*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* BSD LICENSE
*
......@@ -92,7 +92,8 @@ static inline uint32_t get_trailing_zeros(uint64_t value)
/* CSR Default values */
#define DEFAULT_MAX_DSC_FETCH 6
#define DEFAULT_WRB_INT QDMA_WRB_INTERVAL_128
#define DEFAULT_PFCH_STOP_THRESH 256
/* Default values for 0xB08 */
#define DEFAULT_PFCH_NUM_ENTRIES_PER_Q 8
#define DEFAULT_PFCH_MAX_Q_CNT 16
#define DEFAULT_C2H_INTR_TIMER_TICK 25
......@@ -633,18 +634,18 @@ int hw_monitor_reg(void *dev_hndl, uint32_t reg, uint32_t mask,
void qdma_memset(void *to, uint8_t val, uint32_t size);
int qdma_acc_reg_dump_buf_len(void *dev_hndl,
enum qdma_ip_type ip_type, int *buflen);
int qdma_acc_reg_dump_buf_len(void *dev_hndl, enum qdma_ip_type ip_type,
enum qdma_device_type device_type, int *buflen);
int qdma_acc_reg_info_len(void *dev_hndl,
enum qdma_ip_type ip_type, int *buflen, int *num_regs);
int qdma_acc_reg_info_len(void *dev_hndl, enum qdma_ip_type ip_type,
enum qdma_device_type device_type, int *buflen, int *num_regs);
int qdma_acc_context_buf_len(void *dev_hndl,
enum qdma_ip_type ip_type, uint8_t st,
int qdma_acc_context_buf_len(void *dev_hndl, enum qdma_ip_type ip_type,
enum qdma_device_type device_type, uint8_t st,
enum qdma_dev_q_type q_type, uint32_t *buflen);
int qdma_acc_get_num_config_regs(void *dev_hndl,
enum qdma_ip_type ip_type, uint32_t *num_regs);
int qdma_acc_get_num_config_regs(void *dev_hndl, enum qdma_ip_type ip_type,
enum qdma_device_type device_type, uint32_t *num_regs);
/*
* struct qdma_hw_access - Structure to hold HW access function pointers
......@@ -700,8 +701,8 @@ struct qdma_hw_access {
int (*qdma_mm_channel_conf)(void *dev_hndl, uint8_t channel,
uint8_t is_c2h, uint8_t enable);
int (*qdma_get_user_bar)(void *dev_hndl, uint8_t is_vf,
uint8_t func_id, uint8_t *user_bar);
int (*qdma_get_function_number)(void *dev_hndl, uint8_t *func_id);
uint16_t func_id, uint8_t *user_bar);
int (*qdma_get_function_number)(void *dev_hndl, uint16_t *func_id);
int (*qdma_get_version)(void *dev_hndl, uint8_t is_vf,
struct qdma_hw_version_info *version_info);
int (*qdma_get_device_attributes)(void *dev_hndl,
......@@ -725,6 +726,7 @@ struct qdma_hw_access {
struct qdma_descq_context *ctxt_data,
char *buf, uint32_t buflen);
int (*qdma_read_dump_queue_context)(void *dev_hndl,
uint16_t func_id,
uint16_t qid_hw,
uint8_t st,
enum qdma_dev_q_type q_type,
......@@ -747,6 +749,9 @@ struct qdma_hw_access {
uint32_t num_regs,
struct qdma_reg_data *reg_list,
char *buf, uint32_t buflen);
#ifdef TANDEM_BOOT_SUPPORTED
int (*qdma_init_st_ctxt)(void *dev_hndl);
#endif
uint32_t mbox_base_pf;
uint32_t mbox_base_vf;
uint32_t qdma_max_errors;
......@@ -773,17 +778,19 @@ int qdma_hw_access_init(void *dev_hndl, uint8_t is_vf,
/*****************************************************************************/
/**
* qdma_acc_dump_config_regs() - Function to get qdma config registers
* qdma_acc_get_config_regs() - Function to get qdma config registers
*
* @dev_hndl: device handle
* @is_vf: Whether PF or VF
* @ip_type: QDMA IP Type
* @device_type: QDMA DEVICE Type
* @reg_data: pointer to register data to be filled
*
* Return: Length of buffer filled on success and < 0 on failure
*****************************************************************************/
int qdma_acc_get_config_regs(void *dev_hndl, uint8_t is_vf,
enum qdma_ip_type ip_type,
enum qdma_device_type device_type,
uint32_t *reg_data);
/*****************************************************************************/
......@@ -794,6 +801,7 @@ int qdma_acc_get_config_regs(void *dev_hndl, uint8_t is_vf,
* @dev_hndl: device handle
* @is_vf: Whether PF or VF
* @ip_type: QDMA IP Type
* @device_type: QDMA DEVICE Type
* @buf : pointer to buffer to be filled
* @buflen : Length of the buffer
*
......@@ -801,6 +809,7 @@ int qdma_acc_get_config_regs(void *dev_hndl, uint8_t is_vf,
*****************************************************************************/
int qdma_acc_dump_config_regs(void *dev_hndl, uint8_t is_vf,
enum qdma_ip_type ip_type,
enum qdma_device_type device_type,
char *buf, uint32_t buflen);
/*****************************************************************************/
......@@ -809,6 +818,7 @@ int qdma_acc_dump_config_regs(void *dev_hndl, uint8_t is_vf,
*
* @dev_hndl: device handle
* @ip_type: QDMA IP Type
* @device_type: QDMA DEVICE Type
* @reg_addr: Register Address
* @num_regs: Number of Registers
* @buf : pointer to buffer to be filled
......@@ -816,8 +826,8 @@ int qdma_acc_dump_config_regs(void *dev_hndl, uint8_t is_vf,
*
* Return: Length of buffer filled on success and < 0 on failure
*****************************************************************************/
int qdma_acc_dump_reg_info(void *dev_hndl,
enum qdma_ip_type ip_type, uint32_t reg_addr,
int qdma_acc_dump_reg_info(void *dev_hndl, enum qdma_ip_type ip_type,
enum qdma_device_type device_type, uint32_t reg_addr,
uint32_t num_regs, char *buf, uint32_t buflen);
/*****************************************************************************/
......@@ -828,6 +838,7 @@ int qdma_acc_dump_reg_info(void *dev_hndl,
*
* @dev_hndl: device handle
* @ip_type: QDMA IP Type
* @device_type: QDMA DEVICE Type
* @st: ST or MM
* @q_type: Queue Type
* @ctxt_data: Context Data
......@@ -838,6 +849,7 @@ int qdma_acc_dump_reg_info(void *dev_hndl,
*****************************************************************************/
int qdma_acc_dump_queue_context(void *dev_hndl,
enum qdma_ip_type ip_type,
enum qdma_device_type device_type,
uint8_t st,
enum qdma_dev_q_type q_type,
struct qdma_descq_context *ctxt_data,
......@@ -850,6 +862,7 @@ int qdma_acc_dump_queue_context(void *dev_hndl,
*
* @dev_hndl: device handle
* @ip_type: QDMA IP Type
* @device_type: QDMA DEVICE Type
* @qid_hw: queue id
* @st: ST or MM
* @q_type: Queue Type
......@@ -860,6 +873,8 @@ int qdma_acc_dump_queue_context(void *dev_hndl,
*****************************************************************************/
int qdma_acc_read_dump_queue_context(void *dev_hndl,
enum qdma_ip_type ip_type,
enum qdma_device_type device_type,
uint16_t func_id,
uint16_t qid_hw,
uint8_t st,
enum qdma_dev_q_type q_type,
......@@ -872,6 +887,7 @@ int qdma_acc_read_dump_queue_context(void *dev_hndl,
*
* @dev_hndl: device handle
* @ip_type: QDMA IP Type
* @device_type: QDMA DEVICE Type
* @total_regs : Max registers to read
* @reg_list : array of reg addr and reg values
* @buf : pointer to buffer to be filled
......@@ -881,6 +897,7 @@ int qdma_acc_read_dump_queue_context(void *dev_hndl,
*****************************************************************************/
int qdma_acc_dump_config_reg_list(void *dev_hndl,
enum qdma_ip_type ip_type,
enum qdma_device_type device_type,
uint32_t num_regs,
struct qdma_reg_data *reg_list,
char *buf, uint32_t buflen);
......
/*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* BSD LICENSE
*
......
/*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* BSD LICENSE
*
......@@ -226,6 +226,10 @@ enum qdma_vivado_release_id {
QDMA_VIVADO_2020_1,
/** @QDMA_VIVADO_2020_2 - Vivado version 2020.2 */
QDMA_VIVADO_2020_2,
/** @QDMA_VIVADO_2021_1 - Vivado version 2021.1 */
QDMA_VIVADO_2021_1,
/** @QDMA_VIVADO_2022_1 - Vivado version 2022.1 */
QDMA_VIVADO_2022_1,
/** @QDMA_VIVADO_NONE - Not a valid Vivado version*/
QDMA_VIVADO_NONE
};
......@@ -247,8 +251,10 @@ enum qdma_ip_type {
enum qdma_device_type {
/** @QDMA_DEVICE_SOFT - UltraScale+ IP's */
QDMA_DEVICE_SOFT,
/** @QDMA_DEVICE_VERSAL -VERSAL IP */
QDMA_DEVICE_VERSAL,
/** @QDMA_DEVICE_VERSAL_CPM4 - Versal CPM4 IP */
QDMA_DEVICE_VERSAL_CPM4,
/** @QDMA_DEVICE_VERSAL_CPM5 - Versal CPM5 IP */
QDMA_DEVICE_VERSAL_CPM5,
/** @QDMA_DEVICE_NONE - Not a valid device */
QDMA_DEVICE_NONE
};
......
/*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* BSD LICENSE
*
......@@ -34,8 +34,8 @@
#define __QDMA_ACCESS_VERSION_H_
#define QDMA_VERSION_MAJOR 2020
#define QDMA_VERSION_MINOR 2
#define QDMA_VERSION_MAJOR 2022
#define QDMA_VERSION_MINOR 1
#define QDMA_VERSION_PATCH 0
#define QDMA_VERSION_STR \
......
/*
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* BSD LICENSE
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
* * Neither the name of the copyright holder nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef __QDMA_CPM4_ACCESS_H_
#define __QDMA_CPM4_ACCESS_H_
#ifdef __cplusplus
extern "C" {
#endif
#include "qdma_platform.h"
/**
* enum qdma_cpm4_error_idx - qdma CPM4 error indices
*/
enum qdma_cpm4_error_idx {
/* Descriptor errors */
QDMA_CPM4_DSC_ERR_POISON,
QDMA_CPM4_DSC_ERR_UR_CA,
QDMA_CPM4_DSC_ERR_PARAM,
QDMA_CPM4_DSC_ERR_ADDR,
QDMA_CPM4_DSC_ERR_TAG,
QDMA_CPM4_DSC_ERR_FLR,
QDMA_CPM4_DSC_ERR_TIMEOUT,
QDMA_CPM4_DSC_ERR_DAT_POISON,
QDMA_CPM4_DSC_ERR_FLR_CANCEL,
QDMA_CPM4_DSC_ERR_DMA,
QDMA_CPM4_DSC_ERR_DSC,
QDMA_CPM4_DSC_ERR_RQ_CANCEL,
QDMA_CPM4_DSC_ERR_DBE,
QDMA_CPM4_DSC_ERR_SBE,
QDMA_CPM4_DSC_ERR_ALL,
/* TRQ Errors */
QDMA_CPM4_TRQ_ERR_UNMAPPED,
QDMA_CPM4_TRQ_ERR_QID_RANGE,
QDMA_CPM4_TRQ_ERR_VF_ACCESS_ERR,
QDMA_CPM4_TRQ_ERR_TCP_TIMEOUT,
QDMA_CPM4_TRQ_ERR_ALL,
/* C2H Errors */
QDMA_CPM4_ST_C2H_ERR_MTY_MISMATCH,
QDMA_CPM4_ST_C2H_ERR_LEN_MISMATCH,
QDMA_CPM4_ST_C2H_ERR_QID_MISMATCH,
QDMA_CPM4_ST_C2H_ERR_DESC_RSP_ERR,
QDMA_CPM4_ST_C2H_ERR_ENG_WPL_DATA_PAR_ERR,
QDMA_CPM4_ST_C2H_ERR_MSI_INT_FAIL,
QDMA_CPM4_ST_C2H_ERR_ERR_DESC_CNT,
QDMA_CPM4_ST_C2H_ERR_PORTID_CTXT_MISMATCH,
QDMA_CPM4_ST_C2H_ERR_PORTID_BYP_IN_MISMATCH,
QDMA_CPM4_ST_C2H_ERR_WRB_INV_Q_ERR,
QDMA_CPM4_ST_C2H_ERR_WRB_QFULL_ERR,
QDMA_CPM4_ST_C2H_ERR_WRB_CIDX_ERR,
QDMA_CPM4_ST_C2H_ERR_WRB_PRTY_ERR,
QDMA_CPM4_ST_C2H_ERR_ALL,
/* Fatal Errors */
QDMA_CPM4_ST_FATAL_ERR_MTY_MISMATCH,
QDMA_CPM4_ST_FATAL_ERR_LEN_MISMATCH,
QDMA_CPM4_ST_FATAL_ERR_QID_MISMATCH,
QDMA_CPM4_ST_FATAL_ERR_TIMER_FIFO_RAM_RDBE,
QDMA_CPM4_ST_FATAL_ERR_PFCH_II_RAM_RDBE,
QDMA_CPM4_ST_FATAL_ERR_WRB_CTXT_RAM_RDBE,
QDMA_CPM4_ST_FATAL_ERR_PFCH_CTXT_RAM_RDBE,
QDMA_CPM4_ST_FATAL_ERR_DESC_REQ_FIFO_RAM_RDBE,
QDMA_CPM4_ST_FATAL_ERR_INT_CTXT_RAM_RDBE,
QDMA_CPM4_ST_FATAL_ERR_INT_QID2VEC_RAM_RDBE,
QDMA_CPM4_ST_FATAL_ERR_WRB_COAL_DATA_RAM_RDBE,
QDMA_CPM4_ST_FATAL_ERR_TUSER_FIFO_RAM_RDBE,
QDMA_CPM4_ST_FATAL_ERR_QID_FIFO_RAM_RDBE,
QDMA_CPM4_ST_FATAL_ERR_PAYLOAD_FIFO_RAM_RDBE,
QDMA_CPM4_ST_FATAL_ERR_WPL_DATA_PAR_ERR,
QDMA_CPM4_ST_FATAL_ERR_ALL,
/* H2C Errors */
QDMA_CPM4_ST_H2C_ERR_ZERO_LEN_DESC_ERR,
QDMA_CPM4_ST_H2C_ERR_SDI_MRKR_REQ_MOP_ERR,
QDMA_CPM4_ST_H2C_ERR_NO_DMA_DSC,
QDMA_CPM4_ST_H2C_ERR_DBE,
QDMA_CPM4_ST_H2C_ERR_SBE,
QDMA_CPM4_ST_H2C_ERR_ALL,
/* Single bit errors */
QDMA_CPM4_SBE_ERR_MI_H2C0_DAT,
QDMA_CPM4_SBE_ERR_MI_C2H0_DAT,
QDMA_CPM4_SBE_ERR_H2C_RD_BRG_DAT,
QDMA_CPM4_SBE_ERR_H2C_WR_BRG_DAT,
QDMA_CPM4_SBE_ERR_C2H_RD_BRG_DAT,
QDMA_CPM4_SBE_ERR_C2H_WR_BRG_DAT,
QDMA_CPM4_SBE_ERR_FUNC_MAP,
QDMA_CPM4_SBE_ERR_DSC_HW_CTXT,
QDMA_CPM4_SBE_ERR_DSC_CRD_RCV,
QDMA_CPM4_SBE_ERR_DSC_SW_CTXT,
QDMA_CPM4_SBE_ERR_DSC_CPLI,
QDMA_CPM4_SBE_ERR_DSC_CPLD,
QDMA_CPM4_SBE_ERR_PASID_CTXT_RAM,
QDMA_CPM4_SBE_ERR_TIMER_FIFO_RAM,
QDMA_CPM4_SBE_ERR_PAYLOAD_FIFO_RAM,
QDMA_CPM4_SBE_ERR_QID_FIFO_RAM,
QDMA_CPM4_SBE_ERR_TUSER_FIFO_RAM,
QDMA_CPM4_SBE_ERR_WRB_COAL_DATA_RAM,
QDMA_CPM4_SBE_ERR_INT_QID2VEC_RAM,
QDMA_CPM4_SBE_ERR_INT_CTXT_RAM,
QDMA_CPM4_SBE_ERR_DESC_REQ_FIFO_RAM,
QDMA_CPM4_SBE_ERR_PFCH_CTXT_RAM,
QDMA_CPM4_SBE_ERR_WRB_CTXT_RAM,
QDMA_CPM4_SBE_ERR_PFCH_LL_RAM,
QDMA_CPM4_SBE_ERR_ALL,
/* Double bit Errors */
QDMA_CPM4_DBE_ERR_MI_H2C0_DAT,
QDMA_CPM4_DBE_ERR_MI_C2H0_DAT,
QDMA_CPM4_DBE_ERR_H2C_RD_BRG_DAT,
QDMA_CPM4_DBE_ERR_H2C_WR_BRG_DAT,
QDMA_CPM4_DBE_ERR_C2H_RD_BRG_DAT,
QDMA_CPM4_DBE_ERR_C2H_WR_BRG_DAT,
QDMA_CPM4_DBE_ERR_FUNC_MAP,
QDMA_CPM4_DBE_ERR_DSC_HW_CTXT,
QDMA_CPM4_DBE_ERR_DSC_CRD_RCV,
QDMA_CPM4_DBE_ERR_DSC_SW_CTXT,
QDMA_CPM4_DBE_ERR_DSC_CPLI,
QDMA_CPM4_DBE_ERR_DSC_CPLD,
QDMA_CPM4_DBE_ERR_PASID_CTXT_RAM,
QDMA_CPM4_DBE_ERR_TIMER_FIFO_RAM,
QDMA_CPM4_DBE_ERR_PAYLOAD_FIFO_RAM,
QDMA_CPM4_DBE_ERR_QID_FIFO_RAM,
QDMA_CPM4_DBE_ERR_WRB_COAL_DATA_RAM,
QDMA_CPM4_DBE_ERR_INT_QID2VEC_RAM,
QDMA_CPM4_DBE_ERR_INT_CTXT_RAM,
QDMA_CPM4_DBE_ERR_DESC_REQ_FIFO_RAM,
QDMA_CPM4_DBE_ERR_PFCH_CTXT_RAM,
QDMA_CPM4_DBE_ERR_WRB_CTXT_RAM,
QDMA_CPM4_DBE_ERR_PFCH_LL_RAM,
QDMA_CPM4_DBE_ERR_ALL,
QDMA_CPM4_ERRS_ALL
};
struct qdma_cpm4_hw_err_info {
enum qdma_cpm4_error_idx idx;
const char *err_name;
uint32_t mask_reg_addr;
uint32_t stat_reg_addr;
uint32_t leaf_err_mask;
uint32_t global_err_mask;
void (*qdma_cpm4_hw_err_process)(void *dev_hndl);
};
int qdma_cpm4_init_ctxt_memory(void *dev_hndl);
int qdma_cpm4_qid2vec_conf(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
struct qdma_qid2vec *ctxt,
enum qdma_hw_access_type access_type);
int qdma_cpm4_fmap_conf(void *dev_hndl, uint16_t func_id,
struct qdma_fmap_cfg *config,
enum qdma_hw_access_type access_type);
int qdma_cpm4_sw_ctx_conf(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
struct qdma_descq_sw_ctxt *ctxt,
enum qdma_hw_access_type access_type);
int qdma_cpm4_pfetch_ctx_conf(void *dev_hndl, uint16_t hw_qid,
struct qdma_descq_prefetch_ctxt *ctxt,
enum qdma_hw_access_type access_type);
int qdma_cpm4_cmpt_ctx_conf(void *dev_hndl, uint16_t hw_qid,
struct qdma_descq_cmpt_ctxt *ctxt,
enum qdma_hw_access_type access_type);
int qdma_cpm4_hw_ctx_conf(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
struct qdma_descq_hw_ctxt *ctxt,
enum qdma_hw_access_type access_type);
int qdma_cpm4_credit_ctx_conf(void *dev_hndl, uint8_t c2h, uint16_t hw_qid,
struct qdma_descq_credit_ctxt *ctxt,
enum qdma_hw_access_type access_type);
int qdma_cpm4_indirect_intr_ctx_conf(void *dev_hndl, uint16_t ring_index,
struct qdma_indirect_intr_ctxt *ctxt,
enum qdma_hw_access_type access_type);
int qdma_cpm4_set_default_global_csr(void *dev_hndl);
int qdma_cpm4_queue_pidx_update(void *dev_hndl, uint8_t is_vf, uint16_t qid,
uint8_t is_c2h, const struct qdma_q_pidx_reg_info *reg_info);
int qdma_cpm4_queue_cmpt_cidx_update(void *dev_hndl, uint8_t is_vf,
uint16_t qid, const struct qdma_q_cmpt_cidx_reg_info *reg_info);
int qdma_cpm4_queue_intr_cidx_update(void *dev_hndl, uint8_t is_vf,
uint16_t qid, const struct qdma_intr_cidx_reg_info *reg_info);
int qdma_cmp_get_user_bar(void *dev_hndl, uint8_t is_vf,
uint16_t func_id, uint8_t *user_bar);
int qdma_cpm4_get_device_attributes(void *dev_hndl,
struct qdma_dev_attributes *dev_info);
uint32_t qdma_cpm4_reg_dump_buf_len(void);
int qdma_cpm4_context_buf_len(uint8_t st,
enum qdma_dev_q_type q_type, uint32_t *req_buflen);
int qdma_cpm4_dump_config_regs(void *dev_hndl, uint8_t is_vf,
char *buf, uint32_t buflen);
int qdma_cpm4_hw_error_process(void *dev_hndl);
const char *qdma_cpm4_hw_get_error_name(uint32_t err_idx);
int qdma_cpm4_hw_error_enable(void *dev_hndl, uint32_t err_idx);
int qdma_cpm4_dump_queue_context(void *dev_hndl,
uint8_t st,
enum qdma_dev_q_type q_type,
struct qdma_descq_context *ctxt_data,
char *buf, uint32_t buflen);
int qdma_cpm4_dump_intr_context(void *dev_hndl,
struct qdma_indirect_intr_ctxt *intr_ctx,
int ring_index,
char *buf, uint32_t buflen);
int qdma_cpm4_read_dump_queue_context(void *dev_hndl,
uint16_t func_id,
uint16_t qid_hw,
uint8_t st,
enum qdma_dev_q_type q_type,
char *buf, uint32_t buflen);
int qdma_cpm4_dump_config_reg_list(void *dev_hndl,
uint32_t total_regs,
struct qdma_reg_data *reg_list,
char *buf, uint32_t buflen);
int qdma_cpm4_read_reg_list(void *dev_hndl, uint8_t is_vf,
uint16_t reg_rd_slot,
uint16_t *total_regs,
struct qdma_reg_data *reg_list);
int qdma_cpm4_global_csr_conf(void *dev_hndl, uint8_t index,
uint8_t count,
uint32_t *csr_val,
enum qdma_global_csr_type csr_type,
enum qdma_hw_access_type access_type);
int qdma_cpm4_global_writeback_interval_conf(void *dev_hndl,
enum qdma_wrb_interval *wb_int,
enum qdma_hw_access_type access_type);
int qdma_cpm4_mm_channel_conf(void *dev_hndl, uint8_t channel,
uint8_t is_c2h,
uint8_t enable);
int qdma_cpm4_dump_reg_info(void *dev_hndl, uint32_t reg_addr,
uint32_t num_regs, char *buf, uint32_t buflen);
uint32_t qdma_cpm4_get_config_num_regs(void);
struct xreg_info *qdma_cpm4_get_config_regs(void);
#ifdef __cplusplus
}
#endif
#endif /* __QDMA_CPM4_ACCESS_H_ */
/*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* BSD LICENSE
*
......
/*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* BSD LICENSE
*
......
/*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* BSD LICENSE
*
......@@ -938,8 +938,8 @@ static int mbox_write_queue_contexts(void *dev_hndl, uint8_t dma_device_index,
return QDMA_SUCCESS;
}
static int mbox_read_queue_contexts(void *dev_hndl, uint16_t qid_hw,
uint8_t st, uint8_t c2h,
static int mbox_read_queue_contexts(void *dev_hndl, uint16_t func_id,
uint16_t qid_hw, uint8_t st, uint8_t c2h,
enum mbox_cmpt_ctxt_type cmpt_ctxt_type,
struct qdma_descq_context *ctxt)
{
......@@ -972,6 +972,14 @@ static int mbox_read_queue_contexts(void *dev_hndl, uint16_t qid_hw,
return rv;
}
rv = hw->qdma_fmap_conf(dev_hndl, func_id, &ctxt->fmap,
QDMA_HW_ACCESS_READ);
if (rv < 0) {
qdma_log_error("%s: read fmap ctxt, err:%d\n",
__func__, rv);
return rv;
}
if (st && c2h) {
rv = hw->qdma_pfetch_ctx_conf(dev_hndl,
qid_hw, &ctxt->pfetch_ctxt,
......@@ -1339,7 +1347,8 @@ int qdma_mbox_pf_rcv_msg_handler(void *dev_hndl, uint8_t dma_device_index,
{
struct mbox_msg_qctxt *qctxt = &rcv->qctxt;
rv = mbox_read_queue_contexts(dev_hndl, qctxt->qid_hw,
rv = mbox_read_queue_contexts(dev_hndl, hdr->src_func_id,
qctxt->qid_hw,
qctxt->st,
qctxt->c2h,
qctxt->cmpt_ctxt_type,
......
/*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* BSD LICENSE
*
......
/*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* BSD LICENSE
*
......
/*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* BSD LICENSE
*
......
/*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* BSD LICENSE
*
......
/*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* BSD LICENSE
*
......
/*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* BSD LICENSE
*
......@@ -55,6 +55,8 @@
#define QDMA_REG_GROUP_3_START_ADDR 0xB00
#define QDMA_REG_GROUP_4_START_ADDR 0x1014
#define QDMA_DEFAULT_PFCH_STOP_THRESH 256
static void qdma_hw_st_h2c_err_process(void *dev_hndl);
static void qdma_hw_st_c2h_err_process(void *dev_hndl);
static void qdma_hw_desc_err_process(void *dev_hndl);
......@@ -1628,6 +1630,11 @@ static struct qctx_entry c2h_pftch_ctxt_entries[] = {
{"Valid", 0},
};
static struct qctx_entry fmap_ctxt_entries[] = {
{"Queue Base", 0},
{"Queue Max", 0},
};
static struct qctx_entry ind_intr_ctxt_entries[] = {
{"valid", 0},
{"vec", 0},
......@@ -1684,6 +1691,10 @@ int qdma_soft_context_buf_len(uint8_t st,
sizeof(credit_ctxt_entries[0])) + 1) *
REG_DUMP_SIZE_PER_LINE);
len += (((sizeof(fmap_ctxt_entries) /
sizeof(fmap_ctxt_entries[0])) + 1) *
REG_DUMP_SIZE_PER_LINE);
if (st && (q_type == QDMA_DEV_Q_TYPE_C2H)) {
len += (((sizeof(cmpt_ctxt_entries) /
sizeof(cmpt_ctxt_entries[0])) + 1) *
......@@ -1808,6 +1819,17 @@ static void qdma_fill_pfetch_ctxt(struct qdma_descq_prefetch_ctxt *pfetch_ctxt)
c2h_pftch_ctxt_entries[7].value = pfetch_ctxt->valid;
}
/*
* qdma_fill_fmap_ctxt() - Helper function to fill the fmap context
* into the fmap_ctxt_entries structure
*
*/
static void qdma_fill_fmap_ctxt(struct qdma_fmap_cfg *fmap_ctxt)
{
fmap_ctxt_entries[0].value = fmap_ctxt->qbase;
fmap_ctxt_entries[1].value = fmap_ctxt->qmax;
}
/*
* dump_soft_context() - Helper function to dump queue context into string
*
......@@ -1846,6 +1868,8 @@ static int dump_soft_context(struct qdma_descq_context *queue_context,
}
}
qdma_fill_fmap_ctxt(&queue_context->fmap);
for (i = 0; i < DEBGFS_LINE_SZ - 5; i++) {
rv = QDMA_SNPRINTF_S(banner + i,
(DEBGFS_LINE_SZ - i),
......@@ -2175,6 +2199,69 @@ static int dump_soft_context(struct qdma_descq_context *queue_context,
}
}
/* Fmap context dump */
n = sizeof(fmap_ctxt_entries) /
sizeof(fmap_ctxt_entries[0]);
for (i = 0; i < n; i++) {
if ((len >= buf_sz) ||
((len + DEBGFS_LINE_SZ) >= buf_sz))
goto INSUF_BUF_EXIT;
if (i == 0) {
if ((len + (3 * DEBGFS_LINE_SZ)) >= buf_sz)
goto INSUF_BUF_EXIT;
rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
DEBGFS_LINE_SZ, "\n%s", banner);
if ((rv < 0) || (rv > DEBGFS_LINE_SZ)) {
qdma_log_error(
"%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
__LINE__, __func__,
rv);
goto INSUF_BUF_EXIT;
}
len += rv;
rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
DEBGFS_LINE_SZ, "\n%40s",
"Fmap Context");
if ((rv < 0) || (rv > DEBGFS_LINE_SZ)) {
qdma_log_error(
"%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
__LINE__, __func__,
rv);
goto INSUF_BUF_EXIT;
}
len += rv;
rv = QDMA_SNPRINTF_S(buf + len, (buf_sz - len),
DEBGFS_LINE_SZ, "\n%s\n", banner);
if ((rv < 0) || (rv > DEBGFS_LINE_SZ)) {
qdma_log_error(
"%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
__LINE__, __func__,
rv);
goto INSUF_BUF_EXIT;
}
len += rv;
}
rv = QDMA_SNPRINTF_S(buf + len,
(buf_sz - len), DEBGFS_LINE_SZ,
"%-47s %#-10x %u\n",
fmap_ctxt_entries[i].name,
fmap_ctxt_entries[i].value,
fmap_ctxt_entries[i].value);
if ((rv < 0) || (rv > DEBGFS_LINE_SZ)) {
qdma_log_error(
"%d:%s QDMA_SNPRINTF_S() failed, err:%d\n",
__LINE__, __func__,
rv);
goto INSUF_BUF_EXIT;
}
len += rv;
}
return len;
INSUF_BUF_EXIT:
......@@ -2465,6 +2552,7 @@ static int qdma_fmap_read(void *dev_hndl, uint16_t func_id,
qdma_log_debug("%s: func_id=%hu, qbase=%hu, qmax=%hu\n", __func__,
func_id, config->qbase, config->qmax);
return QDMA_SUCCESS;
}
......@@ -3876,7 +3964,7 @@ int qdma_set_default_global_csr(void *dev_hndl)
QDMA_OFFSET_C2H_PFETCH_CACHE_DEPTH);
reg_val =
FIELD_SET(QDMA_C2H_PFCH_FL_TH_MASK,
DEFAULT_PFCH_STOP_THRESH) |
QDMA_DEFAULT_PFCH_STOP_THRESH) |
FIELD_SET(QDMA_C2H_NUM_PFCH_MASK,
DEFAULT_PFCH_NUM_ENTRIES_PER_Q) |
FIELD_SET(QDMA_C2H_PFCH_QCNT_MASK, (cfg_val >> 1)) |
......@@ -4071,7 +4159,7 @@ int qdma_queue_intr_cidx_update(void *dev_hndl, uint8_t is_vf,
* Return: 0 - success and < 0 - failure
*****************************************************************************/
int qdma_get_user_bar(void *dev_hndl, uint8_t is_vf,
uint8_t func_id, uint8_t *user_bar)
uint16_t func_id, uint8_t *user_bar)
{
uint8_t bar_found = 0;
uint8_t bar_idx = 0;
......@@ -4896,6 +4984,7 @@ int qdma_soft_dump_queue_context(void *dev_hndl,
* Return: Length up-till the buffer is filled -success and < 0 - failure
*****************************************************************************/
int qdma_soft_read_dump_queue_context(void *dev_hndl,
uint16_t func_id,
uint16_t qid_hw,
uint8_t st,
enum qdma_dev_q_type q_type,
......@@ -4995,6 +5084,16 @@ int qdma_soft_read_dump_queue_context(void *dev_hndl,
}
}
rv = qdma_fmap_conf(dev_hndl, func_id,
&(context.fmap),
QDMA_HW_ACCESS_READ);
if (rv < 0) {
qdma_log_error(
"%s:fmap ctxt read fail, err = %d",
__func__, rv);
return rv;
}
rv = dump_soft_context(&context, st, q_type, buf, buflen);
return rv;
......
/*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* BSD LICENSE
*
......@@ -259,6 +259,7 @@ int qdma_soft_dump_queue_context(void *dev_hndl,
char *buf, uint32_t buflen);
int qdma_soft_read_dump_queue_context(void *dev_hndl,
uint16_t func_id,
uint16_t qid_hw,
uint8_t st,
enum qdma_dev_q_type q_type,
......@@ -274,7 +275,7 @@ int qdma_get_device_attributes(void *dev_hndl,
struct qdma_dev_attributes *dev_info);
int qdma_get_user_bar(void *dev_hndl, uint8_t is_vf,
uint8_t func_id, uint8_t *user_bar);
uint16_t func_id, uint8_t *user_bar);
int qdma_soft_dump_config_reg_list(void *dev_hndl,
uint32_t total_regs,
......
/*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* BSD LICENSE
*
......
/*-
* BSD LICENSE
*
* Copyright(c) 2017-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......@@ -442,18 +442,40 @@ static int h2c_byp_mode_check_handler(__rte_unused const char *key,
return 0;
}
#ifdef TANDEM_BOOT_SUPPORTED
static int en_st_mode_check_handler(__rte_unused const char *key,
const char *value, void *opaque)
{
struct qdma_pci_dev *qdma_dev = (struct qdma_pci_dev *)opaque;
char *end = NULL;
PMD_DRV_LOG(INFO, "QDMA devargs en_st is: %s\n", value);
qdma_dev->en_st_mode = (uint8_t)strtoul(value, &end, 10);
if (qdma_dev->en_st_mode > 1) {
PMD_DRV_LOG(INFO, "QDMA devargs: incorrect"
" en_st_mode = %d specified\n",
qdma_dev->en_st_mode);
return -1;
}
return 0;
}
#endif
/* Process all the devargs */
int qdma_check_kvargs(struct rte_devargs *devargs,
struct qdma_pci_dev *qdma_dev)
{
struct rte_kvargs *kvlist;
const char *pfetch_key = "desc_prefetch";
const char *pfetch_key = "desc_prefetch";
const char *cmpt_desc_len_key = "cmpt_desc_len";
const char *trigger_mode_key = "trigger_mode";
const char *config_bar_key = "config_bar";
const char *c2h_byp_mode_key = "c2h_byp_mode";
const char *h2c_byp_mode_key = "h2c_byp_mode";
const char *trigger_mode_key = "trigger_mode";
const char *config_bar_key = "config_bar";
const char *c2h_byp_mode_key = "c2h_byp_mode";
const char *h2c_byp_mode_key = "h2c_byp_mode";
#ifdef TANDEM_BOOT_SUPPORTED
const char *en_st_key = "en_st";
#endif
int ret = 0;
if (!devargs)
......@@ -523,6 +545,18 @@ int qdma_check_kvargs(struct rte_devargs *devargs,
}
}
#ifdef TANDEM_BOOT_SUPPORTED
/* Enable ST */
if (rte_kvargs_count(kvlist, en_st_key)) {
ret = rte_kvargs_process(kvlist, en_st_key,
en_st_mode_check_handler, qdma_dev);
if (ret) {
rte_kvargs_free(kvlist);
return ret;
}
}
#endif
rte_kvargs_free(kvlist);
return ret;
}
......
/*-
* BSD LICENSE
*
* Copyright(c) 2017-2021 Xilinx, Inc. All rights reserved.
* Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......@@ -504,6 +504,14 @@ int qdma_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
QDMA_NUM_C2H_COUNTERS,
qdma_dev->g_c2h_cnt_th[rxq->threshidx]);
if (rxq->sorted_c2h_cntr_idx < 0) {
PMD_DRV_LOG(ERR,
"Expected counter threshold %d not found\n",
qdma_dev->g_c2h_cnt_th[rxq->threshidx]);
err = -EINVAL;
goto rx_setup_err;
}
/* Initialize pend_pkt_moving_avg */
rxq->pend_pkt_moving_avg = qdma_dev->g_c2h_cnt_th[rxq->threshidx];
......@@ -1775,6 +1783,7 @@ qdma_dev_get_regs(struct rte_eth_dev *dev,
ret = qdma_acc_get_num_config_regs(dev,
(enum qdma_ip_type)qdma_dev->ip_type,
(enum qdma_device_type)qdma_dev->device_type,
&reg_length);
if (ret < 0 || reg_length == 0) {
PMD_DRV_LOG(ERR, "%s: Failed to get number of config registers\n",
......@@ -1793,7 +1802,8 @@ qdma_dev_get_regs(struct rte_eth_dev *dev,
(regs->length == (reg_length - 1))) {
regs->version = 1;
ret = qdma_acc_get_config_regs(dev, qdma_dev->is_vf,
(enum qdma_ip_type)qdma_dev->ip_type, data);
(enum qdma_ip_type)qdma_dev->ip_type,
(enum qdma_device_type)qdma_dev->device_type, data);
if (ret < 0) {
PMD_DRV_LOG(ERR, "%s: Failed to get config registers\n",
__func__);
......
/*-
* BSD LICENSE
*
* Copyright(c) 2020-2021 Xilinx, Inc. All rights reserved.
* Copyright(c) 2020-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......
/*-
* BSD LICENSE
*
* Copyright(c) 2017-2021 Xilinx, Inc. All rights reserved.
* Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......@@ -708,6 +708,17 @@ int qdma_eth_dev_init(struct rte_eth_dev *dev)
return -EINVAL;
}
#ifdef TANDEM_BOOT_SUPPORTED
if (dma_priv->en_st_mode) {
ret = dma_priv->hw_access->qdma_init_st_ctxt(dev);
if (ret < 0) {
PMD_DRV_LOG(ERR,
"%s: Failed to initialize st ctxt memory, err = %d\n",
__func__, ret);
return -EINVAL;
}
}
#endif
dma_priv->hw_access->qdma_hw_error_enable(dev,
dma_priv->hw_access->qdma_max_errors);
if (ret < 0) {
......
/*-
* BSD LICENSE
*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......
/*-
* BSD LICENSE
*
* Copyright(c) 2019-2021 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......
/*-
* BSD LICENSE
*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......
/*-
* BSD LICENSE
*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......
/*-
* BSD LICENSE
*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......
/*-
* BSD LICENSE
*
* Copyright(c) 2017-2021 Xilinx, Inc. All rights reserved.
* Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......@@ -408,6 +408,9 @@ static void adjust_c2h_cntr_avgs(struct qdma_rx_queue *rxq)
int i;
struct qdma_pci_dev *qdma_dev = rxq->dev->data->dev_private;
if (rxq->sorted_c2h_cntr_idx < 0)
return;
rxq->pend_pkt_moving_avg =
qdma_dev->g_c2h_cnt_th[rxq->cmpt_cidx_info.counter_idx];
......@@ -921,8 +924,12 @@ static uint16_t prepare_packets_v(struct qdma_rx_queue *rxq,
pkt_mb2);
/* Accumulate packet length counter */
pktlen = _mm_add_epi32(pktlen, pkt_len[0]);
pktlen = _mm_add_epi32(pktlen, pkt_len[1]);
pktlen = _mm_add_epi64(pktlen,
_mm_set_epi16(0, 0, 0, 0,
0, 0, 0, pktlen1));
pktlen = _mm_add_epi64(pktlen,
_mm_set_epi16(0, 0, 0, 0,
0, 0, 0, pktlen2));
count_pkts += RTE_QDMA_DESCS_PER_LOOP;
id += RTE_QDMA_DESCS_PER_LOOP;
......@@ -935,14 +942,18 @@ static uint16_t prepare_packets_v(struct qdma_rx_queue *rxq,
mb = prepare_segmented_packet(rxq,
pktlen1, &id);
rx_pkts[count_pkts++] = mb;
pktlen = _mm_add_epi32(pktlen, pkt_len[0]);
pktlen = _mm_add_epi64(pktlen,
_mm_set_epi16(0, 0, 0, 0,
0, 0, 0, pktlen1));
}
if (pktlen2) {
mb = prepare_segmented_packet(rxq,
pktlen2, &id);
rx_pkts[count_pkts++] = mb;
pktlen = _mm_add_epi32(pktlen, pkt_len[1]);
pktlen = _mm_add_epi64(pktlen,
_mm_set_epi16(0, 0, 0, 0,
0, 0, 0, pktlen2));
}
}
}
......
/*-
* BSD LICENSE
*
* Copyright(c) 2019-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......
/*-
* BSD LICENSE
*
* Copyright(c) 2019-2021 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......@@ -133,7 +133,8 @@ int qdma_ul_process_immediate_data_st(void *qhndl, void *cmpt_entry,
#else
qdma_get_device_info(qhndl, &dev_type, &ip_type);
if (ip_type == QDMA_VERSAL_HARD_IP) {
if (ip_type == QDMA_VERSAL_HARD_IP &&
dev_type == QDMA_DEVICE_VERSAL_CPM4) {
// Ignoring first 20 bits of the length field
dprintf(ofd, "%02x",
(*((uint8_t *)cmpt_entry + 2) & 0xF0));
......
/*-
* BSD LICENSE
*
* Copyright(c) 2018-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2018-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......
/*-
* BSD LICENSE
*
* Copyright(c) 2017-2021 Xilinx, Inc. All rights reserved.
* Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......@@ -1116,7 +1116,9 @@ static int eth_qdma_vf_dev_init(struct rte_eth_dev *dev)
dma_priv->bar_addr[dma_priv->user_bar_idx] = baseaddr;
}
if (dma_priv->ip_type == QDMA_VERSAL_HARD_IP)
if (dma_priv->ip_type == QDMA_VERSAL_HARD_IP &&
dma_priv->device_type ==
QDMA_DEVICE_VERSAL_CPM4)
dma_priv->dev_cap.mailbox_intr = 0;
else
dma_priv->dev_cap.mailbox_intr = 1;
......
/*-
* BSD LICENSE
*
* Copyright(c) 2017-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......@@ -246,6 +246,7 @@ static int qdma_config_reg_dump(uint8_t port_id)
struct rte_eth_dev *dev;
struct qdma_pci_dev *qdma_dev;
enum qdma_ip_type ip_type;
enum qdma_device_type device_type;
char *buf = NULL;
int buflen;
int ret;
......@@ -261,6 +262,7 @@ static int qdma_config_reg_dump(uint8_t port_id)
dev = &rte_eth_devices[port_id];
qdma_dev = dev->data->dev_private;
ip_type = (enum qdma_ip_type)qdma_dev->ip_type;
device_type = (enum qdma_device_type)qdma_dev->device_type;
if (qdma_dev->is_vf) {
reg_len = (QDMA_MAX_REGISTER_DUMP *
......@@ -273,7 +275,8 @@ static int qdma_config_reg_dump(uint8_t port_id)
return -ENOMEM;
}
ret = qdma_acc_reg_dump_buf_len(dev, ip_type, &buflen);
ret = qdma_acc_reg_dump_buf_len(dev, ip_type,
device_type, &buflen);
if (ret < 0) {
xdebug_error("Failed to get register dump buffer length\n");
return ret;
......@@ -308,7 +311,7 @@ static int qdma_config_reg_dump(uint8_t port_id)
}
rcv_len = qdma_acc_dump_config_reg_list(dev,
ip_type, num_regs,
ip_type, device_type, num_regs,
reg_list, buf + len, buflen - len);
if (rcv_len < 0) {
xdebug_error("Failed to dump config regs "
......@@ -330,7 +333,7 @@ static int qdma_config_reg_dump(uint8_t port_id)
rte_free(buf);
} else {
ret = qdma_acc_reg_dump_buf_len(dev,
ip_type, &buflen);
ip_type, device_type, &buflen);
if (ret < 0) {
xdebug_error("Failed to get register dump buffer length\n");
return ret;
......@@ -350,7 +353,7 @@ static int qdma_config_reg_dump(uint8_t port_id)
" Value(Hex) Value(Dec)\n");
ret = qdma_acc_dump_config_regs(dev, qdma_dev->is_vf,
ip_type, buf, buflen);
ip_type, device_type, buf, buflen);
if (ret < 0) {
xdebug_error("Insufficient space to dump Config Bar register values\n");
rte_free(buf);
......@@ -493,6 +496,7 @@ static int qdma_c2h_context_dump(uint8_t port_id, uint16_t queue)
struct qdma_descq_context queue_context;
enum qdma_dev_q_type q_type;
enum qdma_ip_type ip_type;
enum qdma_device_type device_type;
uint16_t qid;
uint8_t st_mode;
char *buf = NULL;
......@@ -508,6 +512,7 @@ static int qdma_c2h_context_dump(uint8_t port_id, uint16_t queue)
qdma_dev = dev->data->dev_private;
qid = qdma_dev->queue_base + queue;
ip_type = (enum qdma_ip_type)qdma_dev->ip_type;
device_type = (enum qdma_device_type)qdma_dev->device_type;
st_mode = qdma_dev->q_info[qid].queue_mode;
q_type = QDMA_DEV_Q_TYPE_C2H;
......@@ -520,7 +525,7 @@ static int qdma_c2h_context_dump(uint8_t port_id, uint16_t queue)
"\n ***** C2H Queue Contexts on port_id: %d for q_id: %d *****\n",
port_id, qid);
ret = qdma_acc_context_buf_len(dev, ip_type, st_mode,
ret = qdma_acc_context_buf_len(dev, ip_type, device_type, st_mode,
q_type, &buflen);
if (ret < 0) {
xdebug_error("Failed to get context buffer length,\n");
......@@ -545,7 +550,7 @@ static int qdma_c2h_context_dump(uint8_t port_id, uint16_t queue)
return qdma_get_error_code(ret);
}
ret = qdma_acc_dump_queue_context(dev, ip_type,
ret = qdma_acc_dump_queue_context(dev, ip_type, device_type,
st_mode, q_type, &queue_context, buf, buflen);
if (ret < 0) {
xdebug_error("Failed to dump c2h queue context\n");
......@@ -553,8 +558,9 @@ static int qdma_c2h_context_dump(uint8_t port_id, uint16_t queue)
return qdma_get_error_code(ret);
}
} else {
ret = qdma_acc_read_dump_queue_context(dev, ip_type,
qid, st_mode, q_type, buf, buflen);
ret = qdma_acc_read_dump_queue_context(dev,
ip_type, device_type, qdma_dev->func_id, qid,
st_mode, q_type, buf, buflen);
if (ret < 0) {
xdebug_error("Failed to read and dump c2h queue context\n");
rte_free(buf);
......@@ -575,6 +581,7 @@ static int qdma_h2c_context_dump(uint8_t port_id, uint16_t queue)
struct qdma_descq_context queue_context;
enum qdma_dev_q_type q_type;
enum qdma_ip_type ip_type;
enum qdma_device_type device_type;
uint32_t buflen = 0;
uint16_t qid;
uint8_t st_mode;
......@@ -590,6 +597,7 @@ static int qdma_h2c_context_dump(uint8_t port_id, uint16_t queue)
qdma_dev = dev->data->dev_private;
qid = qdma_dev->queue_base + queue;
ip_type = (enum qdma_ip_type)qdma_dev->ip_type;
device_type = (enum qdma_device_type)qdma_dev->device_type;
st_mode = qdma_dev->q_info[qid].queue_mode;
q_type = QDMA_DEV_Q_TYPE_H2C;
......@@ -602,7 +610,7 @@ static int qdma_h2c_context_dump(uint8_t port_id, uint16_t queue)
"\n ***** H2C Queue Contexts on port_id: %d for q_id: %d *****\n",
port_id, qid);
ret = qdma_acc_context_buf_len(dev, ip_type, st_mode,
ret = qdma_acc_context_buf_len(dev, ip_type, device_type, st_mode,
q_type, &buflen);
if (ret < 0) {
xdebug_error("Failed to get context buffer length,\n");
......@@ -628,15 +636,17 @@ static int qdma_h2c_context_dump(uint8_t port_id, uint16_t queue)
}
ret = qdma_acc_dump_queue_context(dev, ip_type,
st_mode, q_type, &queue_context, buf, buflen);
device_type, st_mode, q_type,
&queue_context, buf, buflen);
if (ret < 0) {
xdebug_error("Failed to dump h2c queue context\n");
rte_free(buf);
return qdma_get_error_code(ret);
}
} else {
ret = qdma_acc_read_dump_queue_context(dev, ip_type,
qid, st_mode, q_type, buf, buflen);
ret = qdma_acc_read_dump_queue_context(dev,
ip_type, device_type, qdma_dev->func_id, qid,
st_mode, q_type, buf, buflen);
if (ret < 0) {
xdebug_error("Failed to read and dump h2c queue context\n");
rte_free(buf);
......@@ -657,6 +667,7 @@ static int qdma_cmpt_context_dump(uint8_t port_id, uint16_t queue)
struct qdma_descq_context queue_context;
enum qdma_dev_q_type q_type;
enum qdma_ip_type ip_type;
enum qdma_device_type device_type;
uint32_t buflen;
uint16_t qid;
uint8_t st_mode;
......@@ -672,6 +683,7 @@ static int qdma_cmpt_context_dump(uint8_t port_id, uint16_t queue)
qdma_dev = dev->data->dev_private;
qid = qdma_dev->queue_base + queue;
ip_type = (enum qdma_ip_type)qdma_dev->ip_type;
device_type = (enum qdma_device_type)qdma_dev->device_type;
st_mode = qdma_dev->q_info[qid].queue_mode;
q_type = QDMA_DEV_Q_TYPE_CMPT;
......@@ -684,7 +696,7 @@ static int qdma_cmpt_context_dump(uint8_t port_id, uint16_t queue)
"\n ***** CMPT Queue Contexts on port_id: %d for q_id: %d *****\n",
port_id, qid);
ret = qdma_acc_context_buf_len(dev, ip_type,
ret = qdma_acc_context_buf_len(dev, ip_type, device_type,
st_mode, q_type, &buflen);
if (ret < 0) {
xdebug_error("Failed to get context buffer length\n");
......@@ -710,7 +722,7 @@ static int qdma_cmpt_context_dump(uint8_t port_id, uint16_t queue)
}
ret = qdma_acc_dump_queue_context(dev, ip_type,
st_mode, q_type,
device_type, st_mode, q_type,
&queue_context, buf, buflen);
if (ret < 0) {
xdebug_error("Failed to dump cmpt queue context\n");
......@@ -719,7 +731,7 @@ static int qdma_cmpt_context_dump(uint8_t port_id, uint16_t queue)
}
} else {
ret = qdma_acc_read_dump_queue_context(dev,
ip_type, qid, st_mode,
ip_type, device_type, qdma_dev->func_id, qid, st_mode,
q_type, buf, buflen);
if (ret < 0) {
xdebug_error("Failed to read and dump cmpt queue context\n");
......@@ -966,6 +978,7 @@ int rte_pmd_qdma_dbg_reg_info_dump(uint8_t port_id,
struct rte_eth_dev *dev;
struct qdma_pci_dev *qdma_dev;
enum qdma_ip_type ip_type;
enum qdma_device_type device_type;
char *buf = NULL;
int buflen = QDMA_MAX_BUFLEN;
int ret;
......@@ -978,6 +991,7 @@ int rte_pmd_qdma_dbg_reg_info_dump(uint8_t port_id,
dev = &rte_eth_devices[port_id];
qdma_dev = dev->data->dev_private;
ip_type = (enum qdma_ip_type)qdma_dev->ip_type;
device_type = (enum qdma_device_type)qdma_dev->device_type;
/*allocate memory for register dump*/
buf = (char *)rte_zmalloc("QDMA_DUMP_BUF_REG_INFO", buflen,
......@@ -988,7 +1002,7 @@ int rte_pmd_qdma_dbg_reg_info_dump(uint8_t port_id,
return -ENOMEM;
}
ret = qdma_acc_dump_reg_info(dev, ip_type,
ret = qdma_acc_dump_reg_info(dev, ip_type, device_type,
reg_addr, num_regs, buf, buflen);
if (ret < 0) {
xdebug_error("Failed to dump reg field values\n");
......
/*-
* BSD LICENSE
*
* Copyright(c) 2019-2021 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......@@ -1160,8 +1160,11 @@ int rte_pmd_qdma_get_device_capabilities(int port_id,
case QDMA_DEVICE_SOFT:
dev_attr->device_type = RTE_PMD_QDMA_DEVICE_SOFT;
break;
case QDMA_DEVICE_VERSAL:
dev_attr->device_type = RTE_PMD_QDMA_DEVICE_VERSAL;
case QDMA_DEVICE_VERSAL_CPM4:
dev_attr->device_type = RTE_PMD_QDMA_DEVICE_VERSAL_CPM4;
break;
case QDMA_DEVICE_VERSAL_CPM5:
dev_attr->device_type = RTE_PMD_QDMA_DEVICE_VERSAL_CPM5;
break;
default:
PMD_DRV_LOG(ERR, "%s: Invalid device type "
......
/*-
* BSD LICENSE
*
* Copyright(c) 2019-2021 Xilinx, Inc. All rights reserved.
* Copyright(c) 2019-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......@@ -213,8 +213,10 @@ enum rte_pmd_qdma_xdebug_desc_type {
enum rte_pmd_qdma_device_type {
/** QDMA Soft device e.g. UltraScale+ IP's */
RTE_PMD_QDMA_DEVICE_SOFT,
/** QDMA Versal device */
RTE_PMD_QDMA_DEVICE_VERSAL,
/** QDMA Versal CPM4 device */
RTE_PMD_QDMA_DEVICE_VERSAL_CPM4,
/** QDMA Versal CPM5 device */
RTE_PMD_QDMA_DEVICE_VERSAL_CPM5,
/** Invalid QDMA device */
RTE_PMD_QDMA_DEVICE_NONE
};
......
/*-
* BSD LICENSE
*
* Copyright(c) 2017-2021 Xilinx, Inc. All rights reserved.
* Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......@@ -36,9 +36,9 @@
#define qdma_stringify1(x...) #x
#define qdma_stringify(x...) qdma_stringify1(x)
#define QDMA_PMD_MAJOR 2020
#define QDMA_PMD_MINOR 2
#define QDMA_PMD_PATCHLEVEL 1
#define QDMA_PMD_MAJOR 2022
#define QDMA_PMD_MINOR 1
#define QDMA_PMD_PATCHLEVEL 0
#define QDMA_PMD_VERSION \
qdma_stringify(QDMA_PMD_MAJOR) "." \
......
/*-
* BSD LICENSE
*
* Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
* * Neither the name of the copyright holder nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
DPDK_21 {
global:
......
# BSD LICENSE
#
# Copyright(c) 2017-2021 Xilinx, Inc. All rights reserved.
# Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
......
/*-
* BSD LICENSE
*
* Copyright(c) 2017-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......
/*-
* BSD LICENSE
*
* Copyright(c) 2010-2020 Intel Corporation. All rights reserved.
* Copyright(c) 2010-2022 Intel Corporation. All rights reserved.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
......
/*-
* BSD LICENSE
*
* Copyright(c) 2017-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......
/*-
* BSD LICENSE
*
* Copyright(c) 2017-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......
/*-
* BSD LICENSE
*
* Copyright(c) 2017-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......
/*-
* BSD LICENSE
*
* Copyright(c) 2017-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......
/*-
* BSD LICENSE
*
* Copyright(c) 2017-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......
/*-
* BSD LICENSE
*
* Copyright(c) 2017-2021 Xilinx, Inc. All rights reserved.
* Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......
/*-
* BSD LICENSE
*
* Copyright(c) 2017-2020 Xilinx, Inc. All rights reserved.
* Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......
From cb0e7303150dcbb49c3aad88ac664b691612f1bc Mon Sep 17 00:00:00 2001
From 8d8204c48660cfe67ce27c95ce029e2cfa302958 Mon Sep 17 00:00:00 2001
From: Suryanarayana Raju Sangani <ssangani@xilinx.com>
Date: Thu, 27 Feb 2020 05:13:38 -0700
Subject: [PATCH] PKTGEN-20.12.0: Patch to add Jumbo packet support
......@@ -19,12 +19,12 @@ Signed-off-by: Suryanarayana Raju Sangani <ssangani@xilinx.com>
---
app/pktgen-cmds.c | 15 +++++++++++----
app/pktgen-constants.h | 3 ++-
app/pktgen-main.c | 21 +++++++++++++++++----
app/pktgen-main.c | 24 +++++++++++++++++-------
app/pktgen-port-cfg.c | 12 ++++++++----
app/pktgen-range.c | 3 ++-
app/pktgen.c | 14 ++++++++++++--
app/pktgen.c | 19 +++++++++++++++++--
app/pktgen.h | 4 +++-
7 files changed, 55 insertions(+), 17 deletions(-)
7 files changed, 60 insertions(+), 20 deletions(-)
diff --git a/app/pktgen-cmds.c b/app/pktgen-cmds.c
index 4da9bab..065fbe8 100644
......@@ -98,7 +98,7 @@ index ede4e65..8694e18 100644
};
#define DEFAULT_MBUF_SIZE (PG_ETHER_MAX_JUMBO_FRAME_LEN + RTE_PKTMBUF_HEADROOM) /* See: http://dpdk.org/dev/patchwork/patch/4479/ */
diff --git a/app/pktgen-main.c b/app/pktgen-main.c
index 96d1c0c..9c22278 100644
index 96d1c0c..1ca2941 100644
--- a/app/pktgen-main.c
+++ b/app/pktgen-main.c
@@ -188,7 +188,7 @@ pktgen_parse_args(int argc, char **argv)
......@@ -136,16 +136,16 @@ index 96d1c0c..9c22278 100644
signal(SIGSEGV, sig_handler);
signal(SIGHUP, sig_handler);
@@ -563,10 +570,16 @@ main(int argc, char **argv)
@@ -563,10 +570,13 @@ main(int argc, char **argv)
/* Wait for all of the cores to stop running and exit. */
rte_eal_mp_wait_lcore();
- RTE_ETH_FOREACH_DEV(i) {
- rte_eth_dev_stop(i);
- rte_delay_us_sleep(100 * 1000);
- rte_eth_dev_close(i);
+ nb_ports = rte_eth_dev_count_avail();
+ for(i = nb_ports-1; i >= 0; i--) {
rte_eth_dev_stop(i);
rte_delay_us_sleep(100 * 1000);
rte_eth_dev_close(i);
+ dev = rte_eth_devices[i].device;
+ if (rte_dev_remove(dev))
+ printf("Failed to detach port '%d'\n", i);
......@@ -214,7 +214,7 @@ index f88258d..bbaaa6f 100644
range->vxlan_gid = info->seq_pkt[SINGLE_PKT].group_id;
range->vxlan_gid_inc = 0;
diff --git a/app/pktgen.c b/app/pktgen.c
index 26cc80d..43790e0 100644
index 26cc80d..1042a47 100644
--- a/app/pktgen.c
+++ b/app/pktgen.c
@@ -74,6 +74,7 @@ pktgen_wire_size(port_info_t *info)
......@@ -225,7 +225,26 @@ index 26cc80d..43790e0 100644
return size;
}
@@ -912,6 +913,10 @@ pktgen_setup_cb(struct rte_mempool *mp,
@@ -296,6 +297,7 @@ pktgen_send_burst(port_info_t *info, uint16_t qid)
struct qstats_s *qstats;
uint32_t ret, cnt, tap, rnd, tstamp, i;
int32_t seq_idx;
+ pkt_seq_t *pkt;
if ((cnt = mtab->len) == 0)
return;
@@ -310,6 +312,10 @@ pktgen_send_burst(port_info_t *info, uint16_t qid)
else
seq_idx = SINGLE_PKT;
+ pkt = &info->seq_pkt[seq_idx];
+ for (i = 0; i < cnt; i++)
+ rte_pktmbuf_pkt_len(pkts[i]) = pkt->pktSize;
+
tap = pktgen_tst_port_flags(info, PROCESS_TX_TAP_PKTS);
rnd = pktgen_tst_port_flags(info, SEND_RANDOM_PKTS);
tstamp = pktgen_tst_port_flags(info, (SEND_LATENCY_PKTS | SEND_RATE_PACKETS | SAMPLING_LATENCIES));
@@ -912,6 +918,10 @@ pktgen_setup_cb(struct rte_mempool *mp,
pkt_seq_t *pkt;
uint16_t qid, idx;
......@@ -236,7 +255,7 @@ index 26cc80d..43790e0 100644
info = data->info;
qid = data->qid;
@@ -941,7 +946,7 @@ pktgen_setup_cb(struct rte_mempool *mp,
@@ -941,7 +951,7 @@ pktgen_setup_cb(struct rte_mempool *mp,
pktgen_packet_ctor(info, idx, -1);
rte_memcpy((uint8_t *)m->buf_addr + m->data_off,
......@@ -245,7 +264,7 @@ index 26cc80d..43790e0 100644
m->pkt_len = pkt->pktSize;
m->data_len = pkt->pktSize;
@@ -1150,7 +1155,7 @@ pktgen_main_receive(port_info_t *info, uint8_t lid,
@@ -1150,7 +1160,7 @@ pktgen_main_receive(port_info_t *info, uint8_t lid,
{
uint8_t pid;
uint16_t qid, nb_rx;
......@@ -254,7 +273,7 @@ index 26cc80d..43790e0 100644
struct qstats_s *qstats;
int i;
@@ -1169,6 +1174,10 @@ pktgen_main_receive(port_info_t *info, uint8_t lid,
@@ -1169,6 +1179,10 @@ pktgen_main_receive(port_info_t *info, uint8_t lid,
for(i = 0; i < nb_rx; i++)
qstats->rxbytes += rte_pktmbuf_data_len(pkts_burst[i]);
......@@ -265,7 +284,7 @@ index 26cc80d..43790e0 100644
pktgen_recv_tstamp(info, pkts_burst, nb_rx);
/* packets are not freed in the next call. */
@@ -1185,6 +1194,7 @@ pktgen_main_receive(port_info_t *info, uint8_t lid,
@@ -1185,6 +1199,7 @@ pktgen_main_receive(port_info_t *info, uint8_t lid,
}
rte_pktmbuf_free_bulk(pkts_burst, nb_rx);
......
......@@ -5,3 +5,37 @@ classification logic in dpdk-pktgen to remove application overhead in
performance measurement.
This patch is used for performance testing with dpdk-pktgen application.
/*-
* BSD LICENSE
*
* Copyright(c) 2017-2022 Xilinx, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
* * Neither the name of the copyright holder nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
@@ -2,7 +2,7 @@ BSD License
For Xilinx DMA IP software
Copyright (c) 2016-2020 Xilinx, Inc. All rights reserved.
Copyright (c) 2016-2022 Xilinx, Inc. All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
#/*
# * This file is part of the Xilinx DMA IP Core driver for Linux
# *
# * Copyright (c) 2017-2022, Xilinx, Inc.
# * All rights reserved.
# *
# * This source code is free software; you can redistribute it and/or modify it
# * under the terms and conditions of the GNU General Public License,
# * version 2, as published by the Free Software Foundation.
# *
# * This program is distributed in the hope that it will be useful, but WITHOUT
# * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
# * more details.
# *
# * The full GNU General Public License is included in this distribution in
# * the file called "COPYING".
# */
SHELL = /bin/bash
#
@@ -114,7 +134,7 @@ install-apps:
@install -v -m 755 bin/dma-latency $(apps_install_path)
@echo "MAN PAGES:"
@mkdir -p -m 755 $(docs_install_path)
@install -v -m 644 docs/dmactl.8.gz $(docs_install_path)
@install -v -m 644 docs/dma-ctl.8.gz $(docs_install_path)
.PHONY: install-dev
install-dev:
RELEASE: 2020.1 Patch
=====================
RELEASE: 2022.1
===============
This release is validated on QDMA4.0 2020.2 based example design and QDMA3.1 2020.2 based example design.
This release is validated for
- QDMA4.0 2020.1 Patch based example design
- CPM5 2022.1 example design on XCVP1202
SUPPORTED FEATURES:
===================
@@ -94,15 +96,55 @@ SUPPORTED FEATURES:
- Added support for MM Channel Selection and Keyhole feature in dmaperf
- Fixed bug in driver which caused a crash during performance run
2022.1 Updates
--------------
CPM5
- Tandem Boot support
- FMAP context dump
- Debug register dump for ST and MM Errors
- Dual Instance support
KNOWN ISSUES:
=============
- In interrupt mode, completions are sometimes not received when C2H PIDX updates are held for 64 descriptors
- On QDMA4.0 2020.1 design onwards, HW error observed during the probe of the VFs
- With 2020.2 QDMA4.0 design, ST Performance design has performance drop for higher packet sizes
- CPM5 Only
- Sufficient host memory is required to accommodate 4K queues. Tested only up to 2099 queues in our test environment, though the driver supports 4K queues.
- All Designs
- In interrupt mode, completions are sometimes not received when C2H PIDX updates are held for 64 descriptors
- On QDMA4.0 2020.1 design, HW error observed during the probe of the VFs
- With the 2020.2 QDMA4.0 design, the ST Performance design shows a performance drop for higher packet sizes
DRIVER LIMITATIONS:
===================
- Driver compilation on Fedora 28 with gcc8.1 results in compilation warnings
- Big endian systems are not supported
- For optimal QDMA streaming performance, packet buffers of the descriptor ring should be aligned to at least 256 bytes.
- FLR is not supported in the driver for CentOS because the Linux kernels provided in CentOS do not support the driver callback registration for FLR functionality
- CPM5 Only
- VF functionality is verified with 240 VFs as per the CPM5 HW limitation
- All Designs
- Driver compilation on Fedora 28 with gcc8.1 results in compilation warnings
- Big endian systems are not supported
- For optimal QDMA streaming performance, packet buffers of the descriptor ring should be aligned to at least 256 bytes.
- FLR is not supported in the driver for CentOS because the Linux kernels provided in CentOS do not support the driver callback registration for FLR functionality
/*
* This file is part of the Xilinx DMA IP Core driver for Linux
*
* Copyright (c) 2017-2022, Xilinx, Inc.
* All rights reserved.
*
* This source code is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*/
#
#/*
# * This file is part of the QDMA userspace application
# * to enable the user to execute the QDMA functionality
# *
# * Copyright (c) 2018-2022, Xilinx, Inc.
# * All rights reserved.
# *
# * This source code is licensed under BSD-style license (found in the
# * LICENSE file in the root directory of this source tree)
# */
SHELL = /bin/bash
CFLAGS += -g
#
#/*
# * This file is part of the QDMA userspace application
# * to enable the user to execute the QDMA functionality
# *
# * Copyright (c) 2018-2022, Xilinx, Inc.
# * All rights reserved.
# *
# * This source code is licensed under BSD-style license (found in the
# * LICENSE file in the root directory of this source tree)
# */
SHELL = /bin/bash
CFLAGS += -g
@@ -2,7 +2,7 @@
* This file is part of the QDMA userspace application
* to enable the user to execute the QDMA functionality
*
* Copyright (c) 2018-2020, Xilinx, Inc.
* Copyright (c) 2018-2022, Xilinx, Inc.
* All rights reserved.
*
* This source code is licensed under BSD-style license (found in the
@@ -196,7 +196,7 @@ static void __attribute__((noreturn)) usage(FILE *fp)
{
fprintf(fp, "Usage: %s [dev|qdma[vf]<N>] [operation] \n", progname);
fprintf(fp, "\tdev [operation]: system wide FPGA operations\n");
fprintf(fp,
fprintf(fp,
"\t\tlist list all qdma functions\n");
fprintf(fp,
"\tqdma[N] [operation]: per QDMA FPGA operations\n");
@@ -245,7 +245,9 @@ static void __attribute__((noreturn)) usage(FILE *fp)
fprintf(fp,
"\t\tintring dump vector <N> <start_idx> <end_idx> - interrupt ring dump for vector number <N> \n"
"\t\t for interrupt entries: <start_idx> --- <end_idx>\n");
#ifdef TANDEM_BOOT_SUPPORTED
fprintf(fp, "\t\ten_st - enable streaming \n");
#endif
exit(fp == stderr ? 1 : 0);
}
@@ -352,8 +354,8 @@ static int parse_reg_cmd(int argc, char *argv[], int i, struct xcmd_info *xcmd)
/*
* reg dump
* reg read [bar <N>] <addr>
* reg write [bar <N>] <addr> <val>
* reg read [bar <N>] <addr>
* reg write [bar <N>] <addr> <val>
*/
memset(regcmd, 0, sizeof(struct xcmd_reg));
@@ -791,7 +793,7 @@ static int read_qparm(int argc, char *argv[], int i, struct xcmd_q_parm *qparm,
f_arg_set |= 1 << QPARM_C2H_BUFSZ_IDX;
i++;
} else if (!strcmp(argv[i], "idx_ringsz")) {
rv = next_arg_read_int(argc, argv, &i, &v1);
if (rv < 0)
@@ -1082,7 +1084,7 @@ static int parse_q_cmd(int argc, char *argv[], int i, struct xcmd_info *xcmd)
printf("Error: Unknown q command\n");
return -EINVAL;
}
if (rv < 0)
return rv;
i = rv;
@@ -1172,7 +1174,7 @@ int parse_cmd(int argc, char *argv[], struct xcmd_info *xcmd)
progname = argv[0];
if (argc == 1)
if (argc == 1)
usage(stderr);
if (argc == 2) {
@@ -1216,13 +1218,17 @@ int parse_cmd(int argc, char *argv[], struct xcmd_info *xcmd)
} else if (!strcmp(argv[2], "cap")) {
rv = 3;
xcmd->op = XNL_CMD_DEV_CAP;
} else if (!strcmp(argv[2], "global_csr")) {
} else if (!strcmp(argv[2], "global_csr")) {
rv = 3;
xcmd->op = XNL_CMD_GLOBAL_CSR;
}
else if (!strcmp(argv[2], "info")) { /* not exposed. only for debug */
} else if (!strcmp(argv[2], "info")) { /* not exposed. only for debug */
rv = 3;
xcmd->op = XNL_CMD_DEV_INFO;
#ifdef TANDEM_BOOT_SUPPORTED
} else if (!strcmp(argv[2], "en_st")) {
rv = 3;
xcmd->op = XNL_CMD_EN_ST;
#endif
} else {
warnx("bad parameter \"%s\".\n", argv[2]);
return -EINVAL;
......@@ -1232,7 +1238,7 @@ done:
if (rv < 0)
return rv;
i = rv;
if (i < argc) {
warnx("unexpected parameter \"%s\".\n", argv[i]);
return -EINVAL;
@@ -2,7 +2,7 @@
* This file is part of the QDMA userspace application
* to enable the user to execute the QDMA functionality
*
* Copyright (c) 2018-2020, Xilinx, Inc.
* Copyright (c) 2018-2022, Xilinx, Inc.
* All rights reserved.
*
* This source code is licensed under BSD-style license (found in the
This diff is collapsed.
@@ -2,7 +2,7 @@
* This file is part of the QDMA userspace application
* to enable the user to execute the QDMA functionality
*
* Copyright (c) 2018-2020, Xilinx, Inc.
* Copyright (c) 2018-2022, Xilinx, Inc.
* All rights reserved.
*
* This source code is licensed under BSD-style license (found in the
@@ -13,7 +13,7 @@
#define __DMA_CTL_VERSION_H
#define PROGNAME "dma-ctl"
#define VERSION "2020.2.0"
#define COPYRIGHT "Copyright (c) 2018-2020 Xilinx Inc."
#define VERSION "2022.1.0"
#define COPYRIGHT "Copyright (c) 2018-2022 Xilinx Inc."
#endif