Unverified Commit 3b1d2665 authored by sujathabanoth-xlnx's avatar sujathabanoth-xlnx Committed by GitHub

Merge pull request #3 from sujathabanoth-xlnx/qdma_linx_doc_upload

Qdma linx doc upload
parents 6566b0ca f560e12d
cmake_minimum_required(VERSION 3.5)
set(LINUX_QDMA_SRC_DIR "${CMAKE_CURRENT_SOURCE_DIR}/")
set(LINUX_QDMA_DOC_TOC_DIR "${CMAKE_CURRENT_SOURCE_DIR}/toc")
set(LINUX_QDMA_DOC_CORE_DIR "${CMAKE_CURRENT_BINARY_DIR}/core")
set(DOC_TOC_DIR "${CMAKE_CURRENT_BINARY_DIR}/doc_toc")
file(GLOB LIBQDMA_EXPORT_H ${LINUX_QDMA_SRC_DIR}/../../libqdma/libqdma_export.h)
file(MAKE_DIRECTORY ${LINUX_QDMA_DOC_CORE_DIR})
file(MAKE_DIRECTORY ${DOC_TOC_DIR})
#file(COPY ${LINUX_QDMA_DOC_TOC_DIR} DESTINATION ${CMAKE_CURRENT_BINARY_DIR})
set(KERNELDOC "./kernel-doc")
set(KERNELDOC_URL "https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/plain/scripts/kernel-doc?h=v4.14.52")
MESSAGE(STATUS "Downloading ${KERNELDOC}")
file(DOWNLOAD ${KERNELDOC_URL} ${KERNELDOC})
execute_process(COMMAND chmod +x ${KERNELDOC})
find_program(KERNELDOC_EXECUTABLE ${KERNELDOC} PATHS "./")
find_program(SPHINX_EXECUTABLE sphinx-build)
if (NOT KERNELDOC_EXECUTABLE OR NOT SPHINX_EXECUTABLE)
  MESSAGE(WARNING "kernel-doc or Sphinx not found, QDMA documentation build disabled")
else ()
  # Generate the reST include file from the exported libqdma header.
  # NOTE: no VERBATIM here -- the '>' redirection must be processed by the shell.
  add_custom_command(OUTPUT core/libqdma_export.inc
    COMMENT "Generating libqdma_export.inc from ${LIBQDMA_EXPORT_H}"
    COMMAND ${KERNELDOC_EXECUTABLE} -rst ${LIBQDMA_EXPORT_H} > core/libqdma_export.inc
    DEPENDS ${LIBQDMA_EXPORT_H}
    )
  # Copy the doc sources and the generated include into one tree, then run Sphinx.
  add_custom_target(
    linux_qdma_docs ALL
    COMMENT "Generating documentation with Sphinx"
    DEPENDS core/libqdma_export.inc
    COMMAND mkdir -p html
    COMMAND rm -rf ${DOC_TOC_DIR}/*
    COMMAND cp -rf ${LINUX_QDMA_DOC_TOC_DIR}/* ${DOC_TOC_DIR}/
    COMMAND cp core/libqdma_export.inc ${DOC_TOC_DIR}
    COMMAND ${SPHINX_EXECUTABLE} -a ${DOC_TOC_DIR} html
    )
endif ()
###############################################################################
Xilinx QDMA Software Documentation Generation
###############################################################################
1. Installation:

Xilinx QDMA software documentation is generated using Sphinx.
In order to generate the documentation, make sure the Sphinx and CMake
packages are installed on the build machine.

Follow the steps below to generate the documentation.

Go to linux/docs/git_doc and run cmake:

    [xilinx@]# cmake .

Build the documentation:

    [xilinx@]# make linux_qdma_docs
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 662c884e464c4c1d3b7682e3e49da190
tags: 645f666f9bcd5a90fca523b33c5a78b7
Building and Installing Software Stack
======================================
For building the Linux QDMA Driver, make sure the system requirements mentioned in :ref:`sys_req` are satisfied.
Update the PCIe device ID
--------------------------
During the PCIe DMA IP customization in Vivado, you can specify a PCIe Device ID.
The driver must know this Device ID in order to recognize the PCIe QDMA device.
The current driver is designed to recognize the PCIe Device IDs that get generated with the PCIe example design when this value has not been modified.
If you have modified the PCIe Device ID during IP customization, you will need to modify the PCIe driver to recognize this new ID.
You may also want to modify the driver to remove PCIe Device IDs that will not be used by your solution.
To modify the PCIe Device ID in the driver, open the ``drv/pci_ids.h`` file and search for the pcie_device_id struct.
This struct identifies the PCIe Device IDs that are recognized by the driver in the following format::

    {PCI_DEVICE (0x10ee, 0x9034),},
Add, remove, or modify the PCIe Device IDs in this struct as desired for your application.
The PCIe DMA driver will only recognize device IDs identified in this struct as PCIe QDMA devices.
Once modified, the driver must be uninstalled and recompiled.
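For example, after editing ``drv/pci_ids.h``, a typical rebuild-and-reload cycle looks like the following (the module path assumes the ``build/`` output directory described in the next section)::

    [xilinx@]# rmmod qdma
    [xilinx@]# make clean && make
    [xilinx@]# insmod build/qdma.ko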
Building the QDMA Driver Software
---------------------------------
This driver software supports both Physical Functions (PF) and Virtual Functions (VF).
In order to compile the Xilinx QDMA software, a configured and compiled Linux kernel source tree is required.
Linux QDMA driver is dependent on ``libaio``. Hence, make sure to install ``libaio`` before compiling the QDMA driver.
An example command for Ubuntu is provided below; follow a similar procedure for other OS flavours.
::

    [xilinx@]# sudo apt-get install libaio1 libaio-dev
The source tree may contain only header files, or be a complete tree. The source tree needs to be configured and the header files need to be compiled.
The Linux kernel must be configured to use modules.

The Linux QDMA Driver software database structure and its contents can be found on the Xilinx github: https://github.com/Xilinx/dma_ip_drivers/tree/master/QDMA/linux-kernel

Once the code is downloaded, compile the QDMA Driver software::

    [xilinx@]# make clean && make

Upon running make, a sub-directory ``build/`` will be created in ``linux-kernel`` with the executables listed in the tables below.
Individual executables can also be built with the command listed against each component in the tables below.
**Kernel modules:**
+-------------------+--------------------+---------------+
| Executable | Description | Command |
+===================+====================+===============+
| qdma.ko | PF Driver | make pf |
+-------------------+--------------------+---------------+
| qdma_vf.ko | VF Driver | make vf |
+-------------------+--------------------+---------------+
**Applications:**
+-------------------+--------------------------------------------------+--------------+
| Executable | Description | Command |
+===================+==================================================+==============+
| dmactl | QDMA control and configuration application | make user |
+-------------------+--------------------------------------------------+--------------+
| dma_to_device | Performs a host-to-card transaction for MM or ST | make tools |
+-------------------+--------------------------------------------------+--------------+
| dma_from_device | Performs a card-to-host transaction for MM or ST | make tools |
+-------------------+--------------------------------------------------+--------------+
| dmautils | Measures the performance of QDMA | make tools |
+-------------------+--------------------------------------------------+--------------+
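For example, to build only the PF kernel module and the ``dmactl`` application, using the targets from the tables above::

    [xilinx@]# make pf
    [xilinx@]# make user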
Install the executables and kernel modules by running ``make install``.

- The QDMA modules will be installed in the ``/lib/modules/<linux_kernel_version>/updates/kernel/drivers/qdma`` directory.
- The ``dmactl``, ``dma_from_device`` and ``dma_to_device`` tools will be installed in ``/usr/local/sbin``.
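A minimal install-and-load sequence (assuming ``make install`` also refreshes the module dependency database so that ``modprobe`` can find the module)::

    [xilinx@]# sudo make install
    [xilinx@]# sudo modprobe qdma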
QDMA Driver Module Parameters
------------------------------
Before loading the QDMA driver, make sure that the intended board is connected to the host system and the required bitstream is flashed onto the board.
The QDMA driver supports the following module parameters.
1. **Mode**
~~~~~~~~~~~
``mode`` module parameter is used to enable the qdma driver functionality in different modes.
The kernel module can be loaded in the following modes:

0. *Auto Mode*
   Driver decides whether to process a request in poll or interrupt mode
1. *Poll Mode*
   Driver processes the requests using a timer
2. *Direct Interrupt Mode*
   Driver processes the requests using interrupts, where each queue is assigned to a single vector
3. *Interrupt Aggregation Mode* or *Indirect Interrupt Mode*
   Driver processes the requests using interrupts, where all the queues corresponding to a function are assigned to a single vector. This vector is associated with a ring which holds the queue requests; upon receiving the interrupt, the driver processes all the pending requests in the ring.
4. *Legacy Interrupt Mode*
   Driver processes the requests using legacy interrupts

By default, mode is set to 0 and the driver is loaded in auto mode.

To load the driver in poll mode::

    [xilinx@]# insmod qdma.ko mode=1

To load the driver in direct interrupt mode::

    [xilinx@]# insmod qdma.ko mode=2

To load the driver in indirect interrupt mode::

    [xilinx@]# insmod qdma.ko mode=3
2. **Master PF**
~~~~~~~~~~~~~~~~
``master_pf`` module parameter is used to set the master PF for the qdma driver.
By default, ``master_pf`` is set to PF0 (the first device in the PF list).

To set any other PF as the master PF, pass its BDF number through the module parameter::

    [xilinx@]# lspci | grep Xilinx
    01:00.1 Memory controller: Xilinx Corporation Device 913f
    [xilinx@]# insmod qdma.ko master_pf=<pf_bdf_number>

Ex::

    [xilinx@]# insmod qdma.ko master_pf=0x01001

When multiple devices are inserted in the same host system and master_pf needs to be updated for each device, pass one value per device, separated by commas without spaces::

    [xilinx@]# lspci | grep Xilinx
    01:00.1 Memory controller: Xilinx Corporation Device 913f
    02:00.1 Memory controller: Xilinx Corporation Device 913f
    [xilinx@]# insmod qdma.ko master_pf=0x01001,0x02001
3. **Dynamic Config Bar**
~~~~~~~~~~~~~~~~~~~~~~~~~
``config_bar`` module parameter is used to set the DMA bar of the QDMA device.
The QDMA IP supports dynamically choosing the DMA bar while creating the bitstream.
For 64-bit bars, the DMA bar can reside in bar 0, 2 or 4.
By default the DMA bar is configured in bar#0, and the QDMA driver also assumes the default DMA bar number is 0.
If the DMA bar is configured to be in bar#2 or bar#4, pass the updated bar number through the ``config_bar`` module parameter.
``config_bar`` takes its input as an array of 32-bit numbers, enabling the user to specify the config bar for multiple devices connected to the host system, e.g.::

    config_bar=0x000Aabcd,0x000Aabcd,0x000Aabcd

Each 32-bit number is divided as below for the PF driver.
.. image:: /images/pf_configbar.PNG
:align: center
Each 32-bit number is divided as below for the VF driver.
.. image:: /images/vf_configbar.PNG
:align: center
Ex: Assume the host system has a single device connected, where PF0 has the config bar in bar#2, PF1 in bar#0, PF2 in bar#4 and PF3 in bar#0::

    [xilinx@]# lspci | grep Xilinx
    01:00.1 Memory controller: Xilinx Corporation Device 913f
    [xilinx@]# insmod qdma.ko config_bar=0x00012040

When multiple devices are inserted in the same host system and config_bar needs to be updated for each device, pass one value per device.

Ex: Assume the host system has two devices connected::

    [xilinx@]# lspci | grep Xilinx
    01:00.1 Memory controller: Xilinx Corporation Device 913f
    02:00.1 Memory controller: Xilinx Corporation Device 913f

- device#1 : PF0 has the config bar in bar#2, PF1 in bar#0, PF2 in bar#4 and PF3 in bar#0
- device#2 : PF0 has the config bar in bar#4, PF1 in bar#2, PF2 in bar#0 and PF3 in bar#2

::

    [xilinx@]# insmod qdma.ko config_bar=0x00012040,0x00024202
4. **Enable Traffic Manager**
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
``tm_mode_en`` parameter is used to enable Traffic Manager mode in the driver to test the descriptor bypass functionality with the Traffic Manager example design for an ST H2C queue.
By default, tm_mode_en is set to 0.

To load the driver with Traffic Manager mode enabled::

    [xilinx@]# insmod qdma.ko tm_mode_en=1

NOTE: This parameter is experimental and should be used only with the Traffic Manager example design.
5. **Custom Defined Header**
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
``tm_one_cdh_en`` is used to test the 1 CDH (Custom Defined Header) functionality with the Traffic Manager example design when the driver is loaded with tm_mode_en set to 1.
By default, tm_one_cdh_en is set to 0, indicating that the driver will send packets with zero CDH.

To load the driver with 1 CDH enabled::

    [xilinx@]# insmod qdma.ko tm_mode_en=1 tm_one_cdh_en=1

NOTE: This parameter is experimental and should be used only with the Traffic Manager example design.
****************
Developers Guide
****************
.. toctree::
:maxdepth: 1
libqdma_apis.rst
qdma_design.rst
qdma_usecases.rst
QDMA Features
#############
QDMA Linux Driver supports the following list of features
QDMA Hardware Features
**********************
* SRIOV with 4 Physical Functions (PF) and 252 Virtual Functions (VF)
* AXI4 Memory Mapped (MM) and AXI4-Stream (ST) interfaces per queue
* 2048 queue sets
* 2048 H2C (Host-to-Card) descriptor rings
* 2048 C2H (Card-to-Host) descriptor rings
* 2048 completion rings
* Supports Legacy and MSI-X Interrupts

  - MSI-X Interrupts

    - 2048 MSI-X vectors.
    - Up to 8 MSI-X per function.
    - Interrupt Aggregation
    - User Interrupts
    - Error Interrupts

  - Legacy Interrupts

    - Supported only for PF0 with a single queue
* Flexible interrupt allocation between PF/VF
- Descriptor Prefetching
  In ST C2H, the HW supports prefetching a set of descriptors prior to the availability of the data on the Stream Engine.
  As the descriptors are prefetched in advance, performance is improved.
* Different Queue Modes
A queue can be configured to operate in Descriptor Bypass mode or Internal Mode by setting the software context bypass field.
- Internal Mode:
In Internal Mode, descriptors that are fetched by the descriptor engine are delivered directly to the appropriate DMA engine and processed.
- Bypass Mode:
In Bypass Mode, descriptors fetched by the descriptor engine are delivered to user logic through the descriptor bypass interface.
This allows user logic to pre-process or store the descriptors, if desired.
On the bypass out interface, the descriptors can be a custom format (adhering to the descriptor size).
To perform DMA operations, the user logic drives descriptors into the descriptor bypass in interface.
To configure a queue in a given mode, the user shall pass the appropriate configuration options as described below.
+--------------------------+-----------------------+----------------------------+----------------------------+
| SW_Desc_Ctxt.bypass[50] | Pftch_ctxt.bypass[0] | Pftch_ctxt.pftch_en[27] | Description |
+==========================+=======================+============================+============================+
| 1 | 1 | 0 | Simple Bypass Mode |
+--------------------------+-----------------------+----------------------------+----------------------------+
| 1 | 1 | 1 | Simple Bypass Mode |
+--------------------------+-----------------------+----------------------------+----------------------------+
| 1 | 0 | 0 | Cache Bypass Mode |
+--------------------------+-----------------------+----------------------------+----------------------------+
| 1 | 0 | 1 | Cache Bypass Mode |
+--------------------------+-----------------------+----------------------------+----------------------------+
| 0 | 0 | 0 | Cache Internal Mode |
+--------------------------+-----------------------+----------------------------+----------------------------+
| 0 | 0 | 1 | Cache Internal Mode |
+--------------------------+-----------------------+----------------------------+----------------------------+
- Simple Bypass Mode:
In Simple Bypass Mode, the engine does not track anything for the queue, and the user logic
can define its own method to receive descriptors. The user logic is then responsible for
delivering the packet and associated descriptor in simple bypass interface.
- Cached Bypass Mode:
In Cached Bypass Mode, the PFCH module offers storage for up to
512 descriptors and these descriptors can be used by up to 64 different queues. In this mode,
the engine controls the descriptors to be fetched by managing the C2H descriptor queue
credit on demand, based on received packets in the pipeline.
- Mailbox communication between PF and VF driver
- Zero byte transfers
In the Stream H2C direction, the descriptors can have no buffer and the length field in a descriptor can be zero.
In this case, the H2C Stream Engine will issue a zero byte read request on PCIe.
After the QDMA receives the completion for the request, the H2C Stream Engine will send out one beat of data
with tlast on the QDMA H2C AXI Stream interface.
The user logic must set both the SOP and EOP for a zero byte descriptor.
If not done, an error will be flagged by the H2C Stream Engine.
- Immediate data transfers
In the ST C2H direction, the Completion Engine can be used to pass immediate data, where the descriptor is not associated with any buffer
but the descriptor content itself is treated as the data. Immediate data can be 8, 16, 32, or 64 bytes in size.
- ST C2H completion entry coalescing
- Function Level Reset(FLR) Support
- Disabling overflow check in completion ring
- ST H2C to C2H and C2H to H2C loopback support
- Driver configuration through sysfs
- ECC Support
- Completion queue descriptors of 64 bytes size
- Flexible BAR mapping for QDMA configuration register space
For more details on the hardware features, refer to `QDMA_PG`_.
.. _QDMA_PG: https://www.xilinx.com/support/documentation/ip_documentation/qdma/v3_0/pg302-qdma.pdf
QDMA Software Features
**********************
* Polling and Interrupt Modes
QDMA software provides two different drivers: a PF driver for physical functions and a VF driver for virtual functions.
The PF and VF drivers can be inserted in different modes.
- Polling Mode
In Poll Mode, software polls for the writeback completions (Status Descriptor Write Back)
- Direct Interrupt Mode
In Direct Interrupt mode, each queue is assigned to one of the available interrupt vectors in a round-robin fashion to service the requests.
The interrupt is raised by the HW upon receiving the completions, and the software reads the completion status.
- Indirect Interrupt Mode
In Indirect Interrupt mode or Interrupt Aggregation mode, each vector has an associated Interrupt Aggregation Ring.
The QID and status of queues requiring service are written into the Interrupt Aggregation Ring.
When a PCIe MSI-X interrupt is received by the Host, the software reads the Interrupt Aggregation Ring to determine which queue needs service.
Mapping of queues to vectors is programmable.
- Auto Mode
Auto mode is a mix of Poll and Interrupt Aggregation modes. The driver polls for the writeback status updates.
Interrupt aggregation is used for processing the completion ring.
- Allows only Privileged Physical Functions to program the contexts and registers
- Dynamic queue configuration
- Dynamic driver configuration
- Asynchronous and Synchronous IO support
- Display the Version details for SW and HW
========================
Xilinx QDMA Linux Driver
========================
Xilinx QDMA Subsystem for PCIe example design is implemented on a Xilinx FPGA,
which is connected to an X86 host system through PCI Express.
Xilinx QDMA Linux Driver is implemented as a combination of userspace and
kernel driver components to control and configure the QDMA subsystem.
QDMA Linux Driver consists of the following three major components:
- **Device Control Tool**: Creates a netlink socket for PCIe device query, queue management, reading the context of a queue, etc.
- **DMA Tool**: User space application to initiate a DMA transaction. Standard Linux applications like dd, aio or fio can be used to perform DMA transactions (an example follows the figure below).
- **Kernel Space Driver**: Creates the descriptors and translates the user space function into low-level commands to interact with the FPGA device.
.. image:: /images/qdma_linux_driver_architecture.PNG
:align: center
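For example, once a queue has been added and started in MM mode, a standard tool such as ``dd`` can drive a transfer through the queue's character device. The device name below is illustrative; actual names depend on the PCIe BDF and queue index::

    [xilinx@]# dd if=/dev/zero of=/dev/qdma01000-MM-0 bs=4096 count=1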
----------------------------------------------------------------------------
.. toctree::
:maxdepth: 1
:caption: Table of Contents
features.rst
system-requirements.rst
build.rst
devguide.rst
userguide.rst
user-app.rst
performance.rst
************
Libqdma APIs
************
.. include:: libqdma_export.inc
QDMA Performance
----------------
Refer to QDMA_Performance_Answer_Record_ for more details.
.. _QDMA_Performance_Answer_Record: https://www.xilinx.com/support/answers/71453.html
**************************
QDMA Linux Driver UseCases
**************************
QDMA IP is released with five example designs in the Vivado® Design Suite. They are
#. AXI4 Memory Mapped And AXI-Stream with Completion
#. AXI Memory Mapped
#. AXI Memory Mapped with Completion
#. AXI Stream with Completion
#. Descriptor Bypass In/Out Loopback
Refer to the QDMA Product Guide for more details on these example designs.

All the use cases described below are with respect to synchronous application requests and are described for QDMA internal mode.
The driver functionality remains the same for the ``AXI4 Memory Mapped And AXI-Stream with Completion``, ``AXI Memory Mapped`` and ``AXI Stream with Completion`` example designs.
For the ``Descriptor Bypass In/Out Loopback`` example design, the application has to enable bypass mode in the driver.
Currently the driver does not support the ``AXI Memory Mapped with Completion`` example design.
====================================================
AXI4 Memory Mapped And AXI-Stream with Completion
====================================================
This is the default example design used to test the MM and ST functionality using QDMA driver.
This example design provides blocks to interface with the AXI4 Memory Mapped and AXI4-Stream interfaces.
- The AXI-MM interface is connected to 512 KBytes of block RAM (BRAM).
- The AXI4-Stream interface is connected to a custom data generator and data checker module.
- The CMPT interface is connected to the Completion block generator.
- The data generator and checker work only with a predefined pattern, which is a 16-bit incremental pattern starting with 0. This data file is included in the driver package.
.. image:: /images/Example_Design_1.PNG
:align: center
The pattern generator and checker can be controlled using the registers found in the Example Design Registers. Refer to the QDMA Product Guide for more details on these registers.
====================
MM H2C(Host-to-Card)
====================
This example design provides BRAM with an AXI-MM interface to achieve the MM H2C functionality.
The current driver, with the dmactl tool and the ``dma_to_device`` application, helps achieve the MM H2C functionality, and the QDMA driver takes care of the HW updates.
The complete flow between the Host components and HW components is depicted in the sequence diagram below, followed by an example command sequence.

- User needs to start the queue in MM mode and H2C direction
- Pass the buffer to be transferred as an input to the ``dma_to_device`` application, which in turn passes it to the QDMA Driver
- QDMA driver divides the buffer into 4KB chunks per descriptor, programs the descriptors with the buffer base address, and updates the H2C ring PIDX
- Upon H2C ring PIDX update, the DMA engine fetches the descriptors and passes them to the H2C MM Engine for processing
- H2C MM Engine reads the buffer contents from the Host and writes to the BRAM
- Upon transfer completion, the DMA Engine updates the CIDX in the H2C ring completion status and generates an interrupt if required. In poll mode, the QDMA driver polls on the CIDX update.
- QDMA driver processes the completion status and sends the response back to the application
.. image:: /images/MM_H2C_Flow.PNG
:align: center
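A minimal MM H2C session sketch. The device name, queue index and transfer size are illustrative; consult the dmactl user guide for the exact options::

    [xilinx@]# dmactl qdma01000 q add idx 0 mode mm dir h2c
    [xilinx@]# dmactl qdma01000 q start idx 0 dir h2c
    [xilinx@]# dma_to_device -d /dev/qdma01000-MM-0 -s 4096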
====================
MM C2H(Card-to-Host)
====================
This example design provides BRAM with an AXI-MM interface to achieve the MM C2H functionality.
The current driver, with the dmactl tool and the ``dma_from_device`` application, helps achieve the MM C2H functionality, and the QDMA driver takes care of the HW updates.
The complete flow between the Host components and HW components is depicted in the sequence diagram below, followed by an example command sequence.

- User needs to start the queue in MM mode and C2H direction
- Pass the buffer to copy the data into as an input to the ``dma_from_device`` application, which in turn passes it to the QDMA Driver
- QDMA driver programs the required number of descriptors based on the length of the requested buffer in multiples of 4KB chunks, programs the descriptors with the buffer base address, and updates the C2H ring PIDX
- Upon C2H ring PIDX update, the DMA engine fetches the descriptors and passes them to the C2H MM Engine for processing
- C2H MM Engine reads the BRAM contents and writes to the Host buffers
- Upon transfer completion, the DMA Engine updates the PIDX in the C2H ring completion status and generates an interrupt if required. In poll mode, the QDMA driver polls on the PIDX update.
- QDMA driver processes the completion status and sends the response back to the application with the data received.
.. image:: /images/MM_C2H_Flow.PNG
:align: center
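A minimal MM C2H session sketch, with the same illustrative naming as the MM H2C example above::

    [xilinx@]# dmactl qdma01000 q add idx 0 mode mm dir c2h
    [xilinx@]# dmactl qdma01000 q start idx 0 dir c2h
    [xilinx@]# dma_from_device -d /dev/qdma01000-MM-0 -s 4096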
====================
ST H2C(Host-to-Card)
====================
In ST H2C, data is moved from the Host to the Device through the H2C stream engine. The H2C stream engine moves data from the Host to the H2C Stream interface. The engine is
responsible for breaking up DMA reads to MRRS size, guaranteeing the space for completions,
and making sure completions are reordered to ensure H2C stream data is delivered to user
logic in order. The engine has sufficient buffering for up to 256 DMA reads and up to 32 KB of data. The DMA
fetches the data and aligns it to the first byte to transfer on the AXI4 interface side. This allows
every descriptor to have a random offset and random length. The total length of all descriptors put
together must be less than 64 KB.

There is no dependency on user logic for this use case.
The complete flow between the Host components and HW components is depicted in the sequence diagram below, followed by an example command sequence.

- User needs to start the queue in ST mode and H2C direction
- Pass the buffer to be transferred as an input to the ``dma_to_device`` application, which in turn passes it to the QDMA Driver
- QDMA driver divides the buffer into 4KB chunks per descriptor, programs the descriptors with the buffer base address, and updates the H2C ring PIDX
- Upon H2C ring PIDX update, the DMA engine fetches the descriptors and passes them to the H2C Stream Engine for processing
- H2C Stream Engine reads the buffer contents from the Host and buffers the data
- Upon transfer completion, the DMA Engine updates the CIDX in the H2C ring completion status and generates an interrupt if required. In poll mode, the QDMA driver polls on the CIDX update.
- QDMA driver processes the completion status and sends the response back to the application
.. image:: /images/ST_H2C_Flow.PNG
:align: center
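A minimal ST H2C session sketch (illustrative values, as in the MM examples above)::

    [xilinx@]# dmactl qdma01000 q add idx 0 mode st dir h2c
    [xilinx@]# dmactl qdma01000 q start idx 0 dir h2c
    [xilinx@]# dma_to_device -d /dev/qdma01000-ST-0 -s 4096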
====================
ST C2H(Card-to-Host)
====================
In ST C2H, data is moved from the DMA Device to the Host through the C2H Stream Engine.
The C2H streaming engine is responsible for receiving data from the user logic and writing it to the
Host memory address provided by the C2H descriptor for a given queue.
The C2H Stream Engine DMA writes the stream packets to the Host memory into the descriptors
provided by the Host QDMA driver through the C2H descriptor queue.
The C2H engine has two major blocks to accomplish C2H streaming DMA,
- Descriptor Prefetch Cache (PFCH)
- C2H-ST DMA Write Engine
The QDMA Driver needs to program the prefetch context along with the per-queue context to achieve the ST C2H functionality.
The Prefetch Engine is responsible for calculating the number of descriptors needed for the DMA
that is writing the packet. The buffer size is fixed on a per-queue basis. For internal and cached bypass
mode, the prefetch module can fetch up to 512 descriptors for a maximum of 64 different
queues at any given time.
The Completion Engine is used to write to the Completion queues. When used with a DMA engine, the
completion is used by the driver to determine how many bytes of data were transferred with
every packet. This allows the driver to reclaim the descriptors.
The PFCH cache has three main modes, on a per-queue basis, called
- Simple Bypass Mode
- Internal Cache Mode
- Cached Bypass Mode
While starting the queue in ST C2H mode using the ``dmactl`` tool, the user has configuration options to configure
the queue in any of these three modes.
The complete flow between the Host components and HW components is depicted in the sequence diagram below, followed by an example command sequence.
.. image:: /images/ST_C2H_Flow.PNG
:align: center
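A minimal ST C2H session sketch. The mode-selection options (internal, simple bypass, cached bypass) are passed when the queue is started and are omitted here; consult the dmactl user guide for the exact option names::

    [xilinx@]# dmactl qdma01000 q add idx 0 mode st dir c2h
    [xilinx@]# dmactl qdma01000 q start idx 0 dir c2h
    [xilinx@]# dma_from_device -d /dev/qdma01000-ST-0 -s 4096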
The current ST C2H functionality implemented in the QDMA driver is tightly coupled with the Example Design.
Though the completion entry descriptor as per the PG is fully configurable, this Example Design
mandates that the color, error and desc_used bits be in the first nibble.
The completion entry format is defined in the QDMA Driver code base in **libqdma/qdma_st_c2h.c**
::
    struct cmpl_info {
        /* cmpl entry stat bits */
        union {
            u8 fbits;
            struct cmpl_flag {
                u8 format:1;
                u8 color:1;
                u8 err:1;
                u8 desc_used:1;
                u8 eot:1;
                u8 filler:3;
            } f;
        };
        u8 rsvd;
        u16 len;
        /* for tracking */
        unsigned int pidx;
        __be64 *entry;
    };
The completion entry is processed in the ``parse_cmpl_entry()`` function, which is part of **libqdma/qdma_st_c2h.c**.
If a different example design is opted for, the QDMA driver code in **libqdma/qdma_st_c2h.h** and **libqdma/qdma_st_c2h.c** must be updated to suit the new example design.
.. _sys_req:
System Requirements
===================
Xilinx Accelerator Card
-----------------------
1. VCU1525
2. TULVU9P
Host Platform
-------------
1. x86_64
Host System Configuration
-------------------------
The latest Linux QDMA Driver release is verified on the following host system configuration for PF and VF functionality.
+--------------------------+-------------------------------------------------------------+
| Host System | Configuration Details |
+==========================+=============================================================+
| Operating System | Ubuntu 16.04.3 LTS |
+--------------------------+-------------------------------------------------------------+
| Linux Kernel | 4.4.0-93-generic |
+--------------------------+-------------------------------------------------------------+
| RAM | 32GB |
+--------------------------+-------------------------------------------------------------+
| Qemu Version | QEMU emulator version 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.15)|
+--------------------------+-------------------------------------------------------------+
Guest System Configuration
--------------------------
The latest Linux QDMA VF Driver release is verified on the following guest system configuration for VF functionality.
========================= ==================================
Guest System(VM) Configuration Details
========================= ==================================
Operating System Ubuntu 18.04 LTS
Linux Kernel 4.15.1-20-generic
RAM 4GB
Cores 4
========================= ==================================
Supported OS List
------------------
Linux QDMA Driver is also supported on the following OS and kernel versions.
+-------------------------+-------------+----------------+
| Operating System | OS Version | Kernel Version |
+=========================+=============+================+
| CentOS | 7.2.1511 | |
+-------------------------+-------------+----------------+
|Fedora |24 |4.5.5-300 |
| +-------------+----------------+
| |26 |4.11.8-300 |
| +-------------+----------------+
| |27 |4.13.9-300 |
| +-------------+----------------+
| |28 |4.16.3-301 |
+-------------------------+-------------+----------------+
|Ubuntu |16.04 |4.4.0-93 |
| +-------------+----------------+
| |17.10.1 |4.13.0-21 |
| +-------------+----------------+
| |18.04 |4.15.0-23 |
| +-------------+----------------+
| |14.04.5 |3.10.0 |
+-------------------------+-------------+----------------+
Supported Kernel.org Version List
---------------------------------
Linux QDMA Driver is verified on the following kernel.org versions.
+-------------------------+-----------------+
|Kernel.org | Kernel Version |
+=========================+=================+
| | 3.2.101 |
| +-----------------+
| | 3.18.108 |
| +-----------------+
| | 4.4.131 |
| +-----------------+
| | 4.14.40 |
| +-----------------+
| | 4.15.18 |
+-------------------------+-----------------+
The following kernel functions must be included in the OS kernel being used; make sure they are enabled in the kernel configuration.
- Timer Functions
- PCIe Functions
- Kernel Memory functions
- Kernel threads
- Memory and GFP Functions
User Applications
=================
.. include:: dmactl.rst
User Guide
==========
This section describes the details of controlling and configuring the QDMA IP.
System Level Configurations
---------------------------
QDMA driver provides a sysfs interface to enable the user to perform system-level configurations. The QDMA ``PF`` and ``VF`` drivers expose several ``sysfs`` nodes under the ``pci`` device root node.
Once the qdma module is inserted, and until any queue is added into the system (i.e., before FMAP programming is done), sysfs provides an interface to configure parameters for the module configuration.

::

    [xilinx@]# lspci | grep -i Xilinx
    01:00.0 Memory controller: Xilinx Corporation Device 903f
    01:00.1 Memory controller: Xilinx Corporation Device 913f
    01:00.2 Memory controller: Xilinx Corporation Device 923f
    01:00.3 Memory controller: Xilinx Corporation Device 933f

Based on the above lspci output, traverse to ``/sys/bus/pci/devices/<device node>/qdma`` to find the list of configurable parameters specific to the PF or VF driver.
1. **Instantiate the Virtual Functions**
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
QDMA IP supports 252 Virtual Functions (VFs). ``/sys/bus/pci/devices/<device node>`` provides two configurable entries:

- ``sriov_totalvfs`` : Indicates the maximum number of VFs supported for a PF. This is a read-only entry, configured during bitstream generation.
- ``sriov_numvfs`` : Enables the user to specify the number of VFs required for a PF

Display the currently supported max VFs::

    [xilinx@]# cat /sys/bus/pci/devices/0000:01:00.0/sriov_totalvfs

Instantiate the required number of VFs for a PF::

    [xilinx@]# echo 3 > /sys/bus/pci/devices/0000:01:00.0/sriov_numvfs

Once the VFs are instantiated, the required number of queues can be allocated to a VF using the ``qmax`` sysfs entry available in the VF at ``/sys/bus/pci/devices/<VF function number>/qdma/qmax``.
2. **Allocate the Queues to a function**
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
QDMA IP supports a maximum of 2048 queues. By default all the queues are equally distributed to the physical functions, so each function gets 512 queues.
The ``qmax`` configuration parameter enables the user to update the number of queues for a PF; it indicates the maximum number of queues associated with the current PF.
If the queue allocation needs to be different for any PF, access the qmax sysfs entry and set the required number.
Once the number of queues for any PF is changed from the default value, the remaining set of queues among the 2048 queues is evenly distributed to the remaining PFs.

Display the current value::

    [xilinx@]# cat /sys/bus/pci/devices/0000:01:00.0/qdma/qmax

Set a new value::

    [xilinx@]# echo 1024 > /sys/bus/pci/devices/0000:01:00.0/qdma/qmax

Ex: Default queue sets for all PFs::

    [xilinx@]# dmactl dev list
    qdma01000  0000:01:00.0  max QP: 449, 0~448
    qdma01001  0000:01:00.1  max QP: 449, 449~897
    qdma01002  0000:01:00.2  max QP: 449, 898~1346
    qdma01003  0000:01:00.3  max QP: 449, 1347~1795

    [xilinx@]# echo 1770 > /sys/bus/pci/devices/0000\:01\:00.0/qdma/qmax
    [xilinx@]# dmactl dev list
    qdma01000  0000:01:00.0  max QP: 1770, 0~1769
    qdma01001  0000:01:00.1  max QP: 8, 1770~1777
    qdma01002  0000:01:00.2  max QP: 8, 1778~1785
    qdma01003  0000:01:00.3  max QP: 8, 1786~1793

The ``qmax`` configuration parameter is available for virtual functions as well. Once ``qmax_vfs`` is configured, qmax for each VF can be updated from the pool of queues assigned to the VFs.
3. **Reserve the Queues to VFs**
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
QDMA IP supports 2048 queues, and all the queues are allocated to PFs by default. From the set of 2048 queues, the ``qmax_vfs`` configuration parameter enables the user to allocate a set of queues to the VFs.
This entry is available only for the Master PF.

Before instantiating the VFs, allocate the required number of queues for the VFs from the available pool. Assume that PF0 is the master PF.

Display the current value::

    [xilinx@]# cat /sys/bus/pci/devices/0000:81:00.0/qdma/qmax_vfs

Set a new value::

    [xilinx@]# echo 1024 > /sys/bus/pci/devices/0000:81:00.0/qdma/qmax_vfs
4. **Set Interrupt Ring Size**
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Interrupt ring size is associated with indirect interrupt mode.
When the module is inserted in indirect interrupt mode, by default the interrupt aggregation ring size is set to 0, i.e., 512 entries.
The user can configure the interrupt ring size in multiples of 512 entries by setting ``intr_rngsz`` to the corresponding multiplication factor:

| 0 - INTR_RING_SZ_4KB, Accommodates 512 entries
| 1 - INTR_RING_SZ_8KB, Accommodates 1024 entries
| 2 - INTR_RING_SZ_12KB, Accommodates 1536 entries
| 3 - INTR_RING_SZ_16KB, Accommodates 2048 entries
| 4 - INTR_RING_SZ_20KB, Accommodates 2560 entries
| 5 - INTR_RING_SZ_24KB, Accommodates 3072 entries
| 6 - INTR_RING_SZ_28KB, Accommodates 3584 entries
| 7 - INTR_RING_SZ_32KB, Accommodates 4096 entries

Display the current value::

    [xilinx@]# cat /sys/bus/pci/devices/0000:81:00.0/qdma/intr_rngsz

Set a new value::

    [xilinx@]# echo 2 > /sys/bus/pci/devices/0000:81:00.0/qdma/intr_rngsz
5. **Set Completion Interval**
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
``cmpt_intrvl`` indicates the interval at which completions are generated for an MM or H2C Stream queue running in non-bypass mode.
The user can set any of the following values for this configuration parameter:
| 3'h0: 4
| 3'h1: 8
| 3'h2: 16
| 3'h3: 32
| 3'h4: 64
| 3'h5: 128
| 3'h6: 256
| 3'h7: 512
The completion accumulation value is 4 x 2^(register bits [2:0]), as listed above; the maximum accumulation is 512.
Accumulation can be disabled via the queue context.

Display the current value::

    [xilinx@]# cat /sys/bus/pci/devices/0000:81:00.0/qdma/cmpt_intrvl

Set a new value::

    [xilinx@]# echo 2 > /sys/bus/pci/devices/0000:81:00.0/qdma/cmpt_intrvl
Queue Management
----------------
QDMA driver comes with a command-line configuration utility called ``dmactl`` to manage the queues in the system.
.fa:before{-webkit-font-smoothing:antialiased}.clearfix{*zoom:1}.clearfix:before,.clearfix:after{display:table;content:""}.clearfix:after{clear:both}@font-face{font-family:FontAwesome;font-weight:normal;font-style:normal;src:url("../font/fontawesome_webfont.eot");src:url("../font/fontawesome_webfont.eot?#iefix") format("embedded-opentype"),url("../font/fontawesome_webfont.woff") format("woff"),url("../font/fontawesome_webfont.ttf") format("truetype"),url("../font/fontawesome_webfont.svg#FontAwesome") format("svg")}.fa:before{display:inline-block;font-family:FontAwesome;font-style:normal;font-weight:normal;line-height:1;text-decoration:inherit}a .fa{display:inline-block;text-decoration:inherit}li .fa{display:inline-block}li .fa-large:before,li .fa-large:before{width:1.875em}ul.fas{list-style-type:none;margin-left:2em;text-indent:-0.8em}ul.fas li .fa{width:0.8em}ul.fas li .fa-large:before,ul.fas li .fa-large:before{vertical-align:baseline}.fa-book:before{content:""}.icon-book:before{content:""}.fa-caret-down:before{content:""}.icon-caret-down:before{content:""}.fa-caret-up:before{content:""}.icon-caret-up:before{content:""}.fa-caret-left:before{content:""}.icon-caret-left:before{content:""}.fa-caret-right:before{content:""}.icon-caret-right:before{content:""}.rst-versions{position:fixed;bottom:0;left:0;width:300px;color:#fcfcfc;background:#1f1d1d;border-top:solid 10px #343131;font-family:"Lato","proxima-nova","Helvetica Neue",Arial,sans-serif;z-index:400}.rst-versions a{color:#2980B9;text-decoration:none}.rst-versions .rst-badge-small{display:none}.rst-versions .rst-current-version{padding:12px;background-color:#272525;display:block;text-align:right;font-size:90%;cursor:pointer;color:#27AE60;*zoom:1}.rst-versions .rst-current-version:before,.rst-versions .rst-current-version:after{display:table;content:""}.rst-versions .rst-current-version:after{clear:both}.rst-versions .rst-current-version .fa{color:#fcfcfc}.rst-versions .rst-current-version .fa-book{float:left}.rst-versions .rst-current-version .icon-book{float:left}.rst-versions .rst-current-version.rst-out-of-date{background-color:#E74C3C;color:#fff}.rst-versions .rst-current-version.rst-active-old-version{background-color:#F1C40F;color:#000}.rst-versions.shift-up .rst-other-versions{display:block}.rst-versions .rst-other-versions{font-size:90%;padding:12px;color:gray;display:none}.rst-versions .rst-other-versions hr{display:block;height:1px;border:0;margin:20px 0;padding:0;border-top:solid 1px #413d3d}.rst-versions .rst-other-versions dd{display:inline-block;margin:0}.rst-versions .rst-other-versions dd a{display:inline-block;padding:6px;color:#fcfcfc}.rst-versions.rst-badge{width:auto;bottom:20px;right:20px;left:auto;border:none;max-width:300px}.rst-versions.rst-badge .icon-book{float:none}.rst-versions.rst-badge .fa-book{float:none}.rst-versions.rst-badge.shift-up .rst-current-version{text-align:right}.rst-versions.rst-badge.shift-up .rst-current-version .fa-book{float:left}.rst-versions.rst-badge.shift-up .rst-current-version .icon-book{float:left}.rst-versions.rst-badge .rst-current-version{width:auto;height:30px;line-height:30px;padding:0 6px;display:block;text-align:center}@media screen and (max-width: 768px){.rst-versions{width:85%;display:none}.rst-versions.shift{display:block}img{width:100%;height:auto}}
/*# sourceMappingURL=badge_only.css.map */
/*
* doctools.js
* ~~~~~~~~~~~
*
* Sphinx JavaScript utilities for all documentation.
*
* :copyright: Copyright 2007-2018 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
/**
* select a different prefix for underscore
*/
$u = _.noConflict();
/**
* make the code below compatible with browsers without
* an installed firebug like debugger
if (!window.console || !console.firebug) {
var names = ["log", "debug", "info", "warn", "error", "assert", "dir",
"dirxml", "group", "groupEnd", "time", "timeEnd", "count", "trace",
"profile", "profileEnd"];
window.console = {};
for (var i = 0; i < names.length; ++i)
window.console[names[i]] = function() {};
}
*/
/**
* small helper function to urldecode strings
*/
jQuery.urldecode = function(x) {
return decodeURIComponent(x).replace(/\+/g, ' ');
};
/**
* small helper function to urlencode strings
*/
jQuery.urlencode = encodeURIComponent;
/**
* This function returns the parsed url parameters of the
* current request. Multiple values per key are supported,
* it will always return arrays of strings for the value parts.
*/
jQuery.getQueryParameters = function(s) {
if (typeof s === 'undefined')
s = document.location.search;
var parts = s.substr(s.indexOf('?') + 1).split('&');
var result = {};
for (var i = 0; i < parts.length; i++) {
var tmp = parts[i].split('=', 2);
var key = jQuery.urldecode(tmp[0]);
var value = jQuery.urldecode(tmp[1]);
if (key in result)
result[key].push(value);
else
result[key] = [value];
}
return result;
};
/**
* highlight a given string on a jquery object by wrapping it in
* span elements with the given class name.
*/
jQuery.fn.highlightText = function(text, className) {
function highlight(node, addItems) {
if (node.nodeType === 3) {
var val = node.nodeValue;
var pos = val.toLowerCase().indexOf(text);
if (pos >= 0 &&
!jQuery(node.parentNode).hasClass(className) &&
!jQuery(node.parentNode).hasClass("nohighlight")) {
var span;
var isInSVG = jQuery(node).closest("body, svg, foreignObject").is("svg");
if (isInSVG) {
span = document.createElementNS("http://www.w3.org/2000/svg", "tspan");
} else {
span = document.createElement("span");
span.className = className;
}
span.appendChild(document.createTextNode(val.substr(pos, text.length)));
node.parentNode.insertBefore(span, node.parentNode.insertBefore(
document.createTextNode(val.substr(pos + text.length)),
node.nextSibling));
node.nodeValue = val.substr(0, pos);
if (isInSVG) {
var bbox = span.getBBox();
var rect = document.createElementNS("http://www.w3.org/2000/svg", "rect");
rect.x.baseVal.value = bbox.x;
rect.y.baseVal.value = bbox.y;
rect.width.baseVal.value = bbox.width;
rect.height.baseVal.value = bbox.height;
rect.setAttribute('class', className);
var parentOfText = node.parentNode.parentNode;
addItems.push({
"parent": node.parentNode,
"target": rect});
}
}
}
else if (!jQuery(node).is("button, select, textarea")) {
jQuery.each(node.childNodes, function() {
highlight(this, addItems);
});
}
}
var addItems = [];
var result = this.each(function() {
highlight(this, addItems);
});
for (var i = 0; i < addItems.length; ++i) {
jQuery(addItems[i].parent).before(addItems[i].target);
}
return result;
};
/*
* backward compatibility for jQuery.browser
* This will be supported until firefox bug is fixed.
*/
if (!jQuery.browser) {
jQuery.uaMatch = function(ua) {
ua = ua.toLowerCase();
var match = /(chrome)[ \/]([\w.]+)/.exec(ua) ||
/(webkit)[ \/]([\w.]+)/.exec(ua) ||
/(opera)(?:.*version|)[ \/]([\w.]+)/.exec(ua) ||
/(msie) ([\w.]+)/.exec(ua) ||
ua.indexOf("compatible") < 0 && /(mozilla)(?:.*? rv:([\w.]+)|)/.exec(ua) ||
[];
return {
browser: match[ 1 ] || "",
version: match[ 2 ] || "0"
};
};
jQuery.browser = {};
jQuery.browser[jQuery.uaMatch(navigator.userAgent).browser] = true;
}
/**
* Small JavaScript module for the documentation.
*/
var Documentation = {
init : function() {
this.fixFirefoxAnchorBug();
this.highlightSearchWords();
this.initIndexTable();
if (DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) {
this.initOnKeyListeners();
}
},
/**
* i18n support
*/
TRANSLATIONS : {},
PLURAL_EXPR : function(n) { return n === 1 ? 0 : 1; },
LOCALE : 'unknown',
// gettext and ngettext don't access this so that the functions
// can safely bound to a different name (_ = Documentation.gettext)
gettext : function(string) {
var translated = Documentation.TRANSLATIONS[string];
if (typeof translated === 'undefined')
return string;
return (typeof translated === 'string') ? translated : translated[0];
},
ngettext : function(singular, plural, n) {
var translated = Documentation.TRANSLATIONS[singular];
if (typeof translated === 'undefined')
return (n == 1) ? singular : plural;
return translated[Documentation.PLURALEXPR(n)];
},
addTranslations : function(catalog) {
for (var key in catalog.messages)
this.TRANSLATIONS[key] = catalog.messages[key];
this.PLURAL_EXPR = new Function('n', 'return +(' + catalog.plural_expr + ')');
this.LOCALE = catalog.locale;
},
/**
* add context elements like header anchor links
*/
addContextElements : function() {
$('div[id] > :header:first').each(function() {
$('<a class="headerlink">\u00B6</a>').
attr('href', '#' + this.id).
attr('title', _('Permalink to this headline')).
appendTo(this);
});
$('dt[id]').each(function() {
$('<a class="headerlink">\u00B6</a>').
attr('href', '#' + this.id).
attr('title', _('Permalink to this definition')).
appendTo(this);
});
},
/**
* workaround a firefox stupidity
* see: https://bugzilla.mozilla.org/show_bug.cgi?id=645075
*/
fixFirefoxAnchorBug : function() {
if (document.location.hash && $.browser.mozilla)
window.setTimeout(function() {
document.location.href += '';
}, 10);
},
/**
* highlight the search words provided in the url in the text
*/
highlightSearchWords : function() {
var params = $.getQueryParameters();
var terms = (params.highlight) ? params.highlight[0].split(/\s+/) : [];
if (terms.length) {
var body = $('div.body');
if (!body.length) {
body = $('body');
}
window.setTimeout(function() {
$.each(terms, function() {
body.highlightText(this.toLowerCase(), 'highlighted');
});
}, 10);
$('<p class="highlight-link"><a href="javascript:Documentation.' +
'hideSearchWords()">' + _('Hide Search Matches') + '</a></p>')
.appendTo($('#searchbox'));
}
},
/**
* init the domain index toggle buttons
*/
initIndexTable : function() {
var togglers = $('img.toggler').click(function() {
var src = $(this).attr('src');
var idnum = $(this).attr('id').substr(7);
$('tr.cg-' + idnum).toggle();
if (src.substr(-9) === 'minus.png')
$(this).attr('src', src.substr(0, src.length-9) + 'plus.png');
else
$(this).attr('src', src.substr(0, src.length-8) + 'minus.png');
}).css('display', '');
if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) {
togglers.click();
}
},
/**
* helper function to hide the search marks again
*/
hideSearchWords : function() {
$('#searchbox .highlight-link').fadeOut(300);
$('span.highlighted').removeClass('highlighted');
},
/**
* make the url absolute
*/
makeURL : function(relativeURL) {
return DOCUMENTATION_OPTIONS.URL_ROOT + '/' + relativeURL;
},
/**
* get the current relative url
*/
getCurrentURL : function() {
var path = document.location.pathname;
var parts = path.split(/\//);
$.each(DOCUMENTATION_OPTIONS.URL_ROOT.split(/\//), function() {
if (this === '..')
parts.pop();
});
var url = parts.join('/');
return path.substring(url.lastIndexOf('/') + 1, path.length - 1);
},
initOnKeyListeners: function() {
$(document).keyup(function(event) {
var activeElementType = document.activeElement.tagName;
// don't navigate when in search box or textarea
if (activeElementType !== 'TEXTAREA' && activeElementType !== 'INPUT' && activeElementType !== 'SELECT') {
switch (event.keyCode) {
case 37: // left
var prevHref = $('link[rel="prev"]').prop('href');
if (prevHref) {
window.location.href = prevHref;
return false;
}
case 39: // right
var nextHref = $('link[rel="next"]').prop('href');
if (nextHref) {
window.location.href = nextHref;
return false;
}
}
}
});
}
};
// quick alias for translations
_ = Documentation.gettext;
$(document).ready(function() {
Documentation.init();
});
require=(function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){var a=typeof require=="function"&&require;if(!u&&a)return a(o,!0);if(i)return i(o,!0);var f=new Error("Cannot find module '"+o+"'");throw f.code="MODULE_NOT_FOUND",f}var l=n[o]={exports:{}};t[o][0].call(l.exports,function(e){var n=t[o][1][e];return s(n?n:e)},l,l.exports,e,t,n,r)}return n[o].exports}var i=typeof require=="function"&&require;for(var o=0;o<r.length;o++)s(r[o]);return s})({"sphinx-rtd-theme":[function(require,module,exports){
var jQuery = (typeof(window) != 'undefined') ? window.jQuery : require('jquery');
// Sphinx theme nav state
function ThemeNav () {
var nav = {
navBar: null,
win: null,
winScroll: false,
winResize: false,
linkScroll: false,
winPosition: 0,
winHeight: null,
docHeight: null,
isRunning: null
};
nav.enable = function () {
var self = this;
jQuery(function ($) {
self.init($);
self.reset();
self.win.on('hashchange', self.reset);
// Set scroll monitor
self.win.on('scroll', function () {
if (!self.linkScroll) {
self.winScroll = true;
}
});
setInterval(function () { if (self.winScroll) self.onScroll(); }, 25);
// Set resize monitor
self.win.on('resize', function () {
self.winResize = true;
});
setInterval(function () { if (self.winResize) self.onResize(); }, 25);
self.onResize();
});
};
nav.init = function ($) {
var doc = $(document),
self = this;
this.navBar = $('div.wy-side-scroll:first');
this.win = $(window);
// Set up javascript UX bits
$(document)
// Shift nav in mobile when clicking the menu.
.on('click', "[data-toggle='wy-nav-top']", function() {
$("[data-toggle='wy-nav-shift']").toggleClass("shift");
$("[data-toggle='rst-versions']").toggleClass("shift");
})
// Nav menu link click operations
.on('click', ".wy-menu-vertical .current ul li a", function() {
var target = $(this);
// Close menu when you click a link.
$("[data-toggle='wy-nav-shift']").removeClass("shift");
$("[data-toggle='rst-versions']").toggleClass("shift");
// Handle dynamic display of l3 and l4 nav lists
self.toggleCurrent(target);
self.hashChange();
})
.on('click', "[data-toggle='rst-current-version']", function() {
$("[data-toggle='rst-versions']").toggleClass("shift-up");
})
// Make tables responsive
$("table.docutils:not(.field-list)")
.wrap("<div class='wy-table-responsive'></div>");
// Add expand links to all parents of nested ul
$('.wy-menu-vertical ul').not('.simple').siblings('a').each(function () {
var link = $(this);
expand = $('<span class="toctree-expand"></span>');
expand.on('click', function (ev) {
self.toggleCurrent(link);
ev.stopPropagation();
return false;
});
link.prepend(expand);
});
};
nav.reset = function () {
// Get anchor from URL and open up nested nav
var anchor = encodeURI(window.location.hash);
if (anchor) {
try {
var link = $('.wy-menu-vertical')
.find('[href="' + anchor + '"]');
$('.wy-menu-vertical li.toctree-l1 li.current')
.removeClass('current');
link.closest('li.toctree-l2').addClass('current');
link.closest('li.toctree-l3').addClass('current');
link.closest('li.toctree-l4').addClass('current');
}
catch (err) {
console.log("Error expanding nav for anchor", err);
}
}
};
nav.onScroll = function () {
this.winScroll = false;
var newWinPosition = this.win.scrollTop(),
winBottom = newWinPosition + this.winHeight,
navPosition = this.navBar.scrollTop(),
newNavPosition = navPosition + (newWinPosition - this.winPosition);
if (newWinPosition < 0 || winBottom > this.docHeight) {
return;
}
this.navBar.scrollTop(newNavPosition);
this.winPosition = newWinPosition;
};
nav.onResize = function () {
this.winResize = false;
this.winHeight = this.win.height();
this.docHeight = $(document).height();
};
nav.hashChange = function () {
this.linkScroll = true;
this.win.one('hashchange', function () {
this.linkScroll = false;
});
};
nav.toggleCurrent = function (elem) {
var parent_li = elem.closest('li');
parent_li.siblings('li.current').removeClass('current');
parent_li.siblings().find('li.current').removeClass('current');
parent_li.find('> ul li.current').removeClass('current');
parent_li.toggleClass('current');
}
return nav;
};
module.exports.ThemeNav = ThemeNav();
if (typeof(window) != 'undefined') {
window.SphinxRtdTheme = { StickyNav: module.exports.ThemeNav };
}
},{"jquery":"jquery"}]},{},["sphinx-rtd-theme"]);
.highlight .hll { background-color: #ffffcc }
.highlight { background: #ffffff; }