1. 23 Oct, 2020 3 commits
  2. 07 Oct, 2020 4 commits
  3. 28 Sep, 2020 1 commit
  4. 22 Sep, 2020 4 commits
  5. 15 Sep, 2020 1 commit
    • libxdma: fix next adjacent descriptors · f1e834be
      Julien authored
      Fix the setting of the next adjacent fields in descriptors.
      
      Since commit 5faf23ec, the next_adj field of every descriptor has
      been set according to the descriptor's index rather than its address,
      which causes issues when dma_alloc_coherent does not return a
      page-aligned address (which happens).
      Moreover, for a transfer whose number of descriptors exceeds a full
      page, the next_adj field is set to the maximum (63) for all
      descriptors until the last page of descriptors, where it starts
      decreasing.
      Finally, even before that commit, the next_adj field inside a block
      of adjacent descriptors did not decrease until near the end of the
      page, which does not comply with the documentation:
      
      "Every descriptor in the descriptor list must accurately describe the descriptor
      or block ofdescriptors that follows. In a block of adjacent descriptors, the
      Nxt_adj value decrements from the first descriptor to the second to last
      descriptor which has a value of zero. Likewise, eachdescriptor in the block
      points to the next descriptor in the block, except for the last descriptor
      which might point to a new block or might terminate the list."
      
      This commit aligns the blocks of adjacent descriptors to
      XDMA_MAX_ADJ_BLOCK_SIZE and makes the next_adj field decrease inside
      each block until the second-to-last descriptor of the block or of the
      full transfer. The page size being a multiple of the block size
      (4096 = sizeof(xdma_desc) * 128 = sizeof(xdma_desc) * 2 *
      XDMA_MAX_ADJ_BLOCK_SIZE), an aligned block never crosses a page
      boundary.
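      A minimal sketch of the next_adj computation this message describes,
      assuming a block size of 64 (next_adj caps at 63); the helper name
      next_adj_for is invented for illustration and is not the driver's
      actual code:

      #include <linux/kernel.h>
      #include <linux/types.h>

      #define XDMA_MAX_ADJ_BLOCK_SIZE 64      /* next_adj is at most 63 */

      /* next_adj for descriptor i of a desc_count-long transfer: it
       * decrements to zero at the second-to-last descriptor of each
       * aligned block and of the whole transfer. The last descriptor of
       * a block conservatively gets 0 in this sketch; per the quoted
       * documentation it could instead advertise the following block. */
      static u32 next_adj_for(unsigned int i, unsigned int desc_count)
      {
              unsigned int pos = i % XDMA_MAX_ADJ_BLOCK_SIZE;
              unsigned int block_left = (pos + 1 < XDMA_MAX_ADJ_BLOCK_SIZE)
                              ? XDMA_MAX_ADJ_BLOCK_SIZE - 2 - pos : 0;
              unsigned int xfer_left = (i + 1 < desc_count)
                              ? desc_count - 2 - i : 0;

              return min(block_left, xfer_left);
      }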
  6. 08 Sep, 2020 1 commit
  7. 01 Sep, 2020 2 commits
  8. 25 Aug, 2020 5 commits
  9. 19 Aug, 2020 1 commit
    • XDMA: Mark engine as not running when stopping after timeouts · cf3611e0
      Jessica Clarke authored
      Unlike engine_start, which sets the engine's running field itself,
      engine_stop instead requires its callers to clear the field. However,
      not all of them do, and notably the timeout handlers do not, meaning
      that after a request times out and we stop the engine we never start it
      again, causing all future transfers on the channel to hit the software
      timeout.
      
      Instead, set running to 0 inside xdma_engine_stop to mirror
      engine_start, which both fixes the bug and prevents future ones from
      creeping in.
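      A sketch of the fix as described: clear running inside the stop path,
      mirroring how engine_start sets it. The struct layout and the elided
      register write are assumptions:

      struct xdma_engine {
              int running;
              /* hardware registers, locks, etc. elided */
      };

      static void xdma_engine_stop(struct xdma_engine *engine)
      {
              /* ... write the engine's control register to stop it ... */

              /* Clear running here rather than in each caller, so the
               * timeout handlers also leave the engine restartable. */
              engine->running = 0;
      }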
  10. 18 Aug, 2020 1 commit
  11. 12 Aug, 2020 1 commit
  12. 24 Jul, 2020 1 commit
  13. 30 Jun, 2020 3 commits
  14. 23 Jun, 2020 1 commit
  15. 22 Jun, 2020 2 commits
  16. 16 Jun, 2020 1 commit
  17. 09 Jun, 2020 1 commit
    • xdma_thread: fix cpu node bug · 7fc246ee
      Julien authored
      The number of threads is arbitrarily set to 8, and we iterate over 8
      CPUs without knowing whether 8 CPUs actually exist. On machines with
      fewer CPUs this calls cpu_to_node with an out-of-bounds index, which
      returns an undefined value and later crashes the call to
      kthread_create_on_node.
      Fix this by iterating over the online CPUs, stopping once the
      specified number of threads has been created.
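      A sketch of the described iteration, assuming a start_threads helper
      (invented for illustration) that binds one kthread per online CPU up
      to thread_count:

      #include <linux/cpumask.h>
      #include <linux/err.h>
      #include <linux/kthread.h>
      #include <linux/sched.h>
      #include <linux/topology.h>

      static int start_threads(unsigned int thread_count,
                               int (*fn)(void *), void *data)
      {
              unsigned int cpu, created = 0;

              /* Only visit CPUs that exist, so cpu_to_node() never sees
               * an out-of-bounds index. */
              for_each_online_cpu(cpu) {
                      struct task_struct *t;

                      if (created >= thread_count)
                              break;

                      t = kthread_create_on_node(fn, data, cpu_to_node(cpu),
                                                 "xdma_thrd/%u", created);
                      if (IS_ERR(t))
                              return PTR_ERR(t);
                      kthread_bind(t, cpu);
                      wake_up_process(t);
                      created++;
              }
              return created;
      }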
  18. 19 May, 2020 1 commit
  19. 27 Feb, 2020 3 commits
  20. 24 Feb, 2020 1 commit
    • Fixes a design error revealed by running on aarch64. · d1f334b1
      Bryce Hathaway authored
      Simply put, you cannot call wait_event_interruptible_timeout while
      holding a spinlock. The driver got away with this on x86 only
      because the hardware is fast enough that the wait condition is never
      false, so no scheduling ever needs to occur.
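      A sketch of the constraint, with the struct and field names (lock,
      wq, done) assumed for illustration: release the spinlock before any
      call that can sleep, then re-take it afterwards:

      #include <linux/jiffies.h>
      #include <linux/spinlock.h>
      #include <linux/wait.h>

      struct xfer_state {
              spinlock_t lock;
              wait_queue_head_t wq;
              bool done;
      };

      static long wait_for_xfer(struct xfer_state *x, unsigned long ms)
      {
              long rv;

              /* Broken: wait_event_interruptible_timeout() may schedule,
               * which is illegal while holding a spinlock:
               *
               *   spin_lock(&x->lock);
               *   rv = wait_event_interruptible_timeout(x->wq, x->done, ...);
               *   spin_unlock(&x->lock);
               */

              /* Sleep without the lock held ... */
              rv = wait_event_interruptible_timeout(x->wq, x->done,
                                                    msecs_to_jiffies(ms));

              /* ... and only then take the lock to inspect state. */
              spin_lock(&x->lock);
              /* e.g. check completion status, collect results */
              spin_unlock(&x->lock);

              return rv;
      }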
  21. 03 Feb, 2020 1 commit
  22. 21 Jan, 2020 1 commit