1. 07 Oct, 2020 3 commits
  2. 28 Sep, 2020 1 commit
  3. 22 Sep, 2020 3 commits
  4. 15 Sep, 2020 1 commit
    • libxdma: fix next adjacent descriptors · f1e834be
      Julien authored
      Fix the setting of the next adjacent fields in descriptors.
      
      Since commit 5faf23ec, the next_adj field of every descriptor is set
      according to the descriptor's index rather than its address, which
      causes issues when dma_alloc_coherent does not return a page-aligned
      address (which happens).
      Moreover, for a transfer whose descriptors span more than a full
      page, the next_adj field is set to the maximum (63) for all
      descriptors until the last page of descriptors, where it starts
      decreasing.
      Last, even before that commit, the next_adj field inside a block of
      adjacent descriptors did not decrease until coming near the page end,
      which does not comply with the documentation:
      
      "Every descriptor in the descriptor list must accurately describe the descriptor
      or block ofdescriptors that follows. In a block of adjacent descriptors, the
      Nxt_adj value decrements from the first descriptor to the second to last
      descriptor which has a value of zero. Likewise, eachdescriptor in the block
      points to the next descriptor in the block, except for the last descriptor
      which might point to a new block or might terminate the list."
      
      This commit aligns the blocks of adjacent descriptors to
      XDMA_MAX_ADJ_BLOCK_SIZE and makes the next_adj field decrease inside
      each block until the second-to-last descriptor of the block or of the
      full transfer. Since the page size is a multiple of the block size
      (4096 = sizeof(xdma_desc) * 128 =
      sizeof(xdma_desc) * 2 * XDMA_MAX_ADJ_BLOCK_SIZE), an aligned block of
      adjacent descriptors never crosses a page boundary.
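      
      As an illustration of the scheme above, here is a minimal sketch of
      how next_adj can be computed for descriptor i of a transfer. This is
      not the driver's actual code: the function name is illustrative, the
      block size of 64 descriptors follows from the arithmetic above, and
      the field semantics follow the quoted documentation.
      
      #define XDMA_MAX_ADJ_BLOCK_SIZE 64 /* descriptors per block (assumed) */
      
      /* next_adj for descriptor `i` of a transfer of `desc_count`
       * descriptors, assuming the descriptor ring is aligned to
       * XDMA_MAX_ADJ_BLOCK_SIZE descriptors. */
      static unsigned int xdma_desc_next_adj(unsigned int i,
                                             unsigned int desc_count)
      {
              /* descriptors following `i`, capped at the current block */
              unsigned int in_xfer  = desc_count - 1 - i;
              unsigned int in_block = XDMA_MAX_ADJ_BLOCK_SIZE - 1 -
                                      (i % XDMA_MAX_ADJ_BLOCK_SIZE);
              unsigned int adj = in_xfer < in_block ? in_xfer : in_block;
      
              /* Nxt_adj decrements to zero at the second-to-last
               * descriptor of the block (or of the whole transfer); the
               * last descriptor also carries zero. */
              return adj ? adj - 1 : 0;
      }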
  5. 08 Sep, 2020 1 commit
  6. 01 Sep, 2020 2 commits
  7. 25 Aug, 2020 5 commits
  8. 19 Aug, 2020 1 commit
    • XDMA: Mark engine as not running when stopping after timeouts · cf3611e0
      Jessica Clarke authored
      Unlike engine_start, which sets the engine's running field itself,
      engine_stop instead requires its callers to clear the field. However,
      not all of them do, and notably the timeout handlers do not, meaning
      that after a request times out and we stop the engine, we never start
      it again, causing all future transfers on the channel to hit the
      software timeout.
      
      Instead, set running to 0 inside xdma_engine_stop, mirroring
      engine_start, both to fix the bug and to prevent future ones from
      creeping in.
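      
      A minimal sketch of the shape of the fix, with illustrative names
      (the real driver's struct layout and register writes differ):
      
      struct xdma_engine {
              int running;    /* 1 while the engine is started */
              /* ... registers, locks, wait queues ... */
      };
      
      static void xdma_engine_stop(struct xdma_engine *engine)
      {
              /* halt the hardware engine (control register write elided) */
      
              /* Mirror engine_start, which sets running itself: clearing
               * the flag here means no caller (e.g. a timeout handler)
               * can forget to do it and leave the engine marked running. */
              engine->running = 0;
      }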
  9. 18 Aug, 2020 1 commit
  10. 12 Aug, 2020 1 commit
  11. 24 Jul, 2020 1 commit
  12. 30 Jun, 2020 3 commits
  13. 23 Jun, 2020 1 commit
  14. 22 Jun, 2020 2 commits
  15. 16 Jun, 2020 1 commit
  16. 09 Jun, 2020 1 commit
    • xdma_thread: fix cpu node bug · 7fc246ee
      Julien authored
      The number of threads is arbitrarily set to 8 and we iterate over 8
      CPUs without knowing whether the system really has 8 CPUs. This
      causes cpu_to_node to be called with an out-of-bounds index and to
      return an undefined node number, which later crashes in
      kthread_create_on_node.
      Fix this by iterating over the online CPUs, stopping once the
      specified number of threads is reached.
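      
      A minimal sketch of the fixed loop, with illustrative identifiers
      (xdma_thread_fn, thread_cnt and xdma_threads_create are assumptions,
      not the driver's exact names). for_each_online_cpu() only yields CPUs
      that actually exist, so cpu_to_node() never sees an out-of-bounds
      index:
      
      #include <linux/cpumask.h>
      #include <linux/kthread.h>
      #include <linux/topology.h>
      
      static int xdma_thread_fn(void *data)
      {
              /* worker loop elided */
              return 0;
      }
      
      static void xdma_threads_create(unsigned int thread_cnt)
      {
              unsigned int cpu, i = 0;
      
              /* visit only online CPUs, creating at most thread_cnt threads */
              for_each_online_cpu(cpu) {
                      if (i >= thread_cnt)
                              break;
                      kthread_create_on_node(xdma_thread_fn, NULL,
                                             cpu_to_node(cpu),
                                             "xdma_io_%u", i);
                      i++;
              }
      }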
  17. 19 May, 2020 1 commit
  18. 27 Feb, 2020 3 commits
  19. 24 Feb, 2020 1 commit
    • Fixes a design error revealed by running on aarch64. · d1f334b1
      Bryce Hathaway authored
      Simply put, you cannot call wait_event_interruptible_timeout while
      holding a spin_lock. The driver got away with this on x86 only
      because the hardware happens to be fast enough that the condition is
      never false, so no scheduling ever needs to occur.
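      
      A minimal sketch of the pattern, with illustrative names (the real
      code paths differ): release the spinlock before sleeping in
      wait_event_interruptible_timeout and retake it afterwards.
      
      #include <linux/spinlock.h>
      #include <linux/wait.h>
      
      struct xdma_request {
              spinlock_t lock;
              wait_queue_head_t wq;
              int done;       /* set by the completion interrupt */
      };
      
      static long xdma_wait_done(struct xdma_request *req,
                                 unsigned long timeout)
      {
              unsigned long flags;
              long rv;
      
              spin_lock_irqsave(&req->lock, flags);
              /* ... inspect/update request state ... */
              /* wait_event_interruptible_timeout may sleep, so the lock
               * must be dropped before calling it */
              spin_unlock_irqrestore(&req->lock, flags);
      
              rv = wait_event_interruptible_timeout(req->wq, req->done,
                                                    timeout);
      
              spin_lock_irqsave(&req->lock, flags);
              /* ... re-check state now that the lock is held again ... */
              spin_unlock_irqrestore(&req->lock, flags);
      
              return rv;
      }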
  20. 03 Feb, 2020 1 commit
  21. 21 Jan, 2020 1 commit
  22. 20 Jan, 2020 1 commit
  23. 17 Jan, 2020 1 commit
  24. 03 Jan, 2020 1 commit
  25. 23 Dec, 2019 2 commits