How does the XDMA engine (Xilinx PCI Express DMA 4.1, UltraScale+ KCU116 board) know the size of the transfer? I have a 64-bit (8-byte-wide) system that does 64 transfers, i.e. 512 bytes should be sent in one packet. I assert TLAST on the last beat. See the ILA picture:
I issue the command:
./dma_from_device -v -c 1 -s 512 -f output.dat
However, the driver consistently complains that it receives only 1 byte:
dev /dev/xdma0_c2h_0, addr 0x0, size 0x200, offset 0x0, count 1
host buffer 0x1200, 0x556900049000.
/dev/xdma0_c2h_0, R off 0x0, 0x1 != 0x200.
read file: Success
#0: CLOCK_MONOTONIC 0.005002687 sec. read 512 bytes
** Avg time device /dev/xdma0_c2h_0, total time 5002687 nsec, avg_time = 5002687.000000, size = 512, BW = 0.102345
** Average BW = 512, 0.102345
Where does the value of 1 byte come from? I had assumed the engine counted beats until TLAST, which, as the ILA shows, should give 512 bytes. I have tried sizes from one word (8 bytes) up to much larger transfers and cannot get the reported size to match. However, with loopback (H2C connected to C2H) it works. Is there sideband information that must be set somewhere? I have looked at the status registers, but they are all clear (no errors). I have attached the dmesg logs for both the example design with loopback and my design with an AXI-Stream master core connected to the XDMA.