07-04-2019 07:29 AM
Hello,
I am trying to build an AXI-Stream based co-processor. I want to accelerate a parallelizable algorithm, and I have successfully implemented and tested my custom Verilog-based AXI-Stream IP with a typical DMA-plus-interrupt setup through a single HP port. Now I want to increase parallelism in the co-processor by adding more AXI-Streams. I have found that there are multiple ways to do this:
1. Use multiple DMAs
2. Use the Multi-Channel DMA IP (MCDMA)
My first question: is there a difference in performance between the two implementations, say 2 DMAs versus a 2-channel MCDMA?
I have also read through this presentation about co-accelerator architecture.
The presentation recommends using one HP port and 2 DMAs to feed into 2 accelerators. My questions are:
1 HP port for multiple DMAs, or an HP port for each DMA?
Why, and what's the difference?
07-08-2019 10:02 AM
Hi @atmosfir,
If you are trying to maximize bandwidth, you are better off using AXI DMA over AXI MCDMA. The reason is that while MCDMA has a nicer register interface to support multiple channels, those multiple channels all share a single stream/memory-mapped AXI interface. If you use two AXI DMAs, each channel has a dedicated stream/memory-mapped interface.
Similarly, if you need a lot of bandwidth, you should dedicate AXI_DMA0 to AXI_HP0 and AXI_DMA1 to AXI_HP1. This way you have dedicated parallel channels for each AXI DMA.
Regards,
Deanna
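The bandwidth reasoning above can be sketched numerically. The interface width and clock rate below are purely illustrative assumptions (not figures from this thread): the point is only that MCDMA channels time-share one interface while separate AXI DMAs each own one.

```python
# Back-of-envelope comparison of 2-channel MCDMA vs 2x AXI DMA.
# Assumption: a 128-bit AXI interface at 150 MHz (hypothetical numbers).

def interface_bw_bytes_per_s(width_bits: int, clock_hz: int) -> int:
    """Peak throughput of one AXI stream/memory-mapped interface."""
    return (width_bits // 8) * clock_hz

PEAK = interface_bw_bytes_per_s(128, 150_000_000)  # bytes/s per interface

# MCDMA: two channels share ONE stream/memory-mapped interface,
# so each channel sees at best half the peak.
mcdma_per_channel = PEAK / 2

# 2x AXI DMA: each channel has its own dedicated interface.
axidma_per_channel = PEAK

print(f"MCDMA, per channel  : {mcdma_per_channel / 1e9:.2f} GB/s")
print(f"2x AXI DMA, per ch. : {axidma_per_channel / 1e9:.2f} GB/s")
```

Under these assumed numbers the dedicated-DMA topology gives each channel twice the per-channel peak of the shared MCDMA interface.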
07-23-2019 01:35 AM
Hello, thank you for your answer. I have a follow-up question: can you please elaborate on what you mean by "dedicating" each HP port to each DMA? Thank you again.
07-23-2019 08:39 AM
Hi @atmosfir ,
Let's say you have two AXI DMA IPs in your system and they both need to get to PS DRAM. You can choose to have those two AXI DMAs go through a SmartConnect and share S_AXI_HP0_FPD.
But let's say you have analyzed the bandwidth needs of your AXI DMAs and don't think a single S_AXI port to PS DDR can meet your requirements. You could instead choose to have AXI DMA 0 hook up to S_AXI_HP0_FPD and AXI DMA 1 hook up to S_AXI_HP1_FPD.
Regards,
Deanna
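The decision Deanna describes (share one HP port behind a SmartConnect vs. dedicate a port per DMA) can be sketched as a toy rule of thumb. The function name and the bandwidth figures are illustrative assumptions, not a Xilinx API.

```python
# Toy helper for the shared-vs-dedicated HP port decision.
# All numbers are hypothetical; real analysis must also account for
# burst efficiency, DDR controller arbitration, and read/write mix.

def hp_ports_needed(dma_demands_bytes_per_s, hp_port_capacity_bytes_per_s):
    """Decide whether the DMAs can share one S_AXI_HP port."""
    total = sum(dma_demands_bytes_per_s)
    if total <= hp_port_capacity_bytes_per_s:
        return "share one HP port behind a SmartConnect"
    return "dedicate one HP port per DMA"

# Two DMAs each needing ~1.5 GB/s against a ~2.4 GB/s HP port:
print(hp_ports_needed([1.5e9, 1.5e9], 2.4e9))
# Two lighter DMAs easily fit behind a single shared port:
print(hp_ports_needed([0.5e9, 0.5e9], 2.4e9))
```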
07-23-2019 08:34 PM
Hi @demarco
Thank you for the timely answer. Is this possible with Zynq-7000 SoCs, which do not have HP_FPD ports?
And can you confirm that this is the architecture you are pointing to?
Thank you so much for your time.
07-24-2019 08:12 AM
Hi @atmosfir,
Yes, the same reasoning would apply to Zynq-7000 and its S_AXI_HP ports. Your block diagram implements the "I need more bandwidth" scenario.
Regards,
Deanna
10-08-2020 09:57 PM
Thank you for your wonderful explanation!
Is it possible to connect multiple AXI_HP ports to a single AXI-Stream DMA? I have a stream DMA 512 bits wide going into my HLS IP, but each HP port is only 128 bits wide. How do I handle that situation?
Regards
Sanjaya MV
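The width mismatch in that last question comes down to simple arithmetic, sketched below. This is not an answer from the thread, just the numbers behind it, assuming an AXI interconnect/SmartConnect performs the width down-conversion.

```python
# 512-bit stream DMA vs 128-bit HP port: each wide word must be
# serialized into several narrower beats by a width converter.

STREAM_WIDTH_BITS = 512
HP_PORT_WIDTH_BITS = 128

beats_per_word = STREAM_WIDTH_BITS // HP_PORT_WIDTH_BITS
print(f"{beats_per_word} HP-port beats per 512-bit stream word")

# Consequence: at equal clock rates, one HP port sustains only
# 1/beats_per_word of the stream's peak rate. Options include
# clocking the narrow side faster or striping across several HP ports.
```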