ZynqUser
Visitor
Registered: 11-17-2020

Problem with AXI DataMover MM2S

Hello,

I've been trying to use the MM2S interface of the AXI DataMover to move data from the DDR to a stream. As you can see in the screenshot, the gap between the first and second packets is 2 cycles, but the third packet only starts 16 cycles after the end of the second. This pattern repeats itself, and I was wondering if someone knows why this is happening?
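(Rough numbers on my part, assuming each packet in the capture is a 16-beat burst as configured in the DataMover below: every two packets deliver 2 x 16 = 32 data beats but span roughly 32 + 2 + 16 = 50 clocks, so the stream only carries data about 64% of the time.)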

Below you will find attached screenshots of the DataMover configuration, a screenshot of the packets as seen in the ILA, and a screenshot of the issued commands.

Thank you

ILA : 

packets : 

ZynqUser_0-1605609640114.png

commands : 

ZynqUser_1-1605609732426.png

AXI datamover configuration : 

ZynqUser_2-1605609892300.png

ZynqUser_4-1605610019385.png
As you can see, I have xCACHE enabled, and its value is 1.
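For reference, this is roughly how I build each command (a minimal sketch from memory, not driver code; it assumes the PG022 layout for 32-bit addressing with the xCACHE/xUSER option enabled, and build_mm2s_cmd is just an illustrative helper name, so please double-check the bit positions against the product guide):

#include <stdint.h>

/* Assumed command layout (my reading of PG022, please verify):
 *   [22:0] BTT, [23] TYPE, [29:24] DSA, [30] EOF, [31] DRR,
 *   [63:32] SADDR, [67:64] TAG, [75:72] xCACHE, [79:76] xUSER. */
static void build_mm2s_cmd(uint32_t cmd[3], uint32_t saddr, uint32_t btt)
{
    cmd[0] = (btt & 0x7FFFFFu)   /* BTT: bytes to transfer               */
           | (1u << 23)          /* TYPE = 1: incrementing addresses     */
           | (1u << 30);         /* EOF  = 1: assert TLAST on last beat  */
    cmd[1] = saddr;              /* read address in DDR                  */
    cmd[2] = (0x0u << 0)         /* TAG = 0                              */
           | (0x1u << 8);        /* xCACHE = 1, the value I am using     */
}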

7 Replies
dgisselq
Scholar
Registered: 05-21-2015

@ZynqUser ,

Wow, that is *really* pitiful.

I'm tempted to believe the problem is on the MM side, since TREADY holds high through the whole transfer.  Let me ask, therefore, how is your AXI interconnect configured between the Zynq and this MM2S?

Dan

ZynqUser
Visitor
Registered: 11-17-2020

Hello @dgisselq ,

Thanks for the reply.

I'm sending you a screenshot of my block diagram and the configuration of the AXI Interconnect.

ZynqUser_0-1605615809784.png

The DataMover in question is AXI DataMover 0 and, as you can see, it is connected through an AXI Interconnect to HP0 of the Zynq.

The configuration of the AXI Interconnect is:

ZynqUser_2-1605615998528.png

ZynqUser_3-1605616020284.png

ZynqUser_4-1605616039495.png

As for TREADY, it is designed to always be 1, since all the data goes to a MISR for consumption.

Thank you

dgisselq
Scholar
Registered: 05-21-2015

@ZynqUser ,

Tell me about the custom optimization strategy. One possibility is that you are optimizing for area.

Dan

ZynqUser
Visitor
Registered: 11-17-2020

@dgisselq ,

The custom optimization strategy is Vivado's default selection; I haven't created a custom strategy.

However, when I changed it to maximize performance:

ZynqUser_0-1605629287056.png

ZynqUser_1-1605629307208.png

ZynqUser_2-1605629325300.png

the performance improved a bit, but there were still some packets with large latencies:

ZynqUser_3-1605629461600.png

Thank you
dgisselq
Scholar
Registered: 05-21-2015

@ZynqUser ,

Yes, that is better, but it is still pitiful.

If the interconnect is optimized for performance and that's what you are getting, then let me ask: what does the AXI bus look like on the other side of the interconnect, between the PS and the PL? Can you tap it there? That might tell us more about the performance you are getting, since that's where I expect any residual problems to be.

Dan

ZynqUser
Visitor
Registered: 11-17-2020

@dgisselq ,

ILA : 

ZynqUser_0-1605713422514.png
As you suspected, there seems to be a problem on the MM side. For some reason the DDR has a lot of delay when delivering some of the packets, and I don't know where that delay comes from. Any ideas for narrowing the problem down would be very helpful.

Thank you

dgisselq
Scholar
Registered: 05-21-2015

@ZynqUser ,

You should be able to sustain back-to-back transfers using the MM2S.

What happens if you increase the burst size from 16 to 256?
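Back-of-the-envelope, here is why the burst length matters (my own numbers, assuming a roughly fixed per-burst gap like the one in your capture; stream_efficiency is just an illustrative helper, nothing from the DataMover or its drivers):

#include <stdio.h>

/* Rough model: if every burst of `beats` data beats is followed by a fixed
 * `gap`-cycle stall, the fraction of cycles carrying data is
 * beats / (beats + gap). */
static double stream_efficiency(unsigned beats, unsigned gap)
{
    return (double)beats / (double)(beats + gap);
}

int main(void)
{
    /* With a ~16-cycle gap per burst, as your ILA suggests: */
    printf("16-beat bursts : %.0f%%\n", 100.0 * stream_efficiency(16, 16));   /* about 50% */
    printf("256-beat bursts: %.0f%%\n", 100.0 * stream_efficiency(256, 16));  /* about 94% */
    return 0;
}

Longer bursts don't remove the per-burst overhead; they just amortize it over more data beats.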

Also, there are open-source alternatives if this doesn't work out for you (ones that don't have the bugs Xilinx's implementations have ...).

Dan
