03-28-2017 11:38 PM
I am trying to use PetaLinux on core 0 (XAPP1078) for mass-storage and file-system support.
The main application runs on core 1 as a bare-metal app using lwIP. Standalone, this bare-metal application works fine: it connects to a host over TCP as a client and streams data at 80 MB/s.
In combination with PetaLinux, I don't get any connection. I call tcp_connect(), but its connected callback is never invoked. In Wireshark I only see the ARP exchange between the host (192.168.202.86) and the Zynq (192.168.202.10). The TCP exchange that should follow never happens, and the FPGA just keeps retrying the connection.
I have tried configuring PetaLinux's ethernet both as "manual" and as "ps7_ethernet" with a static IP (192.168.202.10) and MAC.
What could be the problem?
03-28-2017 11:43 PM
Is there an option in PetaLinux to disable ethernet completely?
As you can see in the picture, if I configure the ethernet PHY via lwIP in bare metal, PetaLinux also recognises it. A good next test would be to see what happens when ethernet is completely disabled in Linux.
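One way to try that (an assumption on my side; the node label and file name vary with the PetaLinux version, e.g. system-top.dts in 2014.4, and the label may be ps7_ethernet_0 or gem0 in the generated DTS) is to mark the GEM node disabled in the device tree, so Linux never probes the MAC:

/* hypothetical device-tree fragment: keep Linux off the PS ethernet
   so only the bare-metal side touches the GEM */
&ps7_ethernet_0 {
    status = "disabled";
};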
Oh, and I am using an Avnet MicroZed board with PetaLinux 2014.4.
03-29-2017 04:09 AM
03-29-2017 10:55 PM
I don't think Linux itself is the problem. I suspect the modified BSP has no connection to the low-level driver.
Yesterday I enabled the TCP debug output and saw that the SYN packet is never sent, which means the SYN+ACK from the host can never arrive either.
But I don't know how to fix this. It seems nobody has used PetaLinux together with bare metal and lwIP before.
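For anyone reproducing this: the trace above comes from lwIP's built-in debug output, which is switched on in lwipopts.h (these are standard lwIP options; the exact set you need depends on what you want to trace, and output goes through the port's LWIP_PLATFORM_DIAG()):

/* lwipopts.h -- lwIP debug switches used to trace TCP connection setup.
   LWIP_DEBUG must be defined for any *_DEBUG option to take effect. */
#define LWIP_DEBUG        1
#define TCP_DEBUG         LWIP_DBG_ON   /* connection state changes      */
#define TCP_OUTPUT_DEBUG  LWIP_DBG_ON   /* segments actually transmitted */
#define TCP_INPUT_DEBUG   LWIP_DBG_ON   /* segments received             */

With TCP_OUTPUT_DEBUG on, a missing SYN shows up immediately: the stack never logs the outgoing segment.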
02-20-2018 02:33 AM
this message is quite old, yet I am facing the same issue in my design. Did you manage to find a workaround?
At first I ran Linux on CPU0 and bare metal + lwIP on CPU1. Then I realized that even with both cores running bare metal (the "Hello World" and "lwIP Echo Server" examples), CPU1 still has no access to ETH0.
02-20-2018 05:00 PM
Both Linux and lwIP need interrupts to work, but the Zynq-7000 has only one interrupt controller (the GIC), which cannot be shared between two operating systems.
02-20-2018 11:54 PM
thank you for your answer.
I agree that CPU1 does not initialize the SCU, but CPU0 does it once. Interrupt sources can then be mapped to a CPU with the IRQ mapping function:
XScuGic_InterruptMaptoCpu(&InterruptController, CPU_ID, INTERRUPT_ID);
It maps a shared peripheral interrupt to either CPU0 or CPU1, and it does work with a simple interrupt generator in the PL that I tested. The only issue I am facing is with the lwIP implementation. As far as I recall, the ethernet driver only uses the EMAC IRQ (#define XPS_GEM0_INT_ID 54U). Could you confirm that?
In any case, in the attached forum link I also managed to simplify the setup by running bare-metal applications on both cores, so Linux is no longer involved:
CPU0 -> running "Hello world" example
CPU1 -> running "LwIP echo server" example and USE_AMP active
Even after mapping XPS_GEM0_INT_ID to CPU1, it does not work.
03-15-2018 12:29 AM
I am hitting the same case now.
My system configuration is as follows:
Vivado 2015.4 on a Windows 7 host.
A Z-7035 board I designed myself.
There is a bug in XScuGic_InterruptMaptoCpu(&InterruptController, CPU_ID, INTERRUPT_ID), so I rewrote it:
void myXScuGic_InterruptMaptoCpu(u8 Cpu_Id, u32 Int_Id)
{
    u32 RegValue, Offset;

    /* ICDIPTRn (distributor base 0xF8F01000, offset 0x800): one target
       byte per interrupt ID, four IDs per 32-bit register */
    RegValue = Xil_In32(0xF8F01000U + 0x00000800U + ((Int_Id / 4U) * 4U));
    Offset = Int_Id & 0x3U;                      /* byte lane within the register */
    Cpu_Id = (u8)(0x1U << Cpu_Id);               /* CPU number -> target bit mask */
    RegValue &= ~(0xFFU << (Offset * 8U));       /* clear the old target byte */
    RegValue |= ((u32)Cpu_Id << (Offset * 8U));  /* route the IRQ to this CPU */
    Xil_Out32(0xF8F01000U + 0x00000800U + ((Int_Id / 4U) * 4U), RegValue);
}
And it was called like this:
03-16-2018 06:57 PM
I resolved this problem.
The cause was the L2 cache in front of the OCM, which is shared by CPU0 and CPU1.
My solution was:
1/ Disable the L1 cache for the OCM at the beginning of main()
2/ Patch translation_table.S:
.word SECT + 0x14de6
After rebuilding the project, Linux running on CPU0 and the bare-metal lwIP app running on CPU1 work together very well.
03-17-2018 03:06 AM - edited 03-19-2018 03:56 AM
thank you for your reply.
I am currently running tests following your solution. Was it the L2 cache that was conflicting on the second core? If so, why did you disable the L1?
Edit: I modified the xemacpsif_dma.c file to use the custom XScuGic_myInterruptMaptoCpu call (this was also discussed here: CPU0 with Linux and CPU1 with baremetal - Interrupt not triggered).
I also updated the translation table as you did. However, nothing changes on CPU1: it sends one ARP message but does not reply to pings. It seems the interrupts are not triggered.
04-12-2018 02:44 AM
04-19-2018 08:08 AM
I have patched XScuGic_InterruptMaptoCpu() but I am still unable to receive the interrupts on CPU1.
Could you advise where XScuGic_InterruptMaptoCpu() should be inserted?
In my case, XScuGic_InterruptMaptoCpu() is called in main() before the lwIP threads are started.
04-20-2018 06:49 AM
Please find below the code snippet where XScuGic_myInterruptMaptoCpu is called.
XScuGic_Connect(&Intc, <XPS_INT_ID>, (Xil_ExceptionHandler)irq_callback, irq_callback_arg);
XScuGic_myInterruptMaptoCpu(&Intc, XPAR_CPU_ID, <XPS_INT_ID>);
XScuGic_Enable(&Intc, <XPS_INT_ID>);
It is worth mentioning that in SDK 2017.1 and above XScuGic_InterruptMaptoCpu has been fixed and works without any further patches.