01-09-2011 12:03 PM - edited 01-09-2011 12:36 PM
I am currently running Linux on a PowerPC 440 on a Virtex-5. I have multiple BRAMs connected to the PLB through their respective BRAM controllers, and I want to know how to access these memory locations from Linux. /proc/meminfo is the same whether or not the system has extra BRAMs on the PLB; it always shows the 256MB of RAM connected to the FPGA. The xilinx.dts file lists these BRAMs, but when I build a kernel using this .dts file and run it on the board, it does not seem to recognize the BRAMs.
Please guide me as to how I can access the BRAMs.
01-12-2011 07:41 AM
Hey, thanks for that. I had seen this on Xilinx's wiki but thought there might be some other way.
I thought since we include the device tree file during kernel building, it will already have included the BRAM as part of its main memory. If not, then what exactly does the .dts file do?
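For reference, the BRAM entries I see in xilinx.dts look roughly like this (the label, compatible string, address, and size here are from a hypothetical design, not my actual system):

```
plb_bram_if_cntlr_0: xps-bram-if-cntlr@c8000000 {
    compatible = "xlnx,xps-bram-if-cntlr-1.00.b";
    reg = < 0xc8000000 0x2000 >;
};
```

So the BRAM is described as a plain memory-mapped node with its own address range, separate from the memory@... node that describes the 256MB of RAM.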
01-12-2011 07:47 AM
You're right in that the kernel BSP could do some work to make BRAM available from the device tree info.
We don't do that yet. You need a driver or code in the BSP to do that. We'll keep that in mind for future work.
01-12-2011 08:28 AM
In the above driver, I am assuming that /dev/mem is initialized as a character device. So will this also work for BRAMs? Or does it not matter what /dev/mem is, as long as the base address and size match those of the BRAM?
01-12-2011 08:31 AM
/dev/mem is just cool; it works with any physical memory address.
I'm using it right now to load memory that is outside of the kernel on the CPU I'm running on, which happens to be memory for the other CPU. I can load a different kernel and ramdisk into memory for the other CPU, then start it as well.
It works well; you just have to get used to the fact that you get back a virtual address on a page boundary, and then you add your offset within the page to that address.
Trust me, it's like a drug once you get it to work, and it's quick to get it to work.
01-14-2011 08:20 AM
Sounds pretty nice.
Is it possible to perform a DMA transfer from such mapped memory?
I want to send that mapped memory over Ethernet (or to the hard disk) without copying.
01-14-2011 08:36 AM
You can use the central_dma block to perform DMA. However, as far as I know there is no driver for central DMA in the Xilinx kernel.
One of the easiest ways is to create your own EDK IP that is able to perform bus mastering. Then you need to write a custom driver for your IP (quite simple, since you only need to control the bus mastering of your IP, i.e. just set some registers via ioctl), or you can write a driver that works from user space.
See these posts for more information:
Hope this helps you.
01-20-2011 11:00 AM
Referring to my first question: there is a method to integrate a driver into the Linux kernel, explained in 'Integrating an EDK Custom Peripheral with a LocalLink Interface into Linux', www.xilinx.com/support/documentation/application.../xapp1129.pdf . Is the same procedure applicable to a driver for the BRAM?
01-20-2011 11:07 AM - edited 01-20-2011 11:08 AM
It's not nearly as complex as the LocalLink driver. The BRAM is simply memory mapped - it looks like any standard memory-mapped peripheral. If the /dev/mem method doesn't work for you, then you'll probably want to look into writing a generic character or block driver that lets you map the BRAM space into kernel space in order to transfer data back and forth.
I still think using mmap may be your best bet - is there a reason you're looking elsewhere?
01-21-2011 07:50 AM
This is the first time I'm writing a driver for an embedded PowerPC, so I was just looking around for guidelines on how to integrate the driver into the Linux kernel, not on the driver writing itself.
So I think the best way is to include the driver code in a directory in the kernel source before compilation and add the path to the Makefile? Is it necessary to add an entry in the Kconfig file so that it shows up as an option during 'make menuconfig'? I was thinking I could skip that step.
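For concreteness, the kind of hooks I have in mind look roughly like this (the config symbol and file names are made up):

```
# drivers/char/Kconfig -- entry so the driver shows up in menuconfig
config XILINX_BRAM
        tristate "Xilinx PLB BRAM access driver"

# drivers/char/Makefile -- build the object when the option is set
obj-$(CONFIG_XILINX_BRAM) += xilinx_bram.o
```

If I skip the Kconfig entry, I believe I would have to hard-code `obj-y += xilinx_bram.o` in the Makefile instead, which builds the driver unconditionally.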
01-21-2011 08:03 AM
You're just going to have to start digging around on the web, looking at other drivers, etc. I don't have a step by step process for you to follow.
Again, if it were me, I would just use the mmap method from user space and not mess around with a driver.
02-17-2011 04:00 PM - edited 02-22-2011 03:55 PM
I was able to mmap the BRAM, and it actually turned out to be much simpler than I was anticipating. Thanks for the guidance.
I have further queries on this BRAM though.
While building the system, I marked this BRAM as non-cacheable, so from what I understand these memory locations should not be cached. What I want to know is whether this holds only for systems running code directly on the PPC, or whether it also holds when a Linux kernel is running on it. Does the Linux kernel do something to override the non-cacheable setting and cache these locations anyway?
Is there some way in which I can determine whether these locations are cached or not, other than timing accesses to these locations and comparing them against accesses to DDR, which will be cached?