02-24-2011 06:49 AM
During the initial setup of the system in EDK, there is an option to select which memory regions are cacheable. From what I understand, the regions not selected will not have their data cached, and every access will go out to the memory location itself.
My question is - if I add an additional BRAM on the PLB and mark it as non-cacheable, does that non-cacheable attribute still apply once you port Linux to the board? Or does Linux override the selection and cache it like any other address when it is memory mapped? When I observed the access times for regions marked cacheable and non-cacheable, they were identical. What does this indicate?
Also, if I write a driver module through which I memory map regions of the BRAMs, and mark the pages from one BRAM as non-cacheable and the others as cacheable, they show a good deal of difference in access times. Can I conclude that the memory which takes longer to access is not actually being cached? Or is there a better way to confirm whether or not it is being cached?
02-24-2011 07:37 AM
Is this a MicroBlaze or PowerPC system? In the EDK Base System Builder, selecting whether a region is cacheable just adds code to the generated test application to turn on the caches for that region. Enabling caching for a region is done in software.
02-24-2011 07:53 AM - edited 02-24-2011 09:33 AM
This is a PowerPC system.
What do you think of the second part of my question? Is it safe to conclude that a region is not cached if its access times are much longer than those of cacheable regions?
02-25-2011 07:36 AM
It seems logical that the slower access times indicate non-cached regions. I don't know how to verify that, but I would think there's a way to see it in the proc filesystem. I'll keep watching for a way to do that.
02-25-2011 07:45 AM
This is really a generic Linux question. As far as Linux is concerned, I *think* that plb_bram memory is treated just like any other peripheral memory by default, which wouldn't be cached unless you do something in a driver to request it (which I have no experience with).
It seems like your assumption about access times is correct though - so is it behaving the way you expect it to or not?
Are you using ppc405 or ppc440?
02-25-2011 08:17 AM
I'm using a PowerPC 440 system. Sorry for making you ask again - I should've mentioned the 440 earlier.
So here's what I've done - I instantiate 3 BRAMs connected to the PLB. I've written my own driver that memory maps a page from each BRAM on each successive mmap() request from user code, so the user code ends up with 3 pointers to pages in 3 different BRAMs. I then run loops that access addresses in each page repeatedly, so that those addresses should get cached. The access times are uniform across all BRAMs, and the flag values in vm_area_struct->vm_page_prot match those of a memory region in DDR2, which is cacheable by default.
Now I mark one of the pages as non-cacheable in the driver during mmap(), by setting the corresponding bit in vm_page_prot. The access times for that page are now much longer than those for the other BRAMs, whose pages still show the same timings as before.
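For reference, a minimal sketch of what such an mmap handler might look like, assuming a remap_pfn_range()-based mapping; the name bram_mmap and the convention of passing the physical page via vm_pgoff are hypothetical, not details from the post:

```c
/* Hypothetical sketch of the driver's mmap handler. */
#include <linux/mm.h>
#include <linux/fs.h>

static int bram_mmap(struct file *filp, struct vm_area_struct *vma)
{
    unsigned long size = vma->vm_end - vma->vm_start;

    /* Mark this mapping non-cacheable; omit this line for a
     * cacheable mapping that keeps the default page protection. */
    vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

    /* Map the BRAM's physical pages into the caller's address space. */
    if (remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
                        size, vma->vm_page_prot))
        return -EAGAIN;
    return 0;
}
```

pgprot_noncached() is the usual kernel helper for this; setting individual protection bits by hand, as the post describes, should have the same effect.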
It is behaving as expected. I just want to make sure I'm drawing the right conclusion by confirming this some way other than access times alone.