Contributor
11,850 Views
Registered: ‎02-24-2014

Large BRAM access using mmap

Jump to solution

 

Vivado 2015.2

SDK 2015.2

Petalinux 2015.2

ZedBoard Rev C

 

Hi

 

I have a question about accessing contiguous areas of memory larger than a Linux memory page.

 

My scenario is this:

  • A BRAM of 8192 bytes is present in the design at a base address of 0x43C00000
  • The BRAM is represented correctly in pl.dtsi:
axi_bram_ctrl_0: axi_bram_ctrl@43c00000 { 
compatible = "xlnx,axi-bram-ctrl-4.0";
reg = <0x43c00000 0x2000>;
...
  • The BRAM is also exposed to the UIO driver via system-top.dts:
&axi_bram_ctrl_0 { 
compatible = "generic-uio";
};

...
# cat /sys/class/uio/uio0/maps/map0/addr
0x43c00000
# cat /sys/class/uio/uio0/maps/map0/size
0x00002000
  • On the ZedBoard, I can use devmem to read and write to the whole 8k area. Note that devmem (and lots (!) of other examples online) use 4096 as the map size and simply map a single page around a desired address.
# devmem 0x43c00000
0xDEADBEEF
# devmem 0x43c01C00
0xCAFEFADE
  • In my application, whether I use UIO or not, I can mmap the whole area successfully, but if I try to access offset 0x1C00 (and presumably above) from the base, I get a SEGFAULT

Here are the pertinent areas of my test program (error checking etc. omitted):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int fd = open("/dev/mem", O_RDWR); // or fd = open("/dev/uio0", O_RDWR);

FILE * size_fp = fopen("/sys/class/uio/uio0/maps/map0/size", "r");
size_t uio_size;
fscanf(size_fp, "0x%zx", &uio_size);
fclose(size_fp);

FILE * address_fp = fopen("/sys/class/uio/uio0/maps/map0/addr", "r");
size_t uio_address;
fscanf(address_fp, "0x%zx", &uio_address);
fclose(address_fp);

size_t page_size = sysconf(_SC_PAGESIZE);	// 4096 on the Zynq PS
size_t offset = 0 * page_size;	// map0 when using /dev/uio0 (UIO selects maps by page-sized offsets); with /dev/mem the offset would be the physical address instead
void * mem_ptr = mmap(NULL, uio_size, PROT_READ|PROT_WRITE, MAP_SHARED, fd, offset);

This is successful.

 

My test then increments through the mapped area, writing and reading a test pattern. Debug output using 0xDEADBEEF as the pattern:

Configured for ZedBoard...
Mapping 0x00002000 bytes at 0x43C00000
** mem_ptr: 0x36f27000 **

...
Writing to 0x36f27000, offset:00001BF8 OK
Reading from address:0x36f27000, offset:00001BF8 OK - read DEADBEEF
Writing to 0x36f27000, offset:00001BFC OK
Reading from address:0x36f27000, offset:00001BFC OK - read DEADBEEF
Writing to 0x36f27000, offset:00001C00
<SEGFAULT on mem_ptr access here>

As I said, the same happens whether I open /dev/uio0 and then mmap, or open /dev/mem and mmap directly.

 

I believe there is no limit on the size mmap will accept, and indeed it does not complain as long as the arguments are page-aligned. However, I can make the test work by mapping in page-sized (4096-byte) blocks and storing a vector of mapped pointers (a sketch of this workaround is below). Obviously, this is messy and adds another layer of interfacing to the memory-mapped area.
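
For reference, a minimal sketch of that per-page workaround (not my actual test code; the constants and names are just illustrative, and it assumes the mapping is done through /dev/mem, where the mmap offset is a physical address, with the 8K BRAM at 0x43C00000 described above):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define BRAM_PHYS 0x43C00000u   /* physical base, as in map0/addr */
#define BRAM_SIZE 0x2000u       /* 8 KiB, as in map0/size */

int main(void)
{
    long page_size = sysconf(_SC_PAGESIZE);      /* 4096 on the Zynq PS */
    size_t n_pages = BRAM_SIZE / (size_t)page_size;
    void *pages[BRAM_SIZE / 4096];               /* one pointer per 4 KiB page */

    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }

    /* Map the BRAM one page at a time; each entry covers 4 KiB of it. */
    for (size_t i = 0; i < n_pages; i++) {
        pages[i] = mmap(NULL, (size_t)page_size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, BRAM_PHYS + i * (size_t)page_size);
        if (pages[i] == MAP_FAILED) { perror("mmap"); return 1; }
    }

    /* Accessing byte offset 0x1C00 now means picking the right page first --
     * the extra layer of indirection mentioned above. */
    size_t off = 0x1C00;
    volatile uint32_t *word = (volatile uint32_t *)
        ((char *)pages[off / page_size] + (off % page_size));
    *word = 0xDEADBEEF;
    printf("read back 0x%08X\n", (unsigned)*word);

    return 0;
}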

 

The same also occurs (although reaching a different offset) with other AXI peripherals such as GPIO or a custom slave.

 

Does anyone know of any limitations in this area or am I missing something?

 

Thanks

Chris

 

 

0 Kudos
1 Solution

Accepted Solutions
Contributor
21,705 Views
Registered: ‎02-24-2014

Re: Large BRAM access using mmap

Jump to solution

Well it looks like this is a PICNIC issue.

 

Your test code worked fine for me. Comparing the code around the memory access loop, I found I was incrementing an offset to the base address by 4 each time, which, when added to my cast pointer, resulted in the address incrementing by 16 bytes.

 

Once I fixed the handling of my memory offsets, the code worked fine.
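
To illustrate the mistake (a sketch only, not my original loop; the helper name is made up): with a uint32_t pointer, pointer arithmetic already scales by sizeof(uint32_t), so adding a byte offset directly to the cast pointer steps four times too far.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper showing the bug and the fix. */
static volatile uint32_t *word_at(void *mem_ptr, size_t byte_offset)
{
    volatile uint32_t *base = (volatile uint32_t *)mem_ptr;

    /* Buggy: a byte offset added to a uint32_t pointer advances by
     * byte_offset * 4 bytes, so byte offset 0x1C00 lands 0x7000 bytes in --
     * well past the end of an 8 KiB mapping. */
    /* return base + byte_offset; */

    /* Fixed: convert the byte offset to a word index... */
    return base + byte_offset / sizeof(uint32_t);

    /* ...or, equivalently, do the byte arithmetic first and cast afterwards:
     * return (volatile uint32_t *)((char *)mem_ptr + byte_offset); */
}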

Stumped by pointer arithmetic. How embarrassing. :)

The positive side is that my UIO and mmap were working all along.

 

Incidentally, the 0x1C00 offset mentioned earlier corresponds to the start of the next occupied space in the process memory map, so that's why the fault occurred there in that particular case.

# cat /proc/983/maps
36f95000-36f97000 rw-s 00000000 00:06 1079 /dev/uio0
36f97000-36f9c000 rw-p 00000000 00:00 0
36f9c000-36f9d000 r--p 0001f000 00:02 355 /lib/ld-2.20.so

36f95000 + (0x1C00 * 4) = 36f9c000

 

Many thanks for your help once again.

Regards

Chris

 

 

0 Kudos
7 Replies
Xilinx Employee
11,841 Views
Registered: ‎09-10-2008

Re: Large BRAM access using mmap

Jump to solution

Hi Chris,

 

I can't see what you'd be doing wrong, but I know I've used mmap for DMA with big areas of memory without issues. I've not done that for a while (several kernel versions ago), but I can't imagine how this would be broken.

 

What other offsets have you seen fail on other devices?  I'm not sure how that 0x1c00 offset could be related.

 

Sorry that's not much help, but I'll keep an eye out for anything that might be an issue.

John

0 Kudos
Contributor
11,836 Views
Registered: ‎02-24-2014

Re: Large BRAM access using mmap

Jump to solution

Hi John

 

With a GPIO peripheral on the same build I've seen the following:

address: 0x41210000

length: 0x00010000

Fails at 0x4000

 

on another build for a custom board with a custom AXI slave:

address: 0x43C20000

length: 0x00020000

Fails at 0x8000

 

 

In both of these examples the fault happens at 25% of the size, so I immediately thought it was a problem with bytes vs. uint32_t. I can't see it, though - the UIO size file reports in bytes (since the BRAM size is reported as 8K) and that is what I'm using for the mmap.

 

Regards

Chris

0 Kudos
Contributor
11,829 Views
Registered: ‎02-24-2014

Re: Large BRAM access using mmap

Jump to solution

John

 

I had been reading your driver PDF material including one on DMA. As I understand it, though, you are mmap-ing the size of the proxy driver interface while the buffer itself is handled in the kernel driver.

https://forums.xilinx.com/xlnx/attachments/xlnx/ELINUX/10693/1/Linux%20DMA%20from%20User%20Space-public.pdf

 

I wonder if it is significant that other examples only stay within a page-sized area... although mainline Linux examples will happily map a large file into memory.

 

Chris

 

0 Kudos
Xilinx Employee
11,825 Views
Registered: ‎09-10-2008

Re: Large BRAM access using mmap

Jump to solution
I don't follow you there, as the memory is allocated by the kernel driver but it's all mmapped into user space, since the buffers and interface info are all in that memory. But it's been a while, so I might have forgotten some details.
0 Kudos
Xilinx Employee
11,815 Views
Registered: ‎09-10-2008

Re: Large BRAM access using mmap

Jump to solution

Hi Chris,

 

I can't duplicate what you're seeing and I just built the system on 2015.2 tools.

 

I attached my hacky app that I verified with devmem.

 

I didn't do anything special. See if you can spot something different. I have two 8K BRAMs in there.

 

Thanks

John

0 Kudos
Contributor
11,801 Views
Registered: ‎02-24-2014

Re: Large BRAM access using mmap

Jump to solution

Hi John

 

Thanks for taking the time to test this.

I'll compare your test with mine.

 

Regards

Chris

 

0 Kudos