01-08-2019 08:30 AM
I used to work in Vivado 2017.2, but last month I switched to Vivado 2018.3.
In Vivado 2017.2, I was able to successfully update the content of the .bit file with the content of the .elf file by following these steps:
1) Generate the bitstream in Vivado.
2) Export the hardware and generate the .elf file in SDK.
3) Import the .elf file into Vivado and associate it with the respective MicroBlaze (I use a single MB core in my design).
4) Run the updatemem command to generate the final download.bit, which contains both the FPGA configuration and the MicroBlaze code.
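For reference, step 4 can be run from the command line roughly like this (the file names and the processor instance path below are placeholders; substitute the ones from your own design):

```shell
updatemem -force \
  -meminfo design_1_wrapper.mmi \
  -data app.elf \
  -bit design_1_wrapper.bit \
  -proc design_1_i/microblaze_0 \
  -out download.bit
```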
After switching to Vivado 2018.3, the above procedure stopped working. I searched for similar problems online and found no accepted solutions, so I tried to dig a little deeper in order to solve the problem.
The weird thing is that programming the FPGA and the MicroBlaze separately (using JTAG - System Debugger session) works well. This led me to record the content of the local MicroBlaze BRAM after each of the two programming sequences (first using download.bit, second programming the MB BRAM separately using the System Debugger).
The results are shown in the attached picture. As you can see, there seems to be strange behavior in the endianness of the data written to the local MicroBlaze BRAM when programming with the single download.bit file. The picture demonstrates one example of this behavior, but you can verify that it holds for all of the first 16 (32-bit) words of the local MicroBlaze BRAM.
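As a rough illustration of the kind of comparison I did (this is just a sketch of my own, not part of the tool flow; the function name and sample bytes are made up, and the real issue may be a bit-lane swap rather than a plain byte swap):

```python
import struct

def compare_dumps(a_bytes, b_bytes, nwords=16):
    """Compare the first nwords 32-bit words of two raw memory dumps.

    Returns a list of (word_index, word_a, word_b, byteswap_explains_it)
    tuples, one per mismatching word, where the last field is True if
    reversing the byte order of word_b reproduces word_a.
    """
    diffs = []
    for i in range(nwords):
        wa = struct.unpack_from("<I", a_bytes, i * 4)[0]
        wb = struct.unpack_from("<I", b_bytes, i * 4)[0]
        if wa != wb:
            # Re-read word_b with the opposite endianness.
            swapped = struct.unpack(">I", struct.pack("<I", wb))[0]
            diffs.append((i, wa, wb, wa == swapped))
    return diffs
```

For example, comparing a dump beginning `01 02 03 04` against one beginning `04 03 02 01` reports a single mismatch that a byte swap fully explains.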
The only differences between the two designs (the one in 2017.2 and the other in 2018.3) are the IP core upgrades. There is no problem with the reset levels, and the new design (2018.3) behaves as expected when programmed through the System Debugger session in the SDK environment.
Is this a known issue? Is there any parameter that handles the .elf file's endianness inside the final download.bit file?
Do I have to follow a different procedure in Vivado 2018.3 compared to the one I described above?
Additionally, I want to mention that I have already checked the .mmi files generated in the 2017.2 design and in the 2018.3 design, and I see some differences regarding the bit allocation of the local MB BRAM. I can share these results if needed.
Please give me an answer as soon as possible!
Thanks in advance.
01-08-2019 09:06 AM
Have a look at the wiki here, which shows how to update the MMI. It sounds like you will need to update the bit lanes:
01-10-2019 08:25 AM
Did you create the MMI file manually? The tools will auto-create this file for you.
Also, if you look at an MMI file (discussed in the wiki), you will note that the data is swapped (byte and nibble).
For example, for a 32-bit-wide memory built from 16 BRAMs, the bit lanes are listed in this swapped order.
This is what you are seeing.
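For illustration, the relevant part of an auto-generated MMI looks roughly like this (the instance path, placements, and address ranges below are placeholders, not from your design; the BitLane order is what encodes the byte/nibble swap):

```xml
<!-- Sketch only: InstPath, Placement, and address ranges are placeholders. -->
<MemInfo Version="1" Minor="5">
  <Processor Endianness="Little" InstPath="design_1_i/microblaze_0">
    <AddressSpace Name="microblaze_0.local_memory" Begin="0" End="131071">
      <BusBlock>
        <!-- BitLanes appear byte- and nibble-swapped: bit 7 first, then 6, ... -->
        <BitLane MemType="RAMB32" Placement="X0Y0">
          <DataWidth MSB="7" LSB="7"/>
          <AddressRange Begin="0" End="32767"/>
          <Parity ON="false" NumBits="0"/>
        </BitLane>
        <BitLane MemType="RAMB32" Placement="X0Y1">
          <DataWidth MSB="6" LSB="6"/>
          <AddressRange Begin="0" End="32767"/>
          <Parity ON="false" NumBits="0"/>
        </BitLane>
        <!-- ... continuing 5, 4, ..., 0, then 15, 14, ..., 8, and so on -->
      </BusBlock>
    </AddressSpace>
  </Processor>
</MemInfo>
```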
01-11-2019 12:55 AM
Can you check my last post in this thread?
I have a 128 KB BRAM, so I should see this lane order: 7, 6, ..., 2, 1, 0, 15, 14, ..., 9, 8, 23, 22, ..., 17, 16, 31, 30, ..., 25, 24.
But instead I see this order: 0, 1, 2, ... , 29, 30, 31.
I think that there must be an error in the .mmi file generation process.
01-13-2019 05:09 AM
If this is the auto-generated MMI, then this looks like a bug. It should be easy to reproduce on my end. I'll take a look and, if it does turn out to be a bug, create a CR to have it fixed.
For now, it looks like manually updating the MMI is needed (although that's a pain).
01-28-2019 01:50 PM
Thanks very much for figuring out what's going on here. I'm facing the same issue in Vivado 2018.3.
I haven't worked through editing the .mmi yet, but I confirmed that my download.bit has the same reversed-bit-lane issue that you describe when system.bit and app.elf are combined by updatemem (but not when system.bit is loaded with the bootloop.elf).
I'd be happy to know if/when we can expect a bug fix from Xilinx.
01-28-2019 09:54 PM
There will be a patch for the tools released via an answer record in the next day or so.
When I know the AR number, I'll post it here too.