03-13-2018 07:18 AM - edited 03-13-2018 07:24 AM
I'm getting quite desperate here since the verification/validation of a simple project has been delayed by days/weeks due to a misinterpretation I can't figure out the reason for.
I'm doing a simple project in which I have 32-bit fixed-point numbers (16-bit integer part, 16-bit fractional part) and a 64-bit fixed-point number (32-bit integer part, 32-bit fractional part).
I want a simple decimal interpretation of these numbers in the wave window, where I use the "Custom fixed/float" option.
The 32 bit number is interpreted by: fixed#16#decimal#signed and it works PERFECTLY!
The 64 bit number is interpreted by: fixed#32#decimal#signed and it works Not At All, Nada, Zip!!!
I just can't get my head around this. Is there any knowledgeable soul out there who might be able to point out a mistake on my side?
In the attached pic the numbers are:
32-bit: 0000000000000000 . 0000001000000000 (i.e. 1/128 = 0.0078125)
64-bit: 00000000000000000000000000000000 . 00101110000000000000000000000000 (i.e. 1/8+1/32+1/64+1/128 = 0.1796875)
It can be seen that 1/128 is presented correctly, while the 64-bit number is presented as 771751936.00, which is the 31 LSBs of the fraction interpreted as the integer 0101110000000000000000000000000.
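The arithmetic above can be double-checked with a short sketch (plain Python, independent of ModelSim; the bit strings are the ones from the post):

```python
# Interpret a raw two's-complement bit pattern as a signed fixed-point number.
def fixed_to_float(raw, total_bits, frac_bits):
    if raw >= 1 << (total_bits - 1):  # negative pattern in two's complement
        raw -= 1 << total_bits
    return raw / (1 << frac_bits)

# 32-bit Q16.16 value: fraction bits 0000001000000000
q16 = int("0000000000000000" + "0000001000000000", 2)
print(fixed_to_float(q16, 32, 16))   # 0.0078125 (= 1/128)

# 64-bit Q32.32 value: fraction bits 00101110000000000000000000000000
q32 = int("0" * 32 + "00101110000000000000000000000000", 2)
print(fixed_to_float(q32, 64, 32))   # 0.1796875 (= 1/8 + 1/32 + 1/64 + 1/128)

# What the wave window shows instead: the fraction bits read as a plain integer
print(int("00101110000000000000000000000000", 2))  # 771751936
```

So the expected readings are 0.0078125 and 0.1796875, and the bogus 771751936.00 is exactly the fraction field decoded as an integer.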
03-13-2018 05:21 PM
Could be a bug...
You can try "combine signals" to create a 63-bit signal by dropping the lowest bit (i.e. your_signal[63:1]), then use a custom radix fixed#31#decimal.
Or, put a "real"-type signal in your design that shadows the bit vector. You may need a custom function to convert the bits to real, all subject to whatever precision floating point supports.
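A sketch of what such a conversion function would compute, modeled in Python rather than HDL (the function name is made up; a float here behaves like an IEEE-754 double, which is what a "real" typically is):

```python
from fractions import Fraction

def bits_to_real(raw64):
    """Hypothetical helper: interpret a 64-bit two's-complement
    Q32.32 pattern as a float (IEEE-754 double)."""
    if raw64 >= 1 << 63:       # negative pattern
        raw64 -= 1 << 64
    return raw64 / 2**32

print(bits_to_real(0x2E000000))          # 0.1796875

# Smallest negative step, pattern 0xFFFFFFFFFFFFFFFF -> -1/2**32
print(bits_to_real(0xFFFFFFFFFFFFFFFF))  # -2.3283064365386963e-10

# The precision caveat: a double has a 53-bit significand, so a pattern
# needing more significant bits than that gets rounded by the conversion.
hard = (1 << 63) - 1                     # needs 63 significant bits
print(Fraction(bits_to_real(hard)) == Fraction(hard, 2**32))  # False
```

So the shadow-signal trick reads nicely in the wave window, but values using close to all 64 bits won't round-trip exactly.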
03-15-2018 07:50 AM
I tested increasing the bit length by 2-8 bits in various steps from 32 bits up to 62, and a custom radix worked in every test. (I need symmetry between the integer and fractional parts, so even bit counts only.)
But at 64 bits it completely misreads the value. I think you are right: there's no explanation except a bug in ModelSim, at least none I can find right now.
Thanks for your effort! :)