07-28-2016 03:08 AM
Hello,
I want to convert a 10-bit signed integer to float format with the Floating-point IP 7.1.
When I enter the following values in the "Precision of Inputs" tab, I still get a 16-bit input. Can someone explain this behaviour? I couldn't find a reason for it in the documentation, but maybe I have overlooked something. Or is it a bug?
Many thanks in advance!
Christian
07-28-2016 03:26 AM - edited 07-28-2016 03:29 AM
You need to pad dummy bits to make 16. Use the 10 LSBs for your input and pad the remaining 6 bits; this is a requirement of the AXI interface.
Check the core's product guide for details.
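A minimal C sketch of the packing described above, assuming the operand stream is the core's S_AXIS_A channel (s_axis_a_tdata) and that the core reads only the 10 LSBs when the operand width is configured as 10 bits; the helper name pack_operand_10bit is made up for illustration:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: place a 10-bit signed (two's-complement) sample
 * into the 16-bit word driven on the AXI4-Stream operand input.
 * Bits [9:0] carry the value; bits [15:10] are padding the core ignores
 * (written as zero here). */
static uint16_t pack_operand_10bit(int16_t sample)
{
    return (uint16_t)(sample & 0x03FF);
}

int main(void)
{
    int16_t samples[] = { 0, 1, -1, 511, -512 }; /* span of a 10-bit signed value */
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; ++i) {
        printf("value %5d -> s_axis_a_tdata 0x%04X\n",
               samples[i], pack_operand_10bit(samples[i]));
    }
    return 0;
}
```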
08-05-2016 02:06 AM
Thanks for your explanation! That cleared it up.