05-30-2016 08:08 AM
I have an array of integers with positive and negative values (a sine table). When an entry is multiplied by 3 there is a sign error: (-177 * 3) + 2048 comes out as 2579 when it should be 1517. It works in simulation, but not in the FPGA. This is part of the code:
datos : OUT std_logic_vector(11 downto 0)

architecture Behavioral of senoYdac is
   type tabla is array (0 to 23) of integer;
   signal calculo : integer;
   -- one period of a sine wave, amplitude 682
   constant seno : tabla := (0, 177, 341, 482, 591, 659, 682, 659, 591, 482,
                             341, 177, 0, -177, -341, -482, -591, -659, -682,
                             -659, -591, -482, -341, -177);
   ...
   valorSeno := valorSeno + 1;                       -- step through the table
   calculo   <= seno(valorSeno) * 3 + 2048;          -- scale by 3, add DC offset
   datos     <= conv_std_logic_vector(calculo, 12);  -- 12-bit value for the DAC
I hope you can help!!
05-30-2016 11:53 AM
When you say it works in simulation, I suppose you mean Behavioral Simulation? Does it also work in Post-Translate Simulation?
Which device are you targeting? Spartan 3? Spartan 6?
If this is Spartan 3, you could try enabling the "new parser" in XST to see if that changes the behavior.
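If I remember correctly, the older VHDL parser is still the default for Spartan 3 in XST, and the newer one is enabled with an XST synthesis option along the lines of:

   -use_new_parser yes

Please check the XST User Guide for your ISE version; I am quoting that option name from memory.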
05-31-2016 03:58 AM
I think you have more issues than the one you mention, because a 12-bit signed value can only cover the range -2048 to +2047. Therefore the addition of 2048 to any of your positive values doesn't really work; it may only appear to work because adding 2048 flips the sign bit (MSB) of a 12-bit representation while leaving the lower 11 bits the same.
For example, (659 * 3) + 2048 = +4025, which is FB9 hex, but in a 12-bit signed system that means -71. However, (659 * 3) = +1977, which is 7B9 hex and has exactly the same lower 11 bits.
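If the DAC actually expects an unsigned 12-bit input (0 to 4095, with 2048 as mid-scale), then the sum does fit: the extremes are (-682 * 3) + 2048 = 2 and (682 * 3) + 2048 = 4094. In that case one way to make the intent explicit is to do the arithmetic on a constrained integer and convert with numeric_std instead of std_logic_arith. A minimal sketch, reusing the signal names from the original post (the range constraint and the unsigned interpretation are my assumptions about the DAC):

   library ieee;
   use ieee.std_logic_1164.all;
   use ieee.numeric_std.all;   -- instead of std_logic_arith

   ...
   signal calculo : integer range 0 to 4095;  -- every scaled sample lands in 0..4095
   ...
   calculo <= seno(valorSeno) * 3 + 2048;                 -- 2 .. 4094, never negative
   datos   <= std_logic_vector(to_unsigned(calculo, 12)); -- explicit unsigned conversion

Constraining calculo also means simulation will flag an out-of-range value immediately instead of silently wrapping.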
The following answer record relating to ‘conv_std_logic_vector’ may also be useful.