10-08-2013 04:52 AM
I've been working on VHDL code for a 2-D (256-pt * 128-pt) FFT, using Xilinx's 256-pt. and 128-pt. 1-D FFT IP Cores.
I'm using the following relevant settings for the 256-pt. IP Core:
# Channel(s): 1
# Transform Length: 256 (NOT Run-time configurable)
# Target Clock Frequency: 27 MHz
# Implementation Option(s): Pipelined, Streaming I/O; Radix-4, Burst I/O (I've tested both)
# Data Format: Fixed Point
# Scaling Option: Unscaled
# Output Ordering: Natural
# Throttle Scheme: Non-Real Time
# Optimization Options: (Set to optimize performance)
In evaluating only the 256-pt. FFT part, I'm not getting the expected results. For example, the output doesn't match what I get from MATLAB's 'fft' function, and I'm not even getting the expected output for basic sequences like, say, a d.c. signal!
I'll elaborate a bit on what I'm doing and what's happening during simulation:
What I'm doing:
# Setting the IP-Core configuration word to x"01": the last bit (which is 1) selects Forward FFT and the remaining bits are unused; there's nothing regarding scaling here, since I'm using the IP Core in Unscaled mode.
# Sending 16-bit input words to the Slave side of the IP Core by asserting the Slave_Valid signal (Slave_Ready is normally always asserted, meaning the IP Core is always ready to accept the input stream).
# Asserting Slave_Last at the last word of the 256-word sequence.
# Slave_Valid is a signal I toggle, changing state (between High and Low) on every CLOCK edge. The reason: the incoming stream of 16-bit data, which I receive from another source and forward to the IP Core, is stable for 2 CLOCKs, so I hold Slave_Valid high for only 1 CLOCK cycle per word; otherwise the IP Core would sample every 16-bit word twice.
# As per the IP Core settings, the 8 LSBs form the real part and the 8 MSBs form the imaginary part of each input word.
# Asserting Master_Ready whenever Master_Valid is asserted, to receive the evaluated FFT from the Master (output) side of the IP Core.
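To make the input packing concrete, here's a minimal Python sketch of the format I'm sending (the helper name and the assertion are mine, not part of the core's interface; I'm assuming both fields are two's-complement signed 8-bit values):

```python
# Hypothetical helper mirroring the packing described above:
# real part in bits 7:0, imaginary part in bits 15:8,
# both as two's-complement signed 8-bit values.
def pack_sample(re, im):
    assert -128 <= re <= 127 and -128 <= im <= 127
    return ((im & 0xFF) << 8) | (re & 0xFF)

# Example: re = -3 (0xFD), im = 5 (0x05) -> word 0x05FD
word = pack_sample(-3, 5)
```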
What I'm getting after simulation:
# An output sequence of 48-bit words, which as per the IP Core settings means the 24 LSBs are the real part and the 24 MSBs the imaginary part.
# An Event_frame_started signal, whose generation I haven't yet understood.
# Event_tlast_missing and Event_tlast_unexpected stay Low, which means my input has been sent logically correctly: the last word (at position 256) of my 256-word sequence arrived exactly where the IP Core's internal count expected it.
# Event_data_in_Channel_halt asserted periodically (in a pattern I haven't figured out), even a few CLOCKs after the last 16-bit input word has been delivered to the IP Core.
Now, having mentioned in detail my work and the results, I'll put forward my doubts:
# Event_data_in_Channel_halt, which I'm getting periodically as explained above, is supposed to be asserted by the IP Core only when it expects input data but doesn't receive any.
Worse, this signal is asserted even after I've asserted Slave_Last, which means the 256-word sequence has been correctly delivered from my end, and despite Event_tlast_missing/Event_tlast_unexpected remaining unasserted, which means the IP Core has correctly received the input sequence at its end!
This implies that even after receiving the whole 256-word sequence correctly, the IP Core is still expecting input data, which is why it asserts those Event_data_in_Channel_halt pulses for some CLOCKs after the last 16-bit word of the input sequence.
# The result of the 256-point FFT doesn't match what MATLAB's built-in fft function gives. I don't know whether this is caused by the problem above; if it's due to something else, I'd like to know!
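As a sanity check independent of MATLAB, here's a stdlib-only Python reference DFT I'm comparing against (my own sketch, same sign convention as MATLAB's fft). For a d.c. input, the Unscaled core should produce N times the amplitude in bin 0 and zero elsewhere; with 8-bit inputs and N = 256, unscaled bit growth is log2(256) + 1 = 9 bits, so up to 17 significant bits, carried inside the 24-bit output fields:

```python
import cmath

# Reference DFT in plain Python, using the same sign convention as
# MATLAB's fft: X[k] = sum_n x[n] * exp(-2j*pi*k*n/N).
def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

# A d.c. input of amplitude 100 over 256 points: an Unscaled core should
# put 256 * 100 = 25600 in bin 0 and (near) zero everywhere else.
X = dft([100] * 256)
```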
I'm in dire need of a bailout here, since I'm simply not getting any clues to get out of this situation. Any kind of suggestion/help will be appreciated!
10-08-2013 05:00 AM
To add to my previous post: I've tried both the Pipelined, Streaming I/O and the Radix-4, Burst I/O modes and am getting the same results. I was expecting Event_data_in_Channel_halt to vanish in Burst mode, since unlike Pipelined I/O, where input, processing and output happen simultaneously, Burst mode performs input, processing and output as mutually exclusive steps. So while it is taking input it won't be processing, which makes the appearance of Event_data_in_Channel_halt even harder to understand.
Moreover, the Event_frame_started signal first gets asserted midway through the input transaction, even in Burst mode, which suggests that the IP Core starts processing while input is still being fed in, contradicting the working definition of Burst mode.