04-05-2011 02:01 AM
I have done a few software-to-hardware ports in the computer vision field.
After the first one I came up with the following procedure:
1. Create a test bench that gives you "hard" numbers for a given algorithm (e.g. for visual feature detectors and descriptors, use images related by known homographies, and test how many points your algorithms find and match in both images)
2. Create an early model of your algorithm's FPGA implementation and identify the critical internal data.
Critical data in this case is data that needs to be stored in Block RAMs (e.g. when using line buffers), needs a lot of parallel processing, etc. (any data that would consume many FPGA resources if it had too many bits)
3. For every piece of critical internal data, modify your algorithm so that you can specify a maximum accuracy for that data, i.e. the number of bits you want to allow.
4. Measure the impact of the different bit accuracies on your test scenario and plot the results as curves.
5. Analyse your curves and select an appropriate number of bits for every piece of critical data!
Then start with the actual FPGA design!
This seems to work out quite well for me!
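As a rough illustration of steps 3 and 4, the bit-accuracy sweep can be simulated in software before any HDL is written. This is only a sketch under assumed conditions: the "critical data" here is a made-up array of values in [-1, 1), and the `quantize` helper is a hypothetical stand-in for whatever fixed-point representation your design actually uses.

```python
import numpy as np

def quantize(data, bits, max_abs):
    """Quantize floating-point data to a signed fixed-point grid with the
    given total number of bits (sketch; one sign bit, rest magnitude)."""
    step = max_abs / (2 ** (bits - 1))
    # Snap to the nearest representable value and clip to the signed range.
    return np.clip(np.round(data / step) * step, -max_abs, max_abs - step)

# Hypothetical critical data: internal values produced by your algorithm,
# here just random numbers standing in for the real test-bench output.
rng = np.random.default_rng(0)
reference = rng.uniform(-1.0, 1.0, size=1000)

# Step 4: sweep the allowed bit width and record the accuracy impact.
for bits in range(4, 13, 2):
    err = np.abs(quantize(reference, bits, max_abs=1.0) - reference).mean()
    print(f"{bits:2d} bits -> mean abs error {err:.5f}")
```

In a real port you would feed the quantized data through the rest of the test bench (e.g. count the matched feature points) instead of just measuring numeric error, and then pick the knee of the resulting curve.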
Can I ask you what kind of algorithm you are porting and what purpose it is going to be used for?
04-24-2012 04:53 AM
That is a tough task for a newbie. In fact, that is a tough task for anyone! The problem you are trying to solve is an active research issue in the FPGA design field. Accelerating the pseudoinverse computation of matrices in hardware is a very complex and challenging task. I just think they asked too much of you.