10-09-2019 10:54 AM
Why doesn't the AI Engine support double-precision floating point? How much bigger would the processor be if it supported 8-lane double-precision vectors? My application requires double-precision accuracy. Must I build it out of PL fabric?
10-10-2019 01:33 AM
That is just the way the architecture has been defined, based on the requirements of most of the industry. It is impossible to meet everybody's requirements.
If you add double-precision floating point, you get a bigger core and thus less compute power.
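For what it's worth, on hardware that only has single-precision units, extra precision can be emulated in software with error-free transformations (double-double style arithmetic), at the cost of several SP operations per emulated operation. A minimal sketch of Knuth's two-sum, using NumPy `float32` to stand in for an SP-only datapath (this is an illustrative technique, not Xilinx tooling):

```python
import numpy as np

def two_sum(a, b):
    """Knuth's error-free addition: returns (s, err) such that
    s = fl(a + b) and s + err equals a + b exactly, all in float32."""
    a, b = np.float32(a), np.float32(b)
    s = np.float32(a + b)             # rounded single-precision sum
    bb = np.float32(s - a)            # the part of b that made it into s
    err = np.float32(np.float32(a - np.float32(s - bb)) +
                     np.float32(b - bb))  # rounding error, recovered exactly
    return s, err

# 1 + 1e-8 is not representable in float32; a naive SP add loses the 1e-8.
hi, lo = two_sum(1.0, 1e-8)
# hi is 1.0, while lo recovers the lost 1e-8, so the pair (hi, lo)
# carries roughly twice the precision of a single float32.
```

The several-SP-ops-per-add overhead is exactly the compute-power trade-off the reply above alludes to: you can buy precision with throughput, or silicon, but not neither.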
10-10-2019 05:55 AM
Is supercomputing not a significant industry to Xilinx? Let's compare against GPUs, which compete for the same cloud slots at AWS (Amazon Web Services). The AI Engine has 0% DP (double-precision) to SP (single-precision) performance. Gaming GPUs are around 10% DP to SP; compute GPUs around 50%. Doesn't AI training use DP? Is Xilinx only interested in deployment of AI?
Thanks again for your reply. I know of no other means of communicating with Xilinx. I believe Xilinx is missing the processor architecture design and simulation market. Processors are the most adaptable form of hardware, and FPGAs provide the best material for building and simulating multiprocessors. One could create an FPGA multiprocessor with simple software tools to compete with the custom-ASIC-like FPGA designs produced by HLS. Such a processor could be sold with an FPGA as an end-user, software-only, reusable design, whereas custom-ASIC-like FPGAs need difficult hardware redesign. Custom application-area multiprocessors, which is what the AI Engine is, need compilers. How does Xilinx address programming the AI Engine?
Genesis One Technologies