Happy New Year! I hope your holiday break was sufficient, and that you have all avoided the flu that is going around. I had a great holiday, and have been healthy, so I am not going to complain. I do have an issue that I need to discuss: making it look like science.
Ask No Questions, It Is Science
Recently, I have seen a few examples of Xilinx literature being misused, either to justify some conclusion or to make a case for or against an application. I will reveal a couple of them here: Rosetta1 data (ug116.pdf2) applied to non-Xilinx components, and LANSCE3 cross-sections used to justify claims about the effects of thermal neutrons4.
The Rosetta program, which started in August 2002, has been the Xilinx way of measuring the actual soft error performance5 of our FPGA devices. Over the years, we have filed many patents, built test chips at every technology node, and pioneered improvements at every opportunity.
Crediting a non-Xilinx product with all of our improvements may seem strange, but since we are the only people who regularly publish this data, it is no big surprise to find all those blanks in a spreadsheet filled in with our numbers. Why? Simply because it would look really terrible to a manager if a soft error study review had a bunch of “don’t know,” “not available,” or “only available under NDA” in all the little boxes.
So, what did they do and hope no one would notice? They took our data from ug116, the Device Reliability Report, and attributed it to all the blank spots.
There are two things wrong with this. First, the data is for Xilinx devices only. Second, the actual devices in question use none of our techniques and are known to be at least twice as bad, and in some cases an order of magnitude worse.
So, let us be honest: do not use our data for other people’s parts! You are not only fooling yourself and your management, but you may also be endangering your customers (in safety-critical applications).
The Los Alamos Neutron Science Center (LANSCE) is a world standard facility for testing susceptibility to soft upsets. Its 800 MeV proton beam strikes a water-cooled tungsten target to create 0-800 MeV neutrons. The beam also has an appreciable thermal neutron component that is essentially uncharacterized and not specified: the thermal neutron count delivered is unknown, and may vary widely from visit to visit. So, the neutrons that get counted for beam testing are those from 10 MeV up to 800 MeV.
It is well known that some devices were fabricated with Boron 10 and, as a result, are susceptible to soft errors from thermal neutrons. For example, the Spartan-6 devices fabricated at Samsung on the 45 nm process contain Boron 10, and this is easily seen by comparing the LANSCE cross-section with the Rosetta (real-world) numbers. LANSCE predicts 129 FIT/Mb (12.9 n/cm²/hr * cross-section * 1 Mb * 1E9 hours). Rosetta measures 185 FIT/Mb. The difference (+43%) is attributable to the thermal neutron flux, which is estimated to be about 6 thermal neutrons per square centimeter per hour. If the thermal cross-section is comparable to the high-energy cross-section, the 43% excess corresponds to 0.43 * 12.9 ≈ 5.5 thermal neutrons per square centimeter per hour, in good agreement with that estimate--the increase in FIT/Mb is just what we would expect.
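The arithmetic above can be checked in a few lines. This is only a sketch of the back-of-the-envelope calculation in the paragraph; all inputs are the figures quoted in the text, and the comparable-cross-section assumption is the same one the argument relies on.

```python
# All numbers below are the figures quoted in the text, not new measurements.
lansce_fit_per_mb = 129.0    # FIT/Mb predicted from LANSCE (>10 MeV) beam data
rosetta_fit_per_mb = 185.0   # FIT/Mb measured atmospherically by Rosetta
high_energy_flux = 12.9      # >10 MeV atmospheric neutron flux, n/cm^2/hr

# Fractional excess of the atmospheric rate over the beam-derived rate
excess = rosetta_fit_per_mb / lansce_fit_per_mb - 1.0   # ~0.43

# Assuming the thermal cross-section is comparable to the high-energy one,
# the excess corresponds to an equivalent thermal neutron flux of:
implied_thermal_flux = excess * high_energy_flux        # ~5.6 n/cm^2/hr

print(f"excess = {excess:.2f}")
print(f"implied thermal flux = {implied_thermal_flux:.1f} n/cm^2/hr")
```

The result lands near the estimated thermal flux of about 6 n/cm²/hr, which is the agreement the text points out.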
So, LANSCE data does not account for thermal neutrons, and the thermal neutron cross-section must be measured by an accepted method. LANSCE does have a setup to measure the thermal neutron cross-section, but it is not done directly: it requires a thermal-neutron-shielded box made of cadmium. One measures the upsets both inside and outside the box, and the difference establishes whether Boron 10 is present. A quantitative measurement is, however, very difficult, as no real calibration is possible. One is then forced to perform a real atmospheric test on an array (Rosetta) to get an actual value for the thermal neutron cross-section.
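The cadmium-box comparison above can be sketched as a simple difference test. The upset counts here are made-up illustrative numbers, not measured data, and the three-sigma threshold is my own crude Poisson significance check; as the text notes, the method indicates whether Boron 10 is present but does not calibrate a cross-section.

```python
# Hypothetical upset counts for equal-fluence runs (illustrative only)
upsets_unshielded = 1200   # device exposed to the full beam, thermals included
upsets_in_cd_box = 950     # device inside the cadmium box, thermals absorbed

# Excess upsets attributable to thermal neutrons
thermal_upsets = upsets_unshielded - upsets_in_cd_box

# Crude significance test: is the excess well above counting noise
# (~sqrt(N) for Poisson statistics)?
significant = thermal_upsets > 3 * (upsets_unshielded ** 0.5)

if significant:
    print("Significant thermal-neutron sensitivity: Boron 10 likely present")
else:
    print("No significant thermal-neutron contribution observed")
```

Note that this yes/no outcome is all the shielded-box method delivers; the actual thermal cross-section still has to come from an atmospheric (Rosetta-style) measurement.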