Multi-port distributed RAM primitives (RAM32M16, RAM64M8, etc.) emulate 1-write, many-read-port RAMs by replicating the memory contents across several simple dual-port RAMs, one per LUT, with their write ports tied together. Since each read port reads from a physically separate copy of the memory, it is presumably possible for those copies to disagree with each other, so that different ports return different values for the same address, at least until that location is written. Does anyone know how likely this is to happen and what the potential causes might be? Obviously SEUs would be one cause, but is it possible for some sort of transient timing violation to make the copies inconsistent, perhaps during asynchronous reset assertion or while a PLL re-locks?
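To make the failure mode concrete, here is a minimal behavioral sketch (Python rather than HDL, and all names are my own) of a 1-write/2-read RAM built from two copies sharing a write port. The skewed initial contents model an upset or inconsistent power-up state:

```python
class ReplicatedRam:
    """Behavioral model of a 1-write/2-read distributed RAM: two
    single-read-port copies with their write ports tied together."""

    def __init__(self, depth, init_a, init_b):
        # In hardware both copies normally initialize identically, but an
        # SEU or a glitched write could leave them skewed.
        self.copy_a = list(init_a)
        self.copy_b = list(init_b)

    def write(self, addr, data):
        # The single shared write port updates every copy, so writing a
        # location restores agreement at that address.
        self.copy_a[addr] = data
        self.copy_b[addr] = data

    def read_port0(self, addr):
        return self.copy_a[addr]

    def read_port1(self, addr):
        return self.copy_b[addr]

# Start the copies deliberately skewed at address 3.
ram = ReplicatedRam(8, init_a=[0] * 8, init_b=[0, 0, 0, 1, 0, 0, 0, 0])
assert ram.read_port0(3) != ram.read_port1(3)   # ports disagree until written
ram.write(3, 1)
assert ram.read_port0(3) == ram.read_port1(3) == 1  # write restores agreement
```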
I'm asking about this because I am working on some code that uses distributed RAM to store state shared between two independent entities, and I am trying to write it so that the initial contents of the RAMs don't matter; that way I don't have to add state machines and write-arbitration logic to initialize all of the RAMs after a reset. It's basically a producer/consumer setup with two entities accessing the RAMs, but with only one write port per RAM I have to get a bit creative.

I need two kinds of data access. The first is an array of flags that one entity can set and the other entity can clear. The second is an array of producer and consumer counters used to track started and completed operations.

The flag array stores each flag as the XOR of the corresponding bits in two 1-bit-wide RAMs: either entity can change any flag by reading the other entity's RAM, XORing that bit with the desired flag state, and writing the result to its own RAM. For the counters, when a new operation starts, the producer reads the consumer's value and writes it into its own RAM so that both counters start equal.

However, I just realized that these techniques won't work correctly if a RAM can return different values on different read ports. I'm assuming this won't be a major issue at run time, SEUs aside, but it could potentially be an issue when the design (but not the whole FPGA) is reset.
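For what it's worth, the XOR-flag scheme can be sketched behaviorally like this (Python, not HDL; class and method names are my own). The arbitrary initial contents show why no reset initialization is needed, and the comments note the assumption that breaks if read ports can disagree:

```python
class XorFlags:
    """Flag i is producer_bits[i] XOR consumer_bits[i].  Each side owns
    one 1-bit-wide RAM and writes only that RAM."""

    def __init__(self, depth):
        # Model arbitrary power-up contents in both RAMs.
        self.producer_bits = [i % 2 for i in range(depth)]
        self.consumer_bits = [(i // 2) % 2 for i in range(depth)]

    def flag(self, addr):
        return self.producer_bits[addr] ^ self.consumer_bits[addr]

    def producer_set(self, addr, value):
        # Read the other side's bit, XOR with the desired state, write own
        # RAM.  Correct regardless of initial contents, but only if the
        # other RAM returns the same bit on every one of its read ports.
        self.producer_bits[addr] = self.consumer_bits[addr] ^ value

    def consumer_set(self, addr, value):
        self.consumer_bits[addr] = self.producer_bits[addr] ^ value

flags = XorFlags(4)
flags.producer_set(2, 1)      # producer raises flag 2
assert flags.flag(2) == 1
flags.consumer_set(2, 0)      # consumer clears it
assert flags.flag(2) == 0
```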
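And a similar sketch of the counter scheme (again Python with invented names, modeling one counter pair per address): the producer's copy-on-start step makes the arbitrary power-up contents irrelevant, assuming consistent reads.

```python
class OpCounters:
    """Producer and consumer counters held in two 1-write-port RAMs;
    (started - completed) mod 2**width is the number of operations
    in flight at each address."""

    def __init__(self, depth, width=8):
        self.mask = (1 << width) - 1
        # Model arbitrary, disagreeing power-up contents in the two RAMs.
        self.started = [0x5A & self.mask] * depth
        self.completed = [0xA5 & self.mask] * depth

    def begin_stream(self, addr):
        # Producer reads the consumer's counter and writes it to its own
        # RAM, so both sides start equal whatever the RAMs contained.
        self.started[addr] = self.completed[addr]

    def start_op(self, addr):
        self.started[addr] = (self.started[addr] + 1) & self.mask

    def complete_op(self, addr):
        self.completed[addr] = (self.completed[addr] + 1) & self.mask

    def in_flight(self, addr):
        return (self.started[addr] - self.completed[addr]) & self.mask

ops = OpCounters(4)
ops.begin_stream(0)           # counters now agree despite junk contents
assert ops.in_flight(0) == 0
ops.start_op(0)
ops.start_op(0)
ops.complete_op(0)
assert ops.in_flight(0) == 1  # two started, one completed
```

Modular (wrapping) arithmetic keeps the difference valid as long as no more than 2**width - 1 operations are ever outstanding at once.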