Thanks for the report. I'm not sure how this happened, since we've mostly avoided touching the analyzer part of the code. While we track down the issue, I'd recommend using the LASCAR or SCARED analysis libraries for large trace sets, as they're much faster.
This has been fixed on the latest develop commit. The root cause was a floating-point precision issue: once the denominator in the correlation calculation grows large enough, the accumulated sums lose the low-order bits that the final subtraction depends on. For now we're just casting to long double, which doesn't eliminate the problem but should push it out to a much later trace.
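To illustrate the failure mode (this is not the actual analyzer code, just a minimal NumPy sketch): the sum-of-squares form of the correlation denominator, `n*Σx² − (Σx)²`, subtracts two huge, nearly equal numbers, so once the sums outgrow the mantissa the result is dominated by rounding noise. Accumulating in a wider type (float64 here, analogous to the long double cast) pushes the breakdown out to far more traces:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Samples with a large mean relative to their spread, like raw ADC traces.
x = rng.normal(loc=100.0, scale=0.01, size=n).astype(np.float32)

# Naive sequential accumulation in single precision.
sx32 = np.float32(0.0)
sxx32 = np.float32(0.0)
for v in x:
    sx32 += v
    sxx32 += v * v
# Sum-of-squares denominator term: n*sum(x^2) - (sum(x))^2.
# Both products are ~1e14 while the true difference is ~1e6, which is
# below the float32 spacing at that magnitude, so den32 is mostly noise.
den32 = np.float32(n) * sxx32 - sx32 * sx32

# Same formula with the sums carried in double precision (analogous to
# the long double cast): the cancellation still happens, but float64
# keeps enough mantissa bits for the difference to survive.
x64 = x.astype(np.float64)
sx64 = x64.sum()
sxx64 = (x64 * x64).sum()
den64 = n * sxx64 - sx64 * sx64

print(f"float32 denominator: {float(den32):.6g}")
print(f"float64 denominator: {den64:.6g}")
```

The wider accumulator only delays the cancellation rather than removing it, which matches the "pushed back to a later trace" caveat; a mean-centered (Welford-style) update avoids the large intermediate sums entirely.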