Hello, does anyone know why using the correct key-guess to sort traces returns a high difference of means of power between the two pools (one pool contains all traces where the LSB is 0 and the other contains all traces where the LSB is 1), whereas using an incorrect key-guess returns a difference of means close to zero?
The attack works as follows:
If your key guess is correct, then you successfully sort the traces into a group where the LSB is 0 and a group where it is 1. Given enough traces, the ones where the LSB is 1 will, on average, show a higher power consumption at the point in time where that data is being processed, while the traces in the LSB = 0 group will show a lower power consumption than average at that point.
For a simpler example, consider the two-bit case. The LSB = 1 group has all the traces where the data is 11, which consumes a high amount of power, and 01, which consumes an average amount of power. The LSB = 0 group has 10, which consumes an average amount of power, but it also has 00, which consumes a low amount of power. The difference is less stark when processing 8 bits, but it's still there.
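The two-bit arithmetic can be sketched under a Hamming-weight power model (an assumed, noiseless leakage model chosen for illustration; real traces add measurement noise on top of this):

```python
# Two-bit illustration: model the power consumed as the Hamming weight
# (number of 1 bits) of the data being processed.
def hamming_weight(x: int) -> int:
    return bin(x).count("1")

values = [0b00, 0b01, 0b10, 0b11]

lsb1 = [hamming_weight(v) for v in values if v & 1]      # 01 and 11
lsb0 = [hamming_weight(v) for v in values if not v & 1]  # 00 and 10

mean1 = sum(lsb1) / len(lsb1)  # (1 + 2) / 2 = 1.5
mean0 = sum(lsb0) / len(lsb0)  # (0 + 1) / 2 = 0.5
print(mean1 - mean0)           # difference of means = 1.0
```

The same calculation over 8-bit data gives group means of 4.5 and 3.5, so the gap survives, just relative to a larger baseline.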
With an incorrect key guess, on the other hand, the sorting is effectively random with respect to the data actually being processed. Both groups therefore tend towards the average level of power consumption, and the difference between their means is small.
But won’t using an incorrect key-guess also mean that the LSB = 1 group will have traces with 11 and 01, and the LSB = 0 group will have traces with 10 and 00? Wouldn’t the difference of means be the same regardless of the key-guess?
No. The pools are formed from the LSB *predicted* by the key guess, not from the actual LSB of the data the device processed. Because the targeted intermediate is the output of a nonlinear operation (typically an S-box), a wrong guess predicts the actual bit correctly only about half the time. Each pool then contains an effectively random mix of high- and low-power traces, so each pool's mean is close to the mean of the entire set of traces and the difference of means is close to zero.
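Here is a sketch of the whole argument as a noiseless, exhaustive simulation. Everything concrete in it is an assumption made for the demo: the target is the LSB of one AES S-box output, the leakage is modeled as the Hamming weight of that output, `SECRET_KEY` is an arbitrary key byte, and every plaintext byte is "measured" exactly once. The correct guess gives a difference of means of exactly 1.0, while wrong guesses land much closer to zero:

```python
# Difference-of-means DPA sketch against a single AES S-box lookup.
# Assumed leakage model: power = Hamming weight of the S-box output.

def gf_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def build_aes_sbox() -> list:
    """AES S-box: multiplicative inverse in GF(2^8), then the affine map."""
    sbox = []
    for x in range(256):
        inv = next((y for y in range(1, 256) if gf_mul(x, y) == 1), 0)
        b = 0
        for i in range(8):
            bit = ((inv >> i) ^ (inv >> ((i + 4) % 8)) ^ (inv >> ((i + 5) % 8))
                   ^ (inv >> ((i + 6) % 8)) ^ (inv >> ((i + 7) % 8))
                   ^ (0x63 >> i)) & 1
            b |= bit << i
        sbox.append(b)
    return sbox

SBOX = build_aes_sbox()
SECRET_KEY = 0x2B  # the key byte the simulated device uses (demo choice)

def difference_of_means(key_guess: int) -> float:
    pools = {0: [], 1: []}
    for p in range(256):  # every plaintext byte once: noiseless best case
        power = bin(SBOX[p ^ SECRET_KEY]).count("1")  # what the device leaks
        predicted_lsb = SBOX[p ^ key_guess] & 1       # attacker's sorting bit
        pools[predicted_lsb].append(power)
    return sum(pools[1]) / len(pools[1]) - sum(pools[0]) / len(pools[0])

doms = {g: difference_of_means(g) for g in range(256)}
best = max(doms, key=lambda g: abs(doms[g]))
print(f"correct key 0x{SECRET_KEY:02X}: DoM = {doms[SECRET_KEY]:+.3f}")
print(f"largest |DoM| at guess 0x{best:02X}: {doms[best]:+.3f}")
```

With the correct guess, the LSB = 1 pool contains exactly the plaintexts whose S-box output is odd, so its mean Hamming weight is 4.5 against 3.5 for the other pool. With a wrong guess, the nonlinearity of the S-box scrambles the prediction, and each pool's mean collapses towards the global mean of 4.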