How does clock glitching work on a device with multiple cycles per instruction such as the 8051 architecture?

All of the clock glitching examples I have found deal with MCU architectures such as AVR and ARM, which can typically complete one instruction per clock cycle.

How would one go about clock glitching an architecture such as the 8051, where a single machine cycle typically takes 12 clock cycles? Would you set the glitch to repeat 12 times and step your external offset by 12 each iteration, or would glitching only a portion of those 12 cycles be enough to corrupt the instruction as a whole?
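For concreteness, here is roughly the sweep I have in mind for both approaches. This is just a sketch: `do_glitch()` is a hypothetical placeholder for whatever actually arms the glitch hardware (e.g. setting an external offset and repeat count on a ChipWhisperer-style setup), and the offset ranges are made up.

```python
# Hypothetical sweep comparing the two strategies described above.
# do_glitch() stands in for the real hardware interaction and is assumed
# to return True if the target misbehaved on that attempt.

def do_glitch(ext_offset, repeat):
    """Placeholder: configure the glitcher, trigger one attempt, check the target."""
    return False  # replace with real hardware interaction

# Strategy A: treat each 12-clock machine cycle as one unit --
# step the external offset by 12 and stretch the glitch across all 12 clocks.
for ext_offset in range(0, 1200, 12):
    if do_glitch(ext_offset, repeat=12):
        print(f"Fault at machine-cycle boundary, ext_offset={ext_offset}")

# Strategy B: glitch a single clock edge inside the machine cycle --
# step the external offset by 1 with repeat=1, hoping that corrupting
# one internal phase is enough to fault the whole instruction.
for ext_offset in range(0, 1200, 1):
    if do_glitch(ext_offset, repeat=1):
        print(f"Fault within a machine cycle, ext_offset={ext_offset}")
```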

I have searched around but was not able to find any examples. Does anyone have experience with clock glitching an architecture such as the 8051?

Thanks.