Unpack the click-storm

When usability testing, participants sometimes click-storm so furiously that I can't follow what they're doing. Even if the whole storm takes only a few seconds, it can uncover interesting usability issues.
More often than not, a flurry of clicks means the first click didn't do what it was supposed to, so the participant tried clicking a whole bunch of other things, a sort of trial and error.
If I see a click-storm, I prefer to stop the participant and kindly ask them to repeat what they did, slowly this time, so I can see and hear what happened.
DO
- Participant: [furiously clicking around, click, click, click...]
- Researcher: "So you clicked first somewhere around here... Can you do it again, slowly, and narrate your thinking?"
- Participant: "Sure. So first I tried to click this card but nothing happened... is this even clickable? Then I thought this 'play' icon would run the report, but it didn't get the highlight when I hovered over it, so I assumed it was not clickable after all. Then I tried to click the card again to no avail. Finally, I clicked the pencil and that ran the report."
- Researcher: [Got it! He clicked the card, assuming it would run the report. It did, but he didn't notice the progress bar at the top of the page (issue #1), so he assumed nothing had happened. Then he hovered over the 'play' icon but assumed it was not clickable, because he expects clickable buttons to have a hover state (issue #2).]
Asking participants to repeat their actions slowly is especially useful for debugging micro-interactions. The trial-and-error tribulations of micro-interactions happen so fast that we simply can't see them at normal pace.
The risk of the "repeat slowly" technique is that participants might not recall their clicks exactly as they happened the first time. You can cross-validate their account by rewatching the screen recording later.