AI systems can help, but can they also intimidate?
Here is a chart from a neuroscience study I conducted comparing human and AI coaches. The research question was: is AI competent enough to coach? In the study, I equipped sellers with EEG, ECG, GSR, and eye-tracking gear and asked them to complete a simulation with an angry customer, practicing empathy skills from a checklist they received beforehand. Once the simulation was over, participants received feedback on the items in that checklist. I first divided participants into two groups: one completed the simulation and received feedback from an AI system, and the other completed both phases with a human coach. Within each group, I further subdivided participants and primed them to believe they were interacting with either an expert coach (AIX or HX) or a beginner coach (AIB or HB).
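For readers who want to picture the design, here is a minimal sketch of that 2x2 assignment: coach type (AI vs. human) crossed with primed expertise (expert vs. beginner). The participant IDs, group sizes, and the `assign_conditions` helper are hypothetical illustrations, not the study's actual procedure.

```python
import random

# The four study cells described above:
# AI-expert, AI-beginner, human-expert, human-beginner.
CONDITIONS = ["AIX", "AIB", "HX", "HB"]

def assign_conditions(participant_ids, seed=42):
    """Randomly balance participants across the four study cells."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    # Round-robin assignment keeps the four groups the same size (+/- 1).
    return {pid: CONDITIONS[i % 4] for i, pid in enumerate(ids)}

if __name__ == "__main__":
    groups = assign_conditions(range(1, 25))  # e.g., 24 participants, 6 per cell
    for pid, cond in sorted(groups.items()):
        print(pid, cond)
```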
This chart shows sellers’ voice activity during the simulation and feedback phases. Note the differences between the AI and human conditions: sellers talk for a similar amount of time whether they believe the human coach is a “beginner” or an “expert,” but their talk time varies much more from person to person. That variability largely disappears when they speak with an AI system.
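To make “variability” concrete, here is a minimal sketch of how talk-time spread per condition could be quantified. The `talk_time` numbers below are invented to mirror the pattern described (similar means across the human conditions, tighter spread in the AI conditions); they are not the study’s data.

```python
import statistics

# Hypothetical per-participant talk time (seconds) during the simulation,
# keyed by condition label. Illustrative values only.
talk_time = {
    "HX":  [212, 145, 298, 176, 251, 139],   # human coach, primed "expert"
    "HB":  [208, 131, 285, 190, 243, 152],   # human coach, primed "beginner"
    "AIX": [201, 195, 214, 189, 207, 198],   # AI coach, primed "expert"
    "AIB": [199, 204, 192, 210, 188, 201],   # AI coach, primed "beginner"
}

for cond, values in talk_time.items():
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    # Coefficient of variation: spread relative to the mean, so conditions
    # with different average talk times can be compared directly.
    cv = sd / mean
    print(f"{cond}: mean={mean:.0f}s  sd={sd:.0f}s  cv={cv:.2f}")
```

Run on data shaped like this, the human conditions show similar means but much larger standard deviations than the AI conditions, which is the pattern the chart illustrates.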
What does this data tell us in practical terms? If you deploy AI systems to train yourself and your teams, you can expect more standardized, uniform performance than you would get from human coaches. But that raises a question you have to answer: is standardized performance what you’re really after? Is something lost when you give up human variability, which is often tied to authenticity? Will the future be filled with armies of standardized sellers who are not that different from standardized AI sellers? Are we prepared for that future?