I was in the middle of a neuroscience study today, researching how the brain reacts to AI tools. As I stood behind one of the participants monitoring their brain activity, I noticed their reflection in the turned-off monitor next to me. The scene reminded me of the British series Black Mirror. Have you seen it? It was created by Charlie Brooker, who has said the title was inspired by a literal image: the "black mirror" is the reflective screen of a television, computer, or smartphone when it's turned off. There is a symbolic meaning too, as the screen reflects our own distorted and often dark image back at us as we engage with technology.
The reflection is never perfect; it's skewed by angle, lighting, interference. So it is when we interact with AI. When AI responds, it reflects back a version of "us," but never the whole self. As a guideline, if you're working with AI tools or developing them for others, keep in mind that AI does not reveal the truth: it shows patterns. It's important to keep cultivating the critical thinking skills to know the difference. AI is handing us fragments. The responsibility (and the privilege) is still ours: to assemble the whole story, and to decide which reflection we trust.
After all, every time we look into a black mirror, we don't just see technology; we see ourselves looking back. The question is: do we like the reflection?