A Test We Cannot Fail
CAPTCHA — Completely Automated Public Turing test to tell Computers and Humans Apart. We see it regularly and pass it without giving it a second thought. Select the photos that contain a bridge. Sure, here you go. Now let me open an account with you, since you know I am human.
It’s a gatekeeper ensuring that the internet isn’t overrun with malicious software looking to create havoc in our online lives.
But what if we had to pass a reverse Turing test? How confident would you be that you could tell whether a piece of content was created by a human or a machine?
Having just completed a course with the AI Design Academy, I have come away from the experience with mixed feelings. While I was blown away by the potential use cases for improving workflows, and by no-code methods democratising the creation of new products, a certain part of me was concerned.
Perhaps that stems from my nature as a bit of a worrier. My friends would say I am slightly cautious and my critics would call me an outright pessimist, but my way of thinking is to consider all angles and try to figure out potential outcomes.
Generative AI is incredible. There are use cases that genuinely make me proud to be human, which for a grizzled pessimist like me is some feat. The Madrid Health Service is using AI to help diagnose rare diseases, reducing the time it takes from years to mere minutes. This collaboration between doctors and AI is the perfect example of responsible and supervised use in a healthcare setting.
In education, the technology is being used to democratise the delivery of knowledge to more people. Tailored lessons can allow each student to progress at their own pace, and companies like Moodle are dedicating themselves to providing inclusive, lifelong learning opportunities for all.
However, generative AI is also cause for alarm. It is proliferating at an exponential rate, both in its ability to learn and in the volume of content it produces. Nina Schick warns that as much as 90% of online content could be created by AI by 2025.
Think about that for a moment. Imagine looking at your screen and seeing a video of an interview with your favourite celebrity but not knowing if it is genuinely them talking. Or opening your news app and not being sure if the article you’re reading about current affairs is both factual and written by a human hand.
If we end up in a place where we can no longer be sure what is real and what is fake online, where do we as users go? Deeper into our echo chambers? The dark web? What will the online ecosystem look like when we are sharing our space with AI and potentially being swamped by the sheer volume of content it can produce?
We are already living in a world where AI systems use facial recognition to locate and eliminate targets autonomously, without human oversight. Who is accountable for these lethal systems? What should the rules of conduct be, and who would enforce them?
In the USA, Congress has been debating the rise of AI-generated child abuse content, leading many parents to request that their children’s photos not be posted online.
As with all technology, we have a choice in how we use it. As designers and content creators, we need to discuss how to regulate our own sphere, creating a safe space for the innovative use of AI while protecting ourselves from malicious actors looking to exploit its capabilities.
Pandora’s box has already been opened. AI is here and only going to get smarter, quicker, and more effective. Most people will use this technology to solve a problem in their industry or community, but a minority will inevitably use it to destabilise our secure systems and sow confusion about the authenticity of content.
Protecting ourselves, our systems, and our institutions against that threat will define our experience — both online and in the real world — in the years to come.