Beyond Black Mirror: Illusion, risk, and ethical AI
Dive into what it takes to develop and deploy AI ethically, based on our Generation AI podcast
Nefarious criminals watching your every move, a system of social ranking that decides your life’s path, tech as a tool for healing and connectivity — pop culture from Black Mirror to I, Robot to Blade Runner has people excited (and scared) about the potential power AI delivers straight to our fingertips.
But these references leave people with perceptions of AI technologies that may not be quite accurate, according to Miriam Vogel, President and CEO at EqualAI and Chair of the White House’s National AI Advisory Committee. Miriam joins us on an episode of Generation AI to dive into what it takes to ethically and responsibly develop and deploy artificial intelligence.
The power of perceptions of AI (and why we should be critical of its applications and limitations outside of fiction)
From AI experts like Miriam to everyday people just dipping their toes into the digital world, there is growing interest in both mitigating the potential harms and leveraging the vast possibilities of increasingly intelligent tech. It was those possibilities, the promise of innovation and positive applications, that first attracted Miriam to the field.
Like many others, Miriam first encountered AI systems through pop culture.

“I’d seen I, Robot and other movies and read the books along the way,” she says. “But with the Black Mirror episodes, just trying to imagine the technology that was almost at our fingertips, if not already there — that just completely changed our reality by taking it to the extreme.”
These extreme portrayals genuinely frightened many people, and those fears resurfaced as immediate controversy and concern at the public release of ChatGPT, a launch that set off a cascade of development and countless new AI tools.
While some of these perceptions inspired fear and others revealed possibilities, the power of perception quickly became apparent as questions, ideas, and potential implications of AI use swirled across social media, news platforms, and legal circles. AI, particularly generative AI, remains a murky and largely unexplored area in terms of regulation, application, and basic understanding. But according to Miriam, it’s vital to remain critical of AI applications outside of fiction and keep realistic expectations (and concerns) top of mind.
The importance of privacy and ethical development
The technology behind every potentially nefarious application could also be put to a myriad of positive uses. Realizing those outcomes, especially in areas like automation, depends heavily on strong ethical principles and respect for privacy, which largely shape the impact of these AI technologies.
According to Miriam, there is a difference between ethical AI and trustworthy or responsible AI development.
“Ethical AI is the highest bar and the most challenging to define because ethics is so dependent upon where you sit,” she says. “It has to be so personalized, depending on the user, the various users, the creator, and all the interplay there.”
While AI ethics are important, Miriam finds it helpful to focus on trustworthy and responsible AI. Every company and stakeholder cares about being responsible — they want their products and companies to be trustworthy and reliable. For any service or company powered or touched by AI in any form, Miriam suggests asking the following questions:
- Is it safe and reliable?
- Is it made for the intended user?
- Are users being seen and heard?
- Is data being used in the intended way and not used against the user?
- Is personal privacy being respected?
- Is it usable? And is its usability clear in real-world, in-home deployment?
- What is the system trained on? Is it biased? Will it actually see your intended user?
At EqualAI, data ethics and privacy are mandatory. The organization enforces this by building an infrastructure of expectations to ensure what it calls “good AI hygiene.”
“You need a framework in place to make sure that when used in pivotal ways, it’s safe, it’s accounted for,” Miriam says. “Luckily, we have many great frameworks to choose from.”
These ethical standards and frameworks are relatively new as companies navigate the quickly growing and increasingly intelligent world of AI. Organizations such as NIST offer risk management frameworks and tools to help ensure your AI systems and tech stack are safe for your business and your end users.
Creating a safe outlet and reporting system for questions and concerns from internal and external stakeholders is also vital to the safety and longevity of an AI strategy.
“AI is going to continue to iterate learning patterns. You want to make sure you’ve built trust so people can tell you about a problem before it becomes too big,” Miriam says. “You want to standardize your uses and processes. You want to document and audit them because AI is going to continue to iterate.”
Government’s role in AI regulation and today’s political landscape
To successfully implement and enable ethical AI on a large scale, there needs to be a multi-stakeholder approach. By integrating multiple points of view and angles into the development, integration, and maintenance of AI, the technology can be better understood and ultimately leveraged for maximum effect.
For this to be possible, the government will need to play a key role in making sure people are AI-ready and have the digital literacy necessary to understand the technology that powers these systems.
“It’s absolutely incumbent upon the government to make sure we are capable of participating in an AI economy. Jobs will be lost, but the data shows many more jobs will be created,” Miriam says. “We need to make sure the population is ready to participate so they can enjoy being part of the AI economy and the benefits that come with it.”
Governments worldwide have already begun diving into policy development and regulation. From privacy protection to education efforts that begin in K-12 classrooms, AI will inevitably become fundamental to political agendas — nor should it escape them.
“There’s a lot of bipartisan support and interest in the fundamentals of making sure that AI is consistent with our current laws and thinking about the horizon — understanding the challenge we have here of understanding how AI should look in our AI regulatory policy future,” Miriam says.
Does the current state (and future) of AI technology scare or excite you?
Listen to the full episode of Generation AI to hear our full conversation with Miriam and learn more.