Managing AI chatbots to be less discriminatory: An exclusive Q&A with EqualAI® President & CEO, Miriam Vogel
Why it’s up to us to “write and right” the future with EqualAI
LivePerson: Thank you for indulging us, Miriam. You have a very interesting background! Care to walk us through your career path that led you to EqualAI?
Miriam Vogel: Thank you for the interest! Early on, I worked at the intersection of technology, policy, and the law as a lawyer in private practice and in government. I had the opportunity to address the problem of bias in more traditional contexts, such as leading President Obama’s Equal Pay Task Force and the effort to create implicit bias training for federal law enforcement at the Department of Justice, under the direction of Deputy Attorney General Sally Yates. I was later asked to lead EqualAI based on a background that is currently unconventional in this space, but that I hope will become more common as we understand the importance of multi-stakeholder efforts.
Given that orientation, our focus at EqualAI is on driving multi-stakeholder efforts to ensure our technology platforms are equitable and inclusive. We see implicit bias in AI as an age-old issue that is surfacing in a new medium, but now at scale and with graver potential impact.
LP: What a journey you’ve had! We’re curious: are there any past examples where a Conversational AI platform or AI chatbot fell short in preventing biased engagements?
MV: One of the first and best-known AI chatbots is also one of the clearest examples of the hazards in this field. In 2016, a multinational software company unveiled an AI chatbot, as a machine learning project, designed for human engagement. The company described the chatbot as an experiment in “conversational understanding.” By design, the more people “chatted” with the bot, the smarter it was supposed to get, learning to engage people through “casual and playful conversation.” The experiment was indeed educational: the chatbot was quickly taught misogynistic, racist, and discriminatory language, and the company pulled it down within 16 hours of its launch.
There have been similar examples of bots that were allowed to “learn” on social media platforms, given their breadth of content and discussion, each with a similar result. Nor are these failures limited to conversation; they surface in visual AI, such as face recognition, as well. A recent Harvard study found that “face recognition algorithms boast high classification accuracy (over 90%), but these outcomes are not universal. A growing body of research exposes divergent error rates across demographic groups, with the poorest accuracy consistently found in subjects who are female, Black, and 18-30 years old. These compelling results have prompted immediate responses from the tech industry, shaping an ongoing discourse around equity in face recognition.”
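The disparity that study describes becomes visible only when accuracy is measured per demographic group rather than in aggregate. As a minimal sketch (the group labels and numbers below are illustrative, not the study’s actual data), this is the kind of disaggregated evaluation the research relies on:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute classification accuracy overall and per demographic group.

    Each record is a (group, correct) pair, where `correct` is True if the
    system classified that subject correctly. Groups and counts here are
    hypothetical placeholders used only to illustrate the disparity.
    """
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        hits[group] += int(correct)
    overall = sum(hits.values()) / sum(totals.values())
    per_group = {g: hits[g] / totals[g] for g in totals}
    return overall, per_group

# Aggregate accuracy can look strong while one group fares notably worse:
records = ([("group_a", True)] * 95 + [("group_a", False)] * 5
           + [("group_b", True)] * 80 + [("group_b", False)] * 20)
overall, per_group = accuracy_by_group(records)
# overall is 0.875, but group_b's accuracy is only 0.80
```

Reporting only the 0.875 aggregate would hide that one group's error rate is four times the other's, which is exactly the pattern the study highlights.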
LP: It’s easy to see how AI chatbots can “learn” harmful language from abusive users. What has inspired you, in your career and your life, to create a more EqualAI platform for both businesses and consumers, and what does an inclusive conversational platform look like in your view?
MV: We believe we are at a critical juncture because AI is becoming an increasingly important part of our daily lives — while decades of progress made and lives lost to promote equal opportunity can essentially be unwritten in a few lines of code. And the perpetrators of this disparity may not even realize the harm they are causing. For example, our country’s long history of housing discrimination is being replicated at scale in mortgage approval algorithms that determine creditworthiness using proxies for race and class. An exciting innovation in AI deep learning language modeling, GPT-3, is also demonstrating its problematic biases, such as generating stories depicting sexual encounters involving children and exhibiting biases against people based on their religion, race, and gender.
Our goal at EqualAI is to help avoid perpetuating these harms by offering programs, frameworks, and strategies to establish responsible AI governance. A Conversational AI platform has a key role to play because it helps us communicate, frames our conversations, and supports our interactions. If we do not address bias in artificial intelligence systems, our conversational platforms will literally be unable to hear or include various voices. There is also an opportunity: if we get this right, we ensure that more communities and individuals can participate — and be heard.
LP: We couldn’t agree more that biases in artificial intelligence must be addressed. For LivePerson, how can our Conversational AI contribute to EqualAI initiatives?
MV: Focusing on the principles of responsible AI, such as ensuring good AI hygiene and oversight by senior leadership, furthers the same end goal: ensuring that AI is not only safer for a broader cross-section of the population, but that more people are able to benefit from their Conversational AI platforms.
To clarify, responsible AI governance includes:
- Identifying which values and biases will be tested routinely
- Articulating the stages of the AI lifecycle development at which the testing will be conducted (e.g., pre-design, design and development, deployment)
- Establishing the cadence for testing
- Documenting relevant findings and the completion of each stage to promote consistency, accountability, and transparency
- Identifying the designated point of contact who ultimately owns this responsibility, including coordinating incoming questions and concerns, ensuring that responses are consistent, and making sure new challenges are elevated and addressed appropriately
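As a rough illustration of the governance elements above (all stage names, cadences, and owner labels below are hypothetical placeholders, not part of EqualAI’s materials), a team could capture its testing plan in a simple machine-readable form that documents findings at each lifecycle stage:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: attribute names, stages, cadences, and owners
# are invented examples, not an official EqualAI checklist.

@dataclass
class BiasTest:
    attribute: str   # which value or bias is routinely tested (e.g., gender)
    stage: str       # lifecycle stage: pre-design, design, development, deployment
    cadence: str     # how often the test is run
    owner: str       # designated point of contact who owns the responsibility
    findings: list = field(default_factory=list)  # documented results per run

    def record(self, note: str) -> None:
        """Document a finding to promote consistency, accountability, transparency."""
        self.findings.append(note)

# A governance plan is then just the list of tests the organization commits to:
plan = [
    BiasTest("gender", "development", "monthly", "ai-governance-lead"),
    BiasTest("race", "deployment", "quarterly", "ai-governance-lead"),
]
plan[0].record("2024-01: no statistically significant disparity found")
```

Writing the plan down this way makes the cadence, ownership, and documentation trail explicit, which is the point of the checklist items above.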
The FTC guidance and the EqualAI Checklist© are two sources of additional guidance on best practices for building responsible AI governance. As you can see, there is significant overlap between the paths toward, as well as the end goals of, both Curiously Human™ Conversational AI and responsible AI.
LP: Thank you for breaking all of that down for us. It’s obvious you’ve become an expert in the field of responsible AI chatbots. So, we’re “curious” about what your elevator pitch would be to a brand leader who still needs to be convinced to take the EqualAI pledge?
MV: I’d say that, intentional or not, without our intervention harmful bias can get baked into the design and development of the AI systems you are currently using in pivotal functions, from hiring to financial and health determinations. This will impact whether you enjoy trust — from your employees and consumers alike. And if AI governance is done wrong, or not at all, it could result in litigation as new and existing laws are used to establish liability.
By taking the EqualAI pledge to reduce bias in your AI systems, you are publicly committing to create and deploy more just systems. AI may not be the sole source of our challenges with discrimination, but your pledge is a public commitment to use AI as a tool to reduce harmful bias, not replicate and spread it. You can commit to supporting AI that is designed to serve and protect a broader cross-section of our population.
With the EqualAI pledge, you can be part of the solution to this pervasive and increasing problem.
LP: To follow up on that, are you seeing more interest from brands and businesses in taking EqualAI measures for their AI chatbots and conversational platforms today?
MV: Definitely! We have been thrilled to have an increasing number of companies and organizations reach out to participate in our programs, such as enrolling senior executives in our badge certification for responsible AI governance, taking the EqualAI pledge, and becoming EqualAI members.
LP: The next EqualAI Badge Program is launching this month in coordination with the World Economic Forum. What are you looking forward to most for this launch, and what do you hope to accomplish?
MV: We are thrilled at the impact we’ve been able to achieve through the EqualAI Responsible AI Governance badge program for senior executives, in collaboration with the World Economic Forum. We could not be more excited to launch a new cohort and further grow our community later this month.
For background, we worked with academics, companies, and leading experts to understand best practices and processes to support responsible AI governance. Through these efforts, we created a Responsible AI Badge certification program for senior executives, which we launched last September. Our speakers — thought leaders from major companies within our industry — bring their expertise from practice and from their academic work:
- Cathy O’Neil provides a masterful look behind the AI curtain in Weapons of Math Destruction and as an algorithmic auditor.
- Kathy Baxter, chief architect of ethical AI design at a major software company, shares the processes she established to determine if an AI product is safe to go to market.
- Heads of Responsible AI at other leading tech companies share lessons to identify and eliminate biased algorithms and related harms.
- Meredith Broussard shares insights about how ‘isms’ infest our AI systems.
- We also have top AI lawyers sharing legal challenges they encounter, and policymakers helping us understand the AI policy horizon.
Each of the six monthly sessions is led by AI experts who offer a different lens into how to establish and maintain responsible AI governance, both to reduce liability and to create more inclusive AI systems. Senior executives learn best practices and become part of a community of leading experts and executives in this field.
We are currently finalizing our next cohort and look forward to welcoming these new senior executives into our responsible AI community in just a few weeks.
LP: All of that sounds like an incredible experience, providing win-win conversational solutions and AI chatbots for every industry across the globe. I know that a few of our own executives are excited for the badge program! What can brand leaders do, starting today, to get on board with the EqualAI Pledge?
MV: Brand leaders have immense opportunity for impact in ensuring better, more inclusive AI. They can start by publicly committing to the EqualAI Pledge to reduce bias in their AI systems.
All companies can and should commit to reducing bias in their systems and, likewise, should ask their vendors and partners whether they have tested their AI systems to ensure basic legal compliance.
Specifically, they can ask whether vendors relying on AI systems have tested for discrimination against protected classes. This not only reduces your own liability as a company licensing or buying AI systems, but also is a powerful lever for change. It creates incentives for other companies to ensure they have tested for harms stemming from bias in AI.
Additionally, leaders should ensure there is oversight and a plan in place to test for and root out harmful bias throughout the AI lifecycle.
Our operating thesis is that bias can embed at each of the human touchpoints throughout the design lifecycle of an AI system. That includes the ideation phase — deciding which problem you want to use AI to help solve — through to the design, data collection, development, and testing phases. Each stage is limited by the experience and imagination of those on the team, which is reinforced by historical and learned biases in the data. But we are also optimistic and see each touchpoint as an opportunity to identify and eliminate harmful biases. As such, brand leaders should embed risk management at each stage of the AI lifecycle and ensure the broadest cross-section of voices and experiences is represented throughout the development of the AI system.
We offer an EqualAI Framework as a general guide with five “pillars” a company should consider as part of its effort to establish enterprise-wide responsible AI governance. These recommendations are particularly important in sensitive functions such as hiring, finance, and health care.
LP: What solutions would you say EqualAI can provide businesses?
MV: The “smart” company, the one that has invested in and is committed to responsible AI governance, has a competitive advantage because it will have a broader consumer base. It will have the trust of customers who recognize this commitment. It also runs less risk of being the headline or the subject of a costly lawsuit based on discrimination stemming from biases embedded in its AI systems.
Our EqualAI partners have an advantage over their competitors because they have our support in envisioning, building, and implementing responsible AI governance in every stage of their efforts.
Senior executives learn with and join a community of like-minded leaders in our badge program. Legal counsel learn how to issue-spot and establish AI governance systems to better support and protect their clients. Our members participate in events and talks we convene with global thought leaders to ensure they are aware of the most current regulatory developments and innovations in the field.
LP: With all the mentions of AI governance, we’re wondering, how can regulations both help and hinder these solutions?
MV: Often, the first step in our work is helping companies come to terms with the reality that they are now effectively AI companies.
Two decades ago, most companies did not realize that they needed to have cohesive plans and contingencies in place to protect against the unauthorized exploitation of systems, networks, and technologies. Today, however, companies widely recognize the importance of cybersecurity. Much like the trajectory of cybersecurity awareness, companies now need to adjust and understand that they use AI in one or more pivotal functions: hiring, credit lending, healthcare determinations, to name a few. Therefore, they must have a governance plan in place to address the potential discrimination that these systems can dispense at scale.
Regulation can help increase awareness of vulnerabilities and potential liabilities in AI systems, as well as establish norms so that there is clarity and consistent expectations for both companies and consumers.
LP: What does the future of EqualAI look like in your view, and what’s the next big goal you hope to accomplish with the non-profit?
MV: We look forward to scaling our efforts in the coming year. We want to double the number of people who are learning about responsible AI through our podcast, In AI we Trust, with Kay Firth-Butterfield of the World Economic Forum. We want to run another successful badge certification program with participants as engaged and insightful as our current cohort. We want to increase our membership with leading companies who inform our direction and ability to achieve impact.
LP: Lastly, any additional nuggets of value you’d like to add to this statement?
“LivePerson is a proud founding member of EqualAI®, a nonprofit organization and movement focused on reducing unconscious bias in the development and use of artificial intelligence.”
MV: LivePerson has played a key role as we have navigated the best practices we have established, as well as our strategy for impact. LivePerson, and Robert LoCascio in particular, was ahead of the curve in investing in raising awareness about bias in artificial intelligence and spearheading this effort. It is this kind of deep understanding of the power at your disposal — both to avoid spreading bias and to provide systems that are more inclusive — that results in more impactful products.