Beyond Bias: Practical Solutions for Ethical AI

Written by Chris Geison | Nov 15, 2018

On November 7th, Kathy Baxter, Architect of Ethical AI Practice at Salesforce, joined me at AnswerLab’s San Francisco office for the latest installment of our Salon Sessions, a breakfast series we’ve been hosting for the local UX community to share ideas and stay informed about emerging topics in user experience. We intentionally keep these events small so you can engage in deeper conversations, hear the good and the bad on every topic, and share stories in a closed setting.

We invited Kathy to join us so we could dive into tactical recommendations for building an ethical AI strategy. While Kathy is best known as the co-author of Understanding Your Users, a canonical text in user research, she’s rapidly becoming known for her recent work on ethical practices in AI. Her insights are noteworthy in that they don’t just touch on ethical lapses; they focus on practical, broad-based solutions. Here are some of the highlights from our conversation.

Despite the rise in awareness of AI’s ethical risks, we keep seeing examples of tech companies releasing AI-enabled products where it appears nobody considered the ethical implications. What is going on?

One of the issues has been hubris. There’s this assumption that we’re out to do good, and the decisions that we make are therefore good.

I recently got into a debate with someone who was predicting that everyone would soon have their own digital assistant to schedule meetings for them and complete tasks on their behalf, and he cited a stat about how many people have smart speakers in their homes. But I’m from rural Georgia and I can tell you there are still large swaths of the country where people don’t have access to the internet. [39% of rural Americans lack access to broadband.] So companies can distribute free Chromebooks ’til the cows come home, but there are parents that have to drive to the parking lot of the school or the public library where there’s Wi-Fi so their kids can do homework at night. It’s that lack of awareness and the assumptions we make that are harmful.

In your Medium post, How to Build Ethics into AI (Part I), you said a Chief Ethical Use Officer is not sufficient, and then a couple of weeks ago, Kara Swisher wrote an article in the NY Times mentioning that Marc Benioff is looking to hire a Chief Ethics Officer. What are your thoughts about that role?

Having one person who’s responsible for the ethics of an entire company or just having a Code of Conduct is in no way sufficient. You’ve got to have each employee really understand what their role is in this and believe in it. There are companies that are trying to force their engineers to adhere to a Code of Conduct and the engineers are pushing back. They say, “This is not my responsibility—I just write the code.” […] You have to put together the resources for people to do the right thing and then you have to incentivize it. If the only thing employees get rewarded for is launches, clicks, user engagement, and financial gains, that’s all you’re going to get: clicks and money. A Chief Ethical Use Officer with budget, people, and a mandate can help put these things in place. You do need somebody at the top who can give the resources to it.

You mentioned incentives that go beyond clicks and dollars. What do metrics look like in an organization that is integrating ethics into their model?

That is the one thing I have not seen in all of these discussions: what does success look like? Let’s imagine we have these ethical companies, teams, and products. What do the before and after look like? How do you measure that?

I’ve got this running list of ethics checklists, frameworks, toolkits, etc., and as I’ve reached out to the creators and asked them, “How do you measure success if I implement this?”, nobody has had an answer; nobody is measuring this. That is one of the things that I’m trying to bring people together to discuss.

We’ve talked about corporate governance, but what is the role of government oversight, regulations, and agencies when it comes to AI?

GDPR has been huge in getting US companies to start moving in the direction of things they otherwise never would have done, and we need something similar on the AI side, but it needs to be technology-specific. Having one agency that oversees all of AI is just nonsensical. If it’s healthcare-related, then the FDA should look at it, if it’s transportation-related, then the Transportation Department should look at it, if it’s financial services, the FTC should look at it. [...] We need the right agencies with the right expertise if we’re going to see meaningful guidance of this technology.

We’ve talked about the role of government in AI, what is the role of AI in government, and what should it be?

Public sector use of AI has been the scariest to me. [...] We know COMPAS, the recidivism risk-assessment system used in parole decisions, makes determinations based on biased data and disproportionately affects minorities. Allegheny County in Pennsylvania purchased a system to identify children at the highest risk of child abuse, but that system was outlawed in New Zealand because it was racially biased.

One of the things I’m passionate about is working with the World Economic Forum to develop a toolkit that helps public sector agencies evaluate AI systems during the procurement phase, understand how to implement them, and then determine whether each system is doing what they intend it to do. If we don’t help government agencies do the right thing, we’re going to see this further erode people’s human rights.

Speaking of the World Economic Forum, when we look beyond the US and Western countries and cultures, how do we ensure that we’re not being paternalistic or instituting a new form of colonialism by imposing our own ethics on other cultures?

There are no easy answers, and I hate to say that because it feels like a cop-out. But we have to have these conversations, and when a decision is made, we have to be transparent about why that decision was made, understand what the consequences are, and identify anything we can do to mitigate any harms.

We’ve talked about some fairly large scale challenges and solutions. What can we do when we go back to our desks after this? What should we do to evangelize ethical practices in our own organizations?

For me, it was a matter of trying to do grassroots education: reaching out to the teams that I had close relationships with, educating them, having a success there, then going on to the next group and pointing back to the previous success, and then spreading, spreading, spreading. And eventually, people started reaching out to me.

It’s about being able to find allies, create small successes, work out from there, and just promote the bejesus out of what you’re doing. Promoting your work can be uncomfortable, but you can’t just expect people to stumble upon the work you’re doing.

Kathy also described the challenges of creating AI products for others’ use. Because Salesforce is a platform, they can’t see or affect their customers’ data. If their customers are making biased decisions, then those customers are training the model to make biased decisions.

I work with data scientists, engineers, product managers, and UXers to figure out how to help our customers implement what we build ethically. We think about how we can create a UI that flags to a bank, for example, ‘you’re using race or zip code in your model, therefore you may be making biased decisions.’ But for a pharmaceutical company manufacturing prostate cancer drugs, if they’re using gender, that’s not biased and we don’t need to flag it. It’s about giving our customers the tools to implement our products ethically.
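To make that pattern concrete, here is a minimal sketch of what such a sensitive-feature flag could look like under the hood. This is not Salesforce’s actual implementation; the SENSITIVE_FEATURES set and the check_model_features helper are hypothetical names for illustration. The sketch captures the general idea Kathy describes: warn whenever a model uses a sensitive field, unless the customer has documented a domain-specific justification (like gender in a prostate cancer drug model).

```python
# Hypothetical sketch of a sensitive-feature check, not Salesforce's
# actual implementation. It warns when a model uses a sensitive field
# unless the customer has documented a domain-specific justification.

SENSITIVE_FEATURES = {"race", "zip_code", "gender", "age"}

def check_model_features(features, justified_exceptions=frozenset()):
    """Return a warning for each sensitive feature used without justification."""
    warnings = []
    for feature in features:
        if feature in SENSITIVE_FEATURES and feature not in justified_exceptions:
            warnings.append(
                f"Model uses '{feature}' and may be making biased decisions; "
                "review it or document a justification before deploying."
            )
    return warnings

# A bank's lending model: race and zip_code are both flagged.
print(check_model_features(["income", "race", "zip_code"]))

# A prostate cancer drug model where gender is medically relevant:
# the documented exception suppresses the warning.
print(check_model_features(["age_at_diagnosis", "gender"],
                           justified_exceptions={"gender"}))
```

The design choice worth noting is that the check doesn’t block the customer outright; like the UI warning Kathy describes, it surfaces the risk and leaves the context-dependent judgment to the people who know the domain.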

If you’re looking to read more on this topic, check out Kathy’s articles on Medium. You can also find more thought leaders to follow on the recently published list, 100 Brilliant Women in AI Ethics to Follow in 2019 and Beyond by Mia Dand, who also joined us at this event as an attendee. If you’re interested in ethics and AI, this list is a treasure trove.

Interested in attending events like this in the future? Sign up to hear about upcoming events.