Designing Ethical AI — Practices & Processes

We recently had an interesting conversation with Milena Pribic, Senior Designer on IBM’s Design for AI team. We did a deep dive into topics revolving around ethical AI and the design practices and processes behind it.

She also shared inspiring details about her career journey so far, the benefits of coming from a non-traditional background, and savvy tips on how folks from underrepresented communities can thrive in this space. This interview has been edited for clarity.

You can listen to the podcast or read the transcript below.

Mia Dand: Hello. I’m Mia Dand. I’m the CEO of Lighthouse3, an emerging technology consulting firm based in Berkeley, California. I’m also the Founder of Women in AI Ethics, a global initiative to increase the representation, recognition, and empowerment of women working in this critical space.

Today, as part of our partnership with IBM, we have invited another brilliant woman - Milena Pribic who is a Senior Designer on IBM’s Design for AI team and the Official Design Representative for AI Ethics at IBM.

Milena helps teams and clients establish and maintain ethical AI practices by running design exercises and co-creating resources with researchers, designers, and developers.

Milena, welcome. We are so excited to have you here!

Milena Pribic: Thank you so much for having me. I’m really happy to be here.

Mia Dand: Great. So let’s get started. Can you share with us how you got started in your current role? Where did your journey in AI begin?

Milena Pribic: My journey was a convoluted one. I did a front-end development bootcamp back in 2015 and came to IBM as a front-end developer, then naturally switched over to design and UX. While I was learning development, I was also learning design, and there was a real need for design resources on the teams I was working on. Through that design work, I had a lot of support to go the design route instead of development.

So I ended up doing that on a few different products. The first one where I really started working with AI was for Watson Education, which at the time was launching an AI tutor. A lot of my design work on that tutor was around creating its synthetic personality so that it would be engaging to the students.

What we found, as we were creating this persona and this engaging back and forth, was that the students started to engage with the tutor without being fully aware that they were interacting with an AI. So there was a slippery slope: real confusion, with students potentially thinking they were talking to a real person.

As a designer, I was curious about that and about our own responsibilities as designers for transparency, ethics, and best practices. So from there, Francesca Rossi, the Global AI Ethics Representative at IBM; my now-manager Adam Cutler, a Distinguished Designer for AI at IBM; and I put together Everyday Ethics for AI, which was one of our first public AI ethics resources meant for teams.

We pushed it out in 2018, and at the same time an AI Ethics Board was being developed at IBM, which created an opportunity to really highlight design and design thinking methods in addressing ethics. My job naturally pulled in that direction, and I ended up doing what I do today: design practices and AI design, specifically through the lens of best practices for creating healthy human-AI relationships and addressing ethics and ethical practices on teams.

Mia Dand: That’s wonderful. Thank you for sharing that. We do know Francesca Rossi well; we’ve had her on our podcast previously. There are so many amazing women working in this space at IBM.

Now, we talked about transparency because it’s unethical to deceive and mislead people, especially children, as to whether something is an AI or a human. So what is the official definition you use for ethical AI in your work?

Milena Pribic: Yeah. So when I’m thinking about my work, I’m thinking about the ethical principles and values that IBM is aligned with. That’s the umbrella I operate under. Ethics itself is a set of moral principles that help us discern between right and wrong. You know, where’s that line?

At IBM we’re aligned on five areas: explainability, fairness, privacy, transparency, and robustness. So whenever I’m talking to people about AI ethics, I’m talking to them along those five areas.

What I specifically do that’s differentiated, since I’m a designer, is AI design ethics, and that’s how we differentiate IBM from the other AI ethics work going on. It’s a set of practices and resources that’s really rooted in human-centered AI design and driven by team collaboration. We work with all these different roles: designers, data scientists, developers, to really address our own behaviors on our team. So it’s a rooted understanding of ethics that goes beyond just tooling. We want to make sure that what we’re doing touches on all of those areas, but through materials and resources focused on team building and changing behaviors for the better.

Mia Dand: I’m absolutely thrilled to hear that. I don’t think we do enough when it comes to thinking about the human. When we talk about AI, the conversation is so much more focused on hype when it should actually be about what’s best for human beings and how to build human-centric technology.

I read your Reddit AMA and saw that you use psychology in your work. We’ve had conversations with anthropologists and archeologists like Beth Rudden, who is also at IBM, and it’s fascinating to hear about their unique approaches. I’d love to hear how you use psychology in your work.

Milena Pribic: Beth also has some really cool metaphors and references that she brings up, and she’s a data scientist. Anyone can bring up these topics, anything rooted in more of a humanities flavor.

There’s a lot of literature that talks about AI through the lens of human psychology and behavior. I think the excerpt you’re speaking about is probably from applying psychology back when I was working on that AI tutor and its synthetic personality architecture: specifically, mapping the human-AI relationship we were building to three broad stages of child development.

So there’s a distinction: we can talk about what we’re seeing through the lens of something like psychology, but we can also take something where the conversation isn’t about AI or tech at all, a totally different phenomenon we’re witnessing, and translate it to better understand the way a human might react to something we’re doing with AI.

So it’s less about copy-pasting theory and making it work for tech, and more about leaning on these different disciplines and areas of research to understand more, and ask better questions, about human behavior.

I’m not naturally a techy person, right? I learned that on the job. I learned the technical aspects of what I’m doing and of the products I was working on through the nature of my work at IBM. But I’m a humanities student through and through, and I leaned a lot on those parallels to make sense of the work that matters and the experiences that are actually successful within AI.

Mia Dand: I couldn’t agree more. I feel like there is such a need for a multidisciplinary approach in this space, and that not being “techy” is a virtue. Techno-solutionists don’t consider whether something will work for humans; instead they say, here’s some technology, let’s just plug it in to solve some societal problem, even if it isn’t a good fit, and do it in a flawed way. So I’m happy that your humanities background is contributing to more successful outcomes in AI.

Milena Pribic: Yeah. And having a non-traditional background still means you can be an analytical thinker and draw on all of these different things. It adds to that holistic, successful, diverse team with different perspectives, where maybe you end up with the same conclusions, but you see things through a slightly different lens, and that changes your perspective or the shape of what you’re creating together.

Mia Dand: Absolutely. So much of the technology in the marketplace right now seems to have so many ethical gaps, and that is really a reflection of how homogenous this industry is. Having a multidisciplinary lens, and folks from different backgrounds like yourself and so many other women at IBM, brings that holistic perspective you mentioned, so that we don’t end up with these obvious ethical gaps in the systems we have today.

That said, operationalizing these technologies is another matter. In theory it sounds great: we want to do the right thing and do it ethically. But from a design perspective, operationalizing is a challenge unto itself. So can you walk us through some tools and frameworks you use during your design process to ensure fairness and address bias as you’re building these systems?

Milena Pribic: So there are different levels to it, right? Our team works either with our internal product teams or with clients. When we work externally, the different levels include: what are you doing on your team? What are you doing on your product? And what are your ongoing behaviors and adaptations over time? What kind of practices are you holding that actually create an environment where fairness isn’t just checked off a list?

IBM has awesome tooling around fairness and bias, like the AI Fairness 360 toolkit. But, like I always say, you can have the best tools in the world and they won’t do you any good if you can’t tell me why, when, and how you’re using them. Do you really understand the context? Do you understand the repercussions?
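For readers who want a concrete sense of what that tooling surfaces, here is a minimal sketch using the open-source AI Fairness 360 Python toolkit that Milena mentions. The toy hiring data, column names, and group definitions below are hypothetical, purely for illustration; they are not from the interview or from IBM.

```python
# A minimal sketch of a dataset-level fairness check with AI Fairness 360.
# The hiring data here is a made-up toy example, not real data.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical outcomes: 'sex' is the protected attribute (1 = privileged
# group, 0 = unprivileged group), 'hired' is the favorable outcome.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}],
)

# Disparate impact: ratio of favorable-outcome rates between groups
# (1.0 means parity). Statistical parity difference: the difference
# in those rates (0.0 means parity).
print("Disparate impact:         ", metric.disparate_impact())
print("Statistical parity diff.: ", metric.statistical_parity_difference())
```

As Milena points out, numbers like these only flag that outcome rates differ between groups; the team still has to supply the context, the why, when, and how, to act on them.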

Sometimes the folks who have the least interface with an end user are the ones in charge of building 80% of the technical tooling. It’s hard for everyone to speak the same language.

We’ve done projects where we’ve literally put the data scientists and the designers together, and the project is simply to put together a glossary of common AI terms that both the designers and the data scientists agree on and can make sense of.

So that’s part of it, right? Making sure that your team understands all of these concepts in the same way. And obviously, making sure that your team itself is inclusive. We leverage resources around diversity and inclusion from within IBM, like sharing the work of the Racial Equity in Design group, to make sure the teams themselves are inclusive. Inclusive representation is one thing, but in order to be truly inclusive, we need to have inclusive participation. So that’s something that comes up when we start working with a client or a team: well, who’s in the room? What shot do we have at ensuring fairness and addressing bias if we’re not doing that in this very room?

As for the actual ethics-related design exercises we’ll do around creating AI, it could be an exercise or two that lays out the tertiary effects of what you’re designing and creating, or stakeholder maps that call out someone indirectly affected by the AI whom you hadn’t initially thought about.

Sometimes we have full workshops focused on fairness. Sometimes it’s more general, to understand what fairness means for a particular product or area. Sometimes it’s more specific, like designing for protected attributes.

The understanding of bias is, in a lot of cases, pretty limited. People look at a problem they’re having and think, this is bad data. So step one is really breaking that apart and showing people that in most cases it doesn’t just mean bad data, change your data. You can’t be reductive about it; it requires addressing all of these different arenas. With those workshops and engagements we make sure we’re not just covering the technical aspects, but also how you interact with your team, how you check in to make sure you’re tracking ethical repercussions and possibilities, and what a healthy practice looks like throughout the whole process of AI creation.

Mia Dand: I’m glad you mentioned this desire for reductive solutions, because people like simple fixes, which are like putting a tiny band-aid on a huge wound. Having a comprehensive approach like the one you described, with checks and balances and with steps identified and defined, is so critical.

Milena Pribic: And it’s really crucial: yes, we’ll do these exercises and have these “aha” moments of clarity where people realize what we’re supposed to be doing. But there also have to be to-dos and some ingrained next steps that come out of it. That’s non-negotiable. It’s not a separate thing we’re doing, an ethics exercise or an ethics workshop. The same way you have technical to-dos, you’re going to have to-dos around addressing these ethical issues, on the same level.

Mia Dand: Plus one to everything you just said. Can you share a specific example or recent issue you helped a client with? I’m curious, when you come across a specific problem, how you walk the client through it, and, like you said, whether there were any aha moments you can share, just to make it real for our audience.

Milena Pribic: Yeah, I’ll tell you about a recent one that was pretty interesting. It was with an international bank that was interested in creating their own AI ethics governance model. I thought this was a really interesting use case because it’s different from what I’ve been working on for the past couple of years. The tide is turning: people are realizing the gaps in their own understanding, and they’re seeing the success that comes from pivoting your focus and emphasis to ethics.

For this group to even come to us with that desire, they were already more advanced in terms of understanding the importance of what you were saying, that cohesiveness. They had some stops and starts, but the group was really well-motivated and full of self-starters.

So what my team did was create a rough chart, an AI Ethics Readiness Chart, based on something we did for the IEEE Ethically Aligned Design for Business Committee.

As a group, we didn’t have any formal design thinking framework. We just got together for a couple of hours every other week, over a month or so, and talked through it. We looked at what the group’s gaps were, who they needed to speak to, what kind of stakeholders were involved in something like this, why they were invested in it, and what their story was.

What’s interesting to me is that when you’re working on AI ethics, a lot of the work comes from the ground up as you’re building momentum. And in the current climate, you might also have more top-down support, now that it’s more of an issue and it’s spoken about more.

Again, if someone’s coming to us with this, they’re already ahead. The key is making sure that a group working from the ground up understands where to pull in executive support to push something like this through: to create their own AI ethics pillars, practices, and principles.

That experience was really useful to us, and we templatized it. It was so interesting to see how to support this group, and to watch them think through how they would bring in design, development, and data science from different parts of the company.

Again, we templatized it, and we want to keep using it with clients. That sounds like, you know, the future.

Mia Dand: Absolutely. We did a project recently where we had different people in our group interview folks within their own stakeholder circles to see who among them was actually ready to undertake this process. So you’re right, your client is ahead and in a much more advanced state, because most of the feedback we received was that there are companies out there that are just not ready at any level.

Milena Pribic: Absolutely. And it is sort of an anomaly, but it’s one worth bringing up because it gives you hope that this is catching on in a way that’s formative and transformational for companies.

In my experience, usually someone is just interested in one piece, like what we were just talking about: maybe their understanding of data bias is limited. As a designer, or anyone coming to a client, you can’t always meet them at the same place you are. So sometimes we’ll meet them where they think the issue is, and then show them the benefit and the inherent good that comes from addressing the problem holistically, for their own sake in many ways.

Mia Dand: Absolutely. I feel this mad rush to just throw things out there is really harmful to communities and to end users, and organizationally it can also cause so much reputational damage. So it gives me hope to see companies that say, we are not going to wait for something bad to happen; we will proactively take steps to address this. Thank you again for sharing that ray of hope and sunshine with us.

You mentioned the AI Ethics Readiness Chart, which is on page 13 of that helpful guide for those starting their first AI project. Having guidance on whether your organization is ready is helpful for moving this conversation forward as well.

So, circling back to what we started this conversation with: the need for a more holistic and multidisciplinary approach in this space, because the tech industry has been notoriously lacking in diversity.

That’s not specific to artificial intelligence; overall, the industry is dominated by the stereotypical notion that an engineer or an expert is male. So can you highlight some of the key benefits of having professionals from non-traditional backgrounds like yourself: women and non-binary folks coming from design backgrounds, psychology backgrounds, and others? What specific benefit or value do they bring?

Milena Pribic: If our focus is really on humans, I can guarantee you that someone with a philosophy degree is going to have valuable insights on some human-AI interaction I’m witnessing. Someone’s going to have an insight that could change the nature of what we’re creating, the same way an engineer can teach me about the technical aspects of what we’re doing and help me solidify what the design of the AI might be. And vice versa: designers can give engineers, or anyone from a technical background, that context I brought up in the beginning, and deepen and widen it.

The more diverse those backgrounds are, the more lived experience you have in the room, and the more you can see things that maybe some folks can’t. I think being open to that throughout the design process is really important too.

For example, I studied urban studies in school, where you learn how power dynamics between different groups in a city literally shape the physical space you’re in.

The same thing applies to AI, right? How do power dynamics between different groups shape the technology we’re creating? How do they affect different users of that tech? Another example: there’s a neuroscientist I’m working with who has a specific understanding of how the use of language affects someone’s mental model of AI. These different backgrounds really allow us to dig deeper on general areas like explainability or fairness and see those things through a deeper perspective.

And I hope people at this point understand that a non-traditional background, and a willingness to learn and make connections, exists as a strength, at least in the places you actually want to be working.

Mia Dand: I couldn’t agree more. So many of the headlines we see these days about why a certain piece of technology didn’t work, or ended up doing more harm than good, boil down to the fact that there wasn’t someone in the room with the right background: like you said, somebody with a philosophy background, somebody who understands power dynamics, somebody who understands how communities work, somebody who understands the psychology of users, and the list goes on.

Just not having diverse views in the room when these solutions are being developed has caused so many of the harms that we see in everyday life.

Milena Pribic: That goes back to what I was saying earlier: when we start a workshop, it’s, look around the room. Who’s in here?

There are great exercises about that online too, where you lay out everyone’s background, everyone’s ethnicity or race, just to make sure that if we’re talking about these subjects, or about bias specifically, we can have that conversation to the fullest extent this group is willing to have it.

Mia Dand: Absolutely. I feel like the very definition of diversity has to evolve and shift to include women from different backgrounds, not just different races and ethnicities. Having five women in a room, all of them engineers, is not enough unless you also value the different experiences they bring to the table, and the non-engineers as well. There needs to be a balance.

Milena Pribic: And something on the power dynamics front relates to what you were just saying. Adjacency to power is one thing, but you have to make sure those voices have the power to make changes, to change decisions, to have their say in what happens. It’s that inclusive participation that actually changes the direction of technology in a way that’s truly inclusive.

Mia Dand: I couldn’t agree more. It’s not just about having people in the room; it’s also whether you’re even listening to those people and what they bring to the table, or whether you’re just rewarding conformity, everybody saying, yeah, yeah, this looks good. What you really want is for people to speak up and add their perspectives from the different backgrounds they represent.

That takes us to our last question. You represent both a professional background that’s typically underrepresented in the tech industry and a group, women and non-binary folks, that is historically underrepresented in tech.

Do you have any specific examples from your own experience, either things you did that were helpful or things you wish you had done? You did say that willingness to learn is a big part of it, and that coming from an underrepresented background is a big part of it. But how do you translate that into what your audience, maybe your customers, your clients, or your employer, is looking for? How do you convert those into things you can put on your resume and say, yes, this is what I bring to the table? Or any other tips you can share with our audience?

Milena Pribic: When I first started this work, it was really about thinking about my own job, what I was doing, and a question relevant to that job that happened to plug into these larger themes. It was about asking questions and devoting myself to answering that question, or creating the resource that filled the gap I was missing.

In my case, it was with the tutor. As one example, it was filling the gap around what I should do about explainability and transparency. Who did I need to talk to to make sure we came up with a perspective on this, or a starting point? At that point, it was a starting point: the first step to get this conversation going on our team and, you know, a solution from there.

I’d also say, as far as general advice, don’t think twice about reaching out for mentorship where needed. You never know; you might think someone you look up to, or someone involved in this space, is super busy and wouldn’t have time to meet up. But even I do office hours every once in a while, 45 minutes or so with students, and a couple of them I’ve pushed the way of IBM.

There are a lot of design thinking methods and activities that we use intentionally that are public and available. Even if you don’t have a foot in the AI industry, a lot of those exercises are AI-agnostic.

You can look at other types of tech and go through those exercises. So pretty much start where you’re at. Understand what your question is, whether it’s regarding your job or something else in your life.

Power-literacy.com is one I was sent recently. A lot of the exercises we use around ethics and laying out different scenarios come from designethically.com. All Tech Is Human is a great organization: they have talks, ways to get involved, and a responsible-tech job board. Feminist.ai offers workshops, mentorships, and courses. And obviously, Women in AI Ethics. There are communities where folks have already started to gather that are open and should be leaned on, as far as getting smarter in a sense, and finding your own community and folks to bounce ideas off of.

Mia Dand: Thank you so much for the shoutout. I agree. All the organizations that you mentioned are doing amazing work.

What you mentioned about the importance of mentors: Women in AI Ethics launched a pilot mentorship program right at the start of the pandemic, because so many women were struggling. They lost opportunities, lost their jobs, had their job offers rescinded. Those trying to get into schools were not able to get visas and couldn’t travel because of the pandemic. It was quite a terrifying, traumatic situation. And we were just stunned at how many women stepped up to help others and guide them through those unprecedented times, because nobody had witnessed anything like it, the way everything just shut down.

We still get emails and notes from folks who said it was one of the most helpful, most touching experiences of their lives, and how it helped them in so many meaningful ways. In other words, there is definitely a lot of value in solidarity within our communities.

And there are women like yourself. I love that you hold office hours where you’re guiding students, because that’s so critical for someone who’s just starting off in this space, but also for women generally. Work from home is great, but in this digital world you also tend to feel very isolated. So I can’t overstate the importance of the community you’re fostering.

Milena Pribic: At the end of the day, people just need to have their questions validated by someone: this is a worthwhile question to spend time on, and here are some resources you can look to to answer it for yourself. Or: this is a problem you’ve identified somewhere, and you have the power to go through and come up with a solution, or a point of view on what’s going on.

Mia Dand: Absolutely. I feel that human connection, having someone experienced helping you out, with all the different experiences they’ve lived personally or professionally, is more powerful than a Google search, and more powerful than AI.

On that note, I just want to thank you so much, Milena, for taking the time to talk to us and share your expertise.

It’s a crazy time right now around the world, especially in the United States. It just gives me so much hope and optimism to be able to speak to someone like you, who is making such a significant difference in your own way in this space. So thank you again.

Milena Pribic: Thank you so much. I appreciate everything you’re doing.
