Mona Sloane | Julia Stoyanovich
On today’s episode of the RecruitingDaily Podcast, William Tincup speaks to Mona Sloane and Julia Stoyanovich from the NYU Center for Responsible AI about AI and hiring bias.
Some Conversation Highlights:
There are, as you say, various types of tools that are used: natural language processing systems, various kinds of ranking systems; we can talk about that a little later. All kinds of tools that assist recruiters in either processing or assessing people in high-volume recruiting situations, or in sourcing situations where they need, or feel they have a need for, technology to find that very rare talent the company needs. Very often, those are technical roles.
Generally, I think that, as you correctly identified, we find that AI systems become what we call infrastructural to the professional practice of recruiting. By that we mean the technology gets so deeply embedded in the professional practice, in making sense of one’s job and one’s way of doing things, that we no longer notice this is going on until it crashes. Now, recruiters tend to be aware of that; they use various tools at the same time, in various ways, and have practices around double-checking AI, for example. But the general trend of AI becoming infrastructural to society is, I think, something we see not just in recruiting but everywhere, including education, criminal justice, various business processes and so on.
Listening time: 28 minutes
Enjoy the podcast?
Thanks for tuning in to this episode of The RecruitingDaily Podcast with William Tincup. Be sure to subscribe through your favorite platform.
Announcer (00:00):
This is Recruiting Daily’s Recruiting Live podcast, where we look at the strategies behind the world’s best talent acquisition teams. We talk recruiting, sourcing, and talent acquisition. Each week we take one overcomplicated topic and break it down so that your three-year-old can understand it. Make sense? Are you ready to take your game to the next level? You’re at the right spot. You’re now entering the mind of a hustler. Here’s your host, William Tincup.
William Tincup (00:34):
Ladies and gentlemen, this is William Tincup and you’re listening to the Recruiting Daily Podcast. Today we have Mona and Julia on from the NYU Center for Responsible AI, and our discussion today is AI and hiring bias. And I know pretty much everyone that listens to this podcast cares about this. So now we actually have experts that can school us and teach us things. So why don’t we do introductions first? Mona, why don’t you go first, Julia, you go second and one of you introduce the center.
Mona Sloane (01:07):
Great. My name is Mona Sloane. I’m a sociologist at New York University, where I’m a research assistant professor and a senior research scientist with the NYU Center for Responsible AI, and I study the intersection of technology and society with a focus on those systems that we call AI.
William Tincup (01:30):
I love it. And Julia?
Julia Stoyanovich (01:34):
Hi, I’m Julia Stoyanovich. I’m an associate professor of computer science and engineering at New York University and also an associate professor of data science, and I direct the Center for Responsible AI. So let me just say a couple of words about the center. This is an interdisciplinary initiative whose goal is to make responsible AI synonymous with AI, because we really want to live in a world in which, when technology such as AI is built and used in society, it’s used in ways that advance equity, that serve lots of different people and bring benefits to lots of different people, rather than just advantaging a select few while at the same time harming lots of folks in the population.
(02:28)
And so, our goal at the center once again is to make sure that whatever technologies we develop are built in such a way as to be aware of this need to be responsible and socially sustainable. And we do a lot of work in AI policy and regulation as well as in education, including also public education. So teaching people, regular people as well as people who use AI on the job and who maybe were not trained in technical professions, about what AI is and isn’t and what you can and cannot and maybe should not expect it to do for you.
William Tincup (03:06):
I love it. So in the industry that I work in, which is kind of work tech, anything HR, recruiting, or technology related, if you were to go to one of our conferences, it would probably drive both of you crazy, because almost every technology provider at every booth says AI, right? Sometimes they mean kind of a conversational bot or ML or NLP or something like that, but it’s kind of marketed as AI, and the consumers, the practitioners in both HR and recruiting, don’t really know what that is. So the question is, do you see this in other industries, where people use the phrase AI and really they mean something else? And we’ll just start with Mona on everything and then we’ll go to Julia.
Mona Sloane (04:02):
Yeah, thank you for that question, William. So I can speak to the recruiting industry, because we’re actually doing a big research project on that at the moment, and I’m talking to a lot of recruiters about how they use various types of technology, including AI technology, in their professional practice. And so there are, as you say, various types of tools that are used: natural language processing systems, various kinds of ranking systems; we can talk about that a little later. All kinds of tools that assist recruiters in either processing or assessing people in high-volume recruiting situations, or in sourcing situations where they need, or feel they have a need for, technology to find that very rare talent the company needs. Very often, those are technical roles.
(04:56)
Generally, I think that, as you correctly identified, we find that AI systems become what we call infrastructural to the professional practice of recruiting. By that we mean the technology gets so deeply embedded in the professional practice, in making sense of one’s job and one’s way of doing things, that we no longer notice this is going on until it crashes. Now, recruiters tend to be aware of that; they use various tools at the same time, in various ways, and have practices around double-checking AI, for example. But the general trend of AI becoming infrastructural to society is, I think, something we see not just in recruiting but everywhere, including education, criminal justice, various business processes and so on.
William Tincup (05:56):
And Julia, any color commentary?
Julia Stoyanovich (05:59):
Sure. So I think, and I agree with Mona completely, that AI is just here, and it’s most likely here to stay. And what specific technology we denote when we utter this phrase AI, I think, is irrelevant. I mean, anything that is technical, that uses data to some extent, and that helps people make decisions in some way can, I think, essentially legitimately be called AI these days. But one thing that we also notice, and that I to some extent resent, is that the term artificial intelligence, or AI, is somehow deliberately made in such a way as to just enchant us, right, to really make us believe that it’s something irresistible, powerful, magic essentially, right?
(06:52)
And this is something that, as people in our everyday lives and tasks, but also as professionals in various areas such as HR, we have to resist. We have to really understand that technology is something that we use. It’s just a bunch of gadgets. They can do whatever they are designed to do, hopefully, but we should not have this magical thinking and attribute magical powers to technology, even if it’s called something as wonderful as artificial intelligence.
William Tincup (07:24):
I love it. So one of the firms in our space is a firm out of Australia called Rejig, R-E-J-I-G. And I found the founder fascinating, because when they first started down this path of building AI, they worked with an academic institution to independently audit their AI every six months. I haven’t heard of any other vendors in my space that have done that. They’ve kind of, on their own, built their bit, and maybe they internally audit it. But with Rejig, they pay an outside entity to come in and make sure there’s no adverse impact. First of all, I mean, it was fascinating to me; it might not be fascinating to you. But do y’all see that more and more in the industry itself?
Julia Stoyanovich (08:30):
Mona, you go first, right?
Mona Sloane (08:32):
Oh, okay. Yeah.
Julia Stoyanovich (08:33):
Yes.
Mona Sloane (08:35):
Well, we will see it more and more next year, when we have some significant regulation kicking in here in New York City and when we are also expecting the EU AI Act to kick in, which will have a signaling effect for the United States as well. And Julia will certainly say a few words about that. It is very important that we establish a practice of independent audits of AI systems as they are, for example, used in hiring. And as a social scientist, I will also add that it is important that we have a technique for AI audits that includes an assessment of the assumptions that underpin whatever is materialized in the AI, for example, in the model.
(09:33)
Personality, for example, is one such assumption. The reason why we need that, why we need to combine that with established technical methods for AI auditing, is because we need a way to weed out technology that is essentially harmful from the get-go because it is based on harmful ideas, or ideas that are pseudoscientific, for example. So I think we will see that more and more. Julia and I, together with the team, have done such work, and maybe, Julia, I’ll cue you in to talk about that.
Julia Stoyanovich (10:17):
Yeah, thank you, Mona. So this is work that took us close to two years to conduct. This was a large team that included data scientists, of course, but also Mona, a sociologist, an industrial and organizational psychologist, as well as an investigative journalist. And in this work we wanted to understand the validity of claims being made by several companies that are selling products to construct a job seeker’s personality profile from their resume or social media feed. This practice in itself is really mind-boggling: that you can construct someone’s personality profile, like how dominant they are, how conscientious, how good of a team player they are, based on some utterances they made, let’s say on Twitter, or based on their resume, either the content or the format of the resume. This is already, frankly, a questionable assumption to make.
(11:29)
And it’s questionable also within the domain of IO, or industrial and organizational, psychology itself, which specializes in this, right? There’s a lot of skepticism about whether personality is a valid construct, and furthermore whether you can measure it with the help of testing, especially to a sufficient degree to then somehow make a connection between someone’s personality and whether or not they would do well in a job for which they’re being interviewed, right? And so then, when you add AI to the mix, that makes it even more magic. And it’s really one of these examples that I’m referring to.
William Tincup (12:05):
Zero plus zero equals zero.
Julia Stoyanovich (12:08):
Yeah, or zero plus AI equals infinity. And so we audited a couple of commercial tools. They are called Humantic AI and Crystal, and they claim on their websites that they are AI-based personality predictors that are used very broadly in industry, by Fortune 500 employers, et cetera, et cetera. And so what we did was collect resumes of potential job seekers in an IRB (institutional review board) approved study. We collected resumes mostly from NYU master’s students who are about to go on the job market to look for technical careers. Then we bought subscriptions to these tools, and we called on these tools to construct personality profiles from these resumes of potential job seekers.
(13:06)
And we would record the personality profile that was constructed, a bunch of numbers essentially. Then we would tweak something minor about the resume. For example, we would take a resume in rich text and convert it to PDF. Both of these are formats the tools accept; they don’t tell an employer to use one or the other, they’re interchangeable. Then we would see what happens, and we would observe that in many cases the personality profile would change for the same person.
William Tincup (13:34):
Oh my God.
Julia Stoyanovich (13:35):
Yeah. So this is just one of these examples where we feel that it’s really important to call companies on their promises, and to make sure that we’re auditing these systems not only for bias and discrimination, according to some meaningful and agreed-upon criteria, but also for whether, in fact, they do whatever it is that they claim to do.
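For readers curious what the format-perturbation check Julia describes might look like in practice, here is a minimal sketch. It is not the team’s actual audit code: get_personality_profile is a hypothetical stand-in for a call to a vendor’s personality-prediction API, and the file names and tolerance threshold are placeholder assumptions.

```python
# Minimal sketch of a format-perturbation stability check, in the spirit of the
# audit described above. get_personality_profile() is a hypothetical wrapper
# around a vendor's personality-prediction API; it accepts a resume file and
# returns numeric trait scores.

import numpy as np


def get_personality_profile(resume_path: str) -> np.ndarray:
    """Hypothetical call to a vendor API; returns trait scores as a vector."""
    raise NotImplementedError("Replace with a real API call.")


def stability_check(resume_rtf: str, resume_pdf: str, tolerance: float = 0.05) -> bool:
    """Score the same resume in two file formats and compare the profiles.

    If the tool measured a stable property of the person, the scores should
    barely move when only the file format changes.
    """
    profile_rtf = get_personality_profile(resume_rtf)
    profile_pdf = get_personality_profile(resume_pdf)
    max_shift = float(np.max(np.abs(profile_rtf - profile_pdf)))
    print(f"Largest per-trait change across formats: {max_shift:.3f}")
    return max_shift <= tolerance


# Usage (placeholder file names):
# stable = stability_check("candidate_017.rtf", "candidate_017.pdf")
```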
William Tincup (14:04):
Because it’s easy to market, it’s easy to say that you do these things. It’s harder, especially for a practitioner who doesn’t do it every day and might not be as technical, to actually know that it does what you say it does. And this happens in my industry all the time; that’s why I laughed. With hiring bias in particular, years ago I went through a certification with SHRM, and they outlined hiring bias in a number of different ways. There are obviously a lot of different kinds, like-me bias, last-candidate bias; there are all kinds of different ways to cut bias. So how should we be looking at AI and hiring bias, either ridding ourselves of it or reducing it? How should we start that process? And in particular, Mona, I’m thinking about the audience of practitioners that might have already bought some applications, conversational bots, et cetera, and might already be down a path where, unbeknownst to them, the tools they’ve picked are actually either facilitating bias or fostering the biases they already had.
Mona Sloane (15:29):
Yeah, that is a great question. I will say that I hear recruiters talk a lot about the problem of bias. So there definitely is a wide acknowledgement in the profession, I would say, that this is an issue, and it is an issue that predates AI. When it comes to bias amplification and discrimination through AI, I think there are a number of questions that people can ask themselves. One is the one that Julia just spoke about, which we put at the heart of the audit: what is the assumption that underpins this system? Is that assumption actually scientifically valid, and is there a common-sense reason to assume that this could actually work? This is one thing. The other thing I think recruiters can ask themselves is how the tool creates the results that it creates, and just start speculating about that.
(16:39)
When I interview recruiters, and by the way, this is an ongoing study, so I’d be very happy to speak to more folks who are maybe listening to this, one of the things that folks say when I ask that question, do you sometimes wonder how this ranking was created or why this suggestion is being made, is either, not really, because I’m so busy that I have to keep going and fill this role and there’s real pressure; or, well, I’ve always wondered, and when I play around with the tool this happens, and when I play around with it differently, that happens. So I think starting to ask that question makes recruiters more savvy tech users, and as we hopefully see regulation-mandated transparency interventions in this space, we will be able to empower recruiters not just to address bias in the system, but to address bias in the wider sociotechnical system of recruiting and access to the labor market at large.
William Tincup (17:53):
I love that. Julia, anything to add?
Julia Stoyanovich (17:54):
Yeah, I’ll just add a quote, and maybe I will misquote, from a talk that I heard the other day by a really wonderful computer scientist who is also very well aware of the social impacts of the technology that we build. His name is Moshe Vardi; he’s on the faculty at Rice in Texas. And he reminded us that there’s the saying that those who forget history are bound to repeat it. We always say this in the context of wars recurring and similar types of events. But what we are seeing with AI today is that those who remember history are also bound to repeat it. And this essentially is where bias in the data comes from: data reflects the world. It’s a kind of picture of the world, and it may be more or less perfect, but it’s essentially a reflection of what the world is like today.
(18:51)
For example, if there are no women, let’s say, as CEOs at companies, then we’re not going to have any women CEOs in the data set on which we train the AI that, for example, parses resumes and then makes suggestions to people as to which jobs they would be well qualified for. Well, it’s not going to tell a woman that she’s qualified to become a CEO, because it has never seen people like that. So if we remember that history in our data and we train our machines using that historical bias, then it just reproduces itself, and even more insidiously, because AI is magic, right? People tend to trust it. So they trust the predictions.
(19:32)
So when you show an HR person recommendations of people who the AI identified as really good matches for a position, over time that person will come to believe that this is, in fact, what a good candidate looks like for this position. So this bias goes from the world, through the machine, back into people’s minds, and then it poisons the world further, right? So this history repeats itself, and this is the most insidious part.
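To make the mechanism Julia describes concrete, here is a toy sketch of how a model trained on a biased historical record reproduces that record. The data is entirely fabricated for illustration; this is not any vendor’s system.

```python
# Toy illustration, with made-up data, of how a model trained on a biased
# historical record reproduces that record.

from sklearn.linear_model import LogisticRegression

# Each row: [years_of_experience, is_woman]; label 1 = promoted to an executive role.
# In this fabricated history, no woman was ever promoted.
X = [[20, 0], [18, 0], [22, 0], [19, 1], [21, 1], [23, 1]]
y = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X, y)

# Two equally experienced candidates who differ only in the gender flag:
print(model.predict([[21, 0], [21, 1]]))  # typically [1 0]: the historical pattern is learned
```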
William Tincup (20:02):
Yeah. Oh yeah. I think especially in our industry, there was an initial fear, probably still a fear, that we were just going to take our biases and basically weaponize them with AI. We already weren’t going to fix our biases, we weren’t going to try to mitigate them; we already kind of acknowledge that there’s bias in hiring, and now we’re going to use AI to further that, which was exactly one of your points. You mentioned something that’s going on in Europe, and I wanted to ask both of y’all a question, because a lot of folks in our industry are equally interested in data privacy. So GDPR in Europe, and kind of a candidate owning their data, is really important in Europe.
(20:53)
Hopefully it gets to the US at some point. First of all, I see GDPR as better than what we have in the States, and I’d like for GDPR to be here, but I’m just one guy. So the question is, is there a relationship between what you see in data privacy and AI?
Mona Sloane (21:26):
That is a great question. I am not a data privacy expert; I will say that up front. I will say, though, that the GDPR has some fantastic concepts that it promotes, one of them being the right to be forgotten, which you just mentioned, and really seeing individuals as the owners of the data that is created about them. I would, however, sometimes question the ways in which data protection and privacy are then operationalized through GDPR compliance techniques. In other words, I would sometimes question whether the many buttons we click when we are in Europe and want to access a website, which list all the data brokers that could potentially purchase our data, are an actual, meaningful translation of the spirit of the GDPR. I think we still have some way to go in order to make that easy for users to understand, so that they can not just give or withdraw consent, but do so in a meaningful way, in a way where they actually know what’s going on, rather than just making some frankly random decisions that could have consequences down the line that they don’t understand.
William Tincup (22:52):
Well, they just blindly trust something.
Mona Sloane (22:54):
Yes, absolutely. Right. And so, when we think about the ways in which we design transparency and compliance around AI, we have to have that in mind and really drive home meaningfulness, I think. So if there’s something to be learned from data privacy in the EU, with regards to GDPR specifically, I would say that’s the one. But I’m curious to hear Julia’s thoughts.
Julia Stoyanovich (23:30):
Yeah, thank you, Mona. So I will just continue the argument that you’ve been making, and I want to bring this conversation closer to home, especially to our home in New York City, where, as you mentioned at the beginning of our conversation today, Mona, we passed a law. This is Local Law 144 of 2021, which is going to start being enforced in January of next year, 2023. And this law is aimed specifically at bringing accountability to the use of automated decision systems, or AI as we’re calling it today, in hiring and employment.
(24:11)
And there are two components to this law. One of them is that employers that are going to be using AI for hiring or employment-related decisions have to ask the vendor to provide them with the results of a bias audit that was conducted on that tool. So there’s this bias audit component, and maybe I won’t talk too much about it. Instead, what I would like to discuss is the second requirement, and that is that job seekers have to be notified when an AI is about to be used to screen them, and this is before it’s used. And they need to be told what features of their applications specifically, what qualifications and characteristics, will be used in the screening by this AI.
William Tincup (24:58):
What’s fascinating to me, Julia, about what you’re talking about is kind of the juxtaposition. Both my kids are squarely Gen Z, and they have no difficulty at all chatting with bots. They know it’s a bot. And you’re right, they blindly trust that the bot has their best interests in mind. And I’m worried about that crossing over into the job market: that conversational bot pops up and it’s basically parsing your words and parsing your experience, et cetera, and then making a decision. And again, that could be a great decision or it could be a horrible decision. So do you see anything, not by generations, but with different types of candidates, where some are more apt to trust AI than others that are a bit more skeptical?
Julia Stoyanovich (26:00):
Let me actually respond to that question a little bit differently, and that is, candidates currently don’t know whether AI is even being used to screen them, so they can’t even decide whether to be skeptical or not. So what I think we really need in this space are ways to tell people that they can opt out, that they can ask what data about them is being used, that they can contest, speak against, the use of particular data in the screening. For example, I may say, I’m not giving you, my future employer, permission to look at my Twitter feed for the purpose of determining whether or not I’m a good fit for this position. And this is data protection. It’s control of use.
(26:47)
People should also have recourse, right? Once they know what features are being used, what qualifications and characteristics are being used by the employer to decide on their candidacy, they can say, this is an irrelevant feature that you used, or it’s something that discriminates against me, because you’re asking me to tell apart red balloons from green balloons and I’m colorblind, so I actually cannot do this, right? So I think that we need to give people the power to decide whether they want to engage with a system like this, because if they know that somebody’s going to be parsing their resume for signals of how dominant and neurotic they are, let’s say, maybe they’re not even going to apply. And then, on the other end of this, is recourse, actually being able to do something about these decisions.
William Tincup (27:33):
Getting back to things that both of you have mentioned, it’s transparency in the process, allowing candidates to understand how they’re being judged, and then, to your point, recourse in case they feel that they were judged incorrectly, so there’s a mechanism for that. I could talk to y’all forever. Good God. I know y’all have other things to do. Let’s stay in touch, because we’re obviously just scratching the surface, and I love the work that y’all are doing. So thank you so much.
Julia Stoyanovich (28:04):
Thank you very much. Thank you for speaking with us. It was a pleasure.
William Tincup (28:07):
Absolutely.
Mona Sloane (28:08):
Thank you.
William Tincup (28:09):
Absolutely. And thanks everyone for listening to Recruiting Daily Podcast. Until next time.
Announcer (28:15):
You’ve been listening to The Recruiting Live podcast by Recruiting Daily. Check out the latest industry podcast, webinars, articles, and news at Recruit…
The RecruitingDaily Podcast
Authors
William Tincup
William is the President & Editor-at-Large of RecruitingDaily. At the intersection of HR and technology, he’s a writer, speaker, advisor, consultant, investor, storyteller & teacher. He's been writing about HR and Recruiting related issues for longer than he cares to disclose. William serves on the Board of Advisors / Board of Directors for 20+ HR technology startups. William is a graduate of the University of Alabama at Birmingham with a BA in Art History. He also earned an MA in American Indian Studies from the University of Arizona and an MBA from Case Western Reserve University.