Hiring to Firing Podcast

The Pros and Cons of Generative AI in the Workplace: The Matrix

Episode Summary

Troutman Pepper's Tracey Diamond, Evan Gibbs, and Alison Grounds discuss the movie The Matrix, risks and benefits of generative AI, and guardrails to consider putting in place.

Episode Notes

For better or worse, generative AI is everywhere. Many companies are asking themselves: "Do we run from it or embrace it? What role can generative AI play in the workplace, and what should we do to stay ahead of the curve?"

Partners Tracey Diamond and Evan Gibbs hosted their first-ever video recording of Hiring to Firing in front of a live audience at Troutman Pepper's Inaugural Private Equity Perspectives Summit. Watch the video or listen in as they discuss the movie The Matrix with Alison Grounds, founder and managing partner of Troutman Pepper eMerge and leader of the firm's Generative AI Task Force. The team chatted about the risks and benefits of generative AI and the guardrails your organization should consider putting in place.

Episode Transcription

Hiring to Firing Podcast – The Pros and Cons of Generative AI in the Workplace: The Matrix
Hosts: Tracey Diamond and Evan Gibbs
Guest: Alison Grounds

Morpheus:

Do you want to know what it is? The Matrix is everywhere. It is all around us. Even now in this very room. You can see it when you look out your window or when you turn on your television. You can feel it when you go to work, when you go to church, when you pay your taxes. It is the world that has been pulled over your eyes to blind you from the truth.

Neo:

What truth?

Morpheus:

That you are a slave, Neo. Like everyone else, you were born into bondage, born into a prison that you cannot smell, or taste, or touch. A prison for your mind.

Tracey Diamond:

I'm Tracey Diamond. I'm an employment attorney at Troutman Pepper and I'm here with my co-host Evan Gibbs. And together, we pretty much talk about every issue related to employment, from hiring to firing.

Well, we're thrilled to be here today at The Lodge at Torrey Pines in San Diego for our first-ever video recording in front of a live audience. So thank you so much for joining us today.

The clip that you just saw was from the iconic movie The Matrix from 1999. It's a science fiction film, obviously, where machinery has taken over the world and has decided to use humans as a battery source. In order to distract the humans from the fact that they are being used as batteries, they've plugged them into a simulated reality called The Matrix. Now, I don't know about you guys, but half the time I feel like my life is a simulation, so I can relate to this movie.

So today, we're going to be using these clips to talk about really one of the most cutting-edge issues of our time, which is generative AI, and particularly since we're employment lawyers, how it works or the pros and cons of using it in the workplace.

Evan Gibbs:

Yeah, today we're really excited to have one of our friends and one of our partners, Alison Grounds. She's the founder of eMerge, which is a wholly owned subsidiary of our law firm, and they handle integrated discovery services end-to-end. So Alison, tell us a little bit about what eMerge does.

Alison Grounds:

Sure. And thank you guys for having me. It's my first appearance on your program and I'm a big fan, so it's an honor to be here.

eMerge is a subsidiary of the firm that focuses on really any legal problem that involves the analysis of data. In particular, we were founded on the discovery process: preserving, analyzing, and producing information in litigation or government investigations. Our team is made up of both lawyers and technologists who help to collect and analyze information, which is why generative AI and AI generally are very near and dear to me, because eDiscovery was one of the first areas of the law where we were using technology to be more efficient and to spend less attorney time and fewer dollars analyzing information.

So really, the intersection of law and technology hit the ground running in the eDiscovery space, and it continues as you see the evolution of AI and its use across many industries.

So thanks for having me.

Evan Gibbs:

Yeah, thanks so much for being our guest. And in addition to the impressive background she just described working with eMerge, she's also on our firm's Generative AI Task Force, which is exactly what it sounds like.

And so Alison, why don't we start with some definitions? You've obviously got the most background knowledge in this area. So why don't you tell us and the audience what the difference is between generative AI, which we've heard so much about lately, and the other types of AI that we've been hearing about for years? Because I don't know about you all, but I've been hearing about AI for a long time. And then suddenly, it's like the sky is falling; it feels like there's been a seismic shift. So why don't you tell us what's happened recently?

Alison Grounds:

Sure. You're right, we've all been living with and benefiting from AI for decades. It's been helping you to select songs, and shoes, and other consumer products. It's also been used across industries. I mentioned eDiscovery, using artificial intelligence and machine learning to identify things in documents and help us predictively code and analyze them, to help with supply chain efficiencies, and other things that benefit from artificial intelligence.

Generative AI is different. And the reason it's dominating your newsfeeds and you can't get away from the discussion lies in what it can do, how it does it, and who can use it. Generative AI, in particular, is meant to generate content. It can generate text, and images, and video, and audio. It can replicate a human's voice after hearing just three seconds of it.

The one you've probably heard the most about is ChatGPT, which was launched in November of 2022 and set a record for adoption with 100 million users in just two months. My theory as to why that occurred is that it's so user-friendly. It's trained on large data sets, using algorithms and models that help it interact with you in a conversational and easy way. Large language models are used in lots of different formats, but this particular one is designed to be conversational and to respond to human prompts. So you can ask it a question and it gives you a pretty reliable-sounding answer.
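
[What "responding to human prompts" looks like in code is a simple request-and-reply exchange. Here is a minimal sketch, assuming the OpenAI Python SDK (v1.x) and an API key in the environment; the model name and prompts are illustrative, not anything discussed on the show.]

```python
# Minimal conversational exchange with a large language model,
# assuming the OpenAI Python SDK (v1.x).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain eDiscovery in two sentences."},
    ],
)
print(response.choices[0].message.content)
```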

When I first started talking about this, I was having to explain it to people, and they didn't know what I was talking about. But now, let me ask: how many in the room have played with ChatGPT at this point, used it for something? Yes. So the bulk of you have. And I always say, "After this, when we're hanging out, we need to share what you've used it for." I'm always curious.

I've heard of people giving it the ingredients in their refrigerator to see what recipes it could roll out. I heard a kid in line at a hotel check-in saying he was using it to figure out how to win at a game, a little cheating there. I had a friend on a panel with me who teaches an ethics class in the law, and her students were using it to cheat on an ethics exam. So it's generating content in a conversational way.

And you hear a lot now about looking for ways to detect when it's being used, that's the current trend, and about using it to be a check on itself. So I would say what makes it different is its power, its uses, and its accessibility. Anyone with access to the internet can play with this and see how it works.

A lot of the AI we used in the past was controlled, used, or deployed by experts in a particular field, and you just saw the end result. Here, you can interact with it more closely and see how it ends.

No pun intended. We're going to get to the end of the world later.

But the other thing I'll say about GPT and the technology is, I think the thing you hear the most about is the fact that it's getting so powerful, so fast. The first iteration, GPT-3.5, attempted to take the bar exam and failed; it was in the bottom 10%. Just a few months later, GPT-4 passed the bar exam in the top 10% of test takers. That's just one example, it passed lots of other professional exams, but that's the one that gets the most interest from the press and from those of us who practice law for a living. But it shows you that exponential increase in its ability to generate content and understand concepts. And that was in just a few short months, and without even trying to specifically train it to pass the bar exam, just the natural evolution as they continue to train on those data sets.

Evan Gibbs:

Wow. Well, I think this is a good point for us to play our next clip.

Morpheus:

This is the Construct. It's our loading program. We can load anything from clothing, to equipment, weapons, training simulations, anything we need.

Neo:

Right now, we're inside a computer program?

Morpheus:

Is it really so hard to believe? Your clothes are different, the plugs in your arms and head are gone, your hair has changed. Your appearance now is what we call residual self-image. It is the mental projection of your digital self.

Neo:

This isn't real.

Morpheus:

What is real? How do you define real? If you're talking about what you can feel, what you can smell, what you can taste and see, then real is simply electrical signals interpreted by your brain.

Tracey Diamond:

So what is real? It's just interesting to me that we're talking about 1999, when this film came out, and a lot of these issues are becoming somewhat of a reality now. And to what Alison said before, if AI can take three seconds of your voice and then replicate it so it sounds like you, where do you end and where does AI take over? What is real anymore?

So hundreds of scientists, tech industry executives, and public figures recently issued a statement, and I quote: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

So the statement is likening generative AI to nuclear war, or to the risks of nuclear war. And this is a statement that was signed by professors at Harvard, Princeton, and Stanford, and industry executives at Google and Microsoft. These are not your fringe conspiracy theorists. These are people who presumably know what they're talking about.

Another letter, signed by more than 33,000 signatories to date, called on all AI labs to immediately pause the training of the most powerful AI systems for at least six months, stating that in recent months AI labs have been "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control."

Alison, why are scientists so worried?

Alison Grounds:

Let's kick it off on a light note, everybody. Well, it's interesting. We talked about the power, we gave a few fun examples, right? You've maybe seen the fake Drake song that AI created, or the deepfakes of politicians and celebrities saying and doing things that they didn't do in reality.

And I think the concern here is that, with the power of generative AI, it could potentially automate weapons. It has the ability to identify and exploit security vulnerabilities really quickly. Think about it: almost every lock we have is a virtual lock that could be unlocked with this type of technology.

It also has a power that is not necessarily controlled by its human creators. This is the idea of misalignment. We started to build this thing and we thought we knew what it could do. It's actually got the ability now to train itself. It's like, "Thanks for all that data. That was pretty helpful, but it's missing a little bit on calculus, so I'm just going to make up some calculus stuff and get better at calculus."

So that self-training is part of the scary piece of this: it's getting more powerful than we can necessarily control, and it may be doing things that we didn't intend it to do.

The arms race analogy is one you hear a lot. You've got, basically, a small universe of big tech companies all competing with each other to win this race, because people tend to want this kind of power, and there's the risk. And nation-states are competing. You see a lot of think tanks and organizations rushing to create AI policies. You see legislation, you see the White House, you're seeing more engagement and involvement in this. And the reason is the risk I talked about. The human extinction risk also comes from a poll of AI specialists working in the field: 50% of them said they felt there was a 10% risk it could lead to human extinction. So as Tracey said, it's not just a fringe theory here, it's something that could be a real problem.

So you're seeing this focus on: how do we rein this in? How do we control it? Can we control it? Is it too late? And this request for the pause. I haven't seen any evidence of the pause. I have seen an increased discussion of viewing this as a safety problem, something that requires a massive concerted effort, similar to the risk presented by nuclear war, which is why I think the Oppenheimer film is timely as well. It's not as on-point as The Matrix, but it's a similar theme: "What do we do with something that's so powerful when we don't really know what's going to happen? Do we just keep trying? Or do we pause and think about the implications?"

Tracey Diamond:

Well, I don't think there was a pause then, and I can't imagine there's going to be much of a pause now. What are the chances that there'll be a pause? And how much of this is headline writers grabbing at these scary headlines? I think back, in my lifetime, to cell phones, the internet, laptop computers; every time there's a new piece of technology, people get scared. Is this different?

Alison Grounds:

I think it's fundamentally different. And I think it's partly different because of its power, and that lack of control that we have, and that potential misalignment. And I'm going to definitely defer to the experts: if the very creators of it are concerned about it, then I feel like we should probably be, too.

But it's also got the power to do some really great things. I'll give you a recommendation, if you haven't already seen it: the guys who made the movie The Social Dilemma did a TED Talk-style discussion, not an official TED Talk, of the AI dilemma, and they just redid it at the Aspen think tank conference.

They do a better job of articulating the risk than I'm doing right now, and I think it's worth a watch. But they also try to highlight some of the positives that can come from this, right? You can really solve some complex problems when you have this much power.

A few examples that I've seen, at least. There's an effort to preserve languages that are going extinct by using the GPT function: having that human intervention of, let's use this language, let's have real native speakers teach the machine how to use it properly and understand the semantics of it, so that you could use it in chatbots and preserve that language so you're not losing it. So there are some creative things it can do.

There was another application I saw where it was being used to help people who are visually impaired understand the world around them. So not just reading the label that they can't read, but taking a photograph of the ingredients in their refrigerator and recommending recipes or things that they could make from that. So there are some interesting, good-for-the-world-type use cases, and we could solve lots of big problems. But with anything that powerful, there's also the risk we discussed.

Tracey Diamond:

So I guess the challenge is to harness the good and put up guardrails to avoid the challenges? But I think this is a good time for another clip about the end of the world.

Morpheus:

Welcome to the Desert of the Real.

We have only bits and pieces of information, but what we know for certain is that at some point in the early 21st century, all of mankind was united in celebration. We marveled at our own magnificence as we gave birth to AI.

Neo:

AI? You mean Artificial Intelligence?

Morpheus:

A singular consciousness that spawned an entire race of machines. We don't know who struck first, us or them, but we know that it was us that scorched the sky.

At the time, they were dependent on solar power and it was believed that they would be unable to survive without an energy source as abundant as the sun. Throughout human history, we have been dependent on machines to survive. Fate, it seems, is not without a sense of irony.

Tracey Diamond:

Cue the scary music.

Evan Gibbs:

Yeah, that's right. A little apocalyptic.

Well, assuming that AI, as we've been talking about, doesn't lead to the end of the human race and turn us all into power supplies, there are some real uses that we've identified for generative AI in the workplace. So Alison, let's start by talking about one of the first areas: hiring and recruiting. What are some ways that employers can use AI in that context?

Alison Grounds:

Well, I think you're right in the sense that your podcast is called Hiring to Firing, and I think you can use this technology across the gamut. But certainly in the initial phases, my thought would be generating content, right? You could help draft more accurate job descriptions of the candidates you're looking for. And people are already, this is not some future thing, using generative AI to draft better cover letters and resumes and to try to put their name out there in the world.

But it should also simplify the process and take away repeatable tasks, right? You could do a better job of matchmaking, use chatbots, things that could answer a candidate's questions about a position and the company more easily. So it's doing some of the preliminary legwork so that you're optimizing the human interaction time: you've already narrowed down the pool of candidates to those most likely to be a good fit, and then you finish that process through the human contact, and the interview, and learning more about them.

Tracey Diamond:

So let's start with that concept, though, because aren't there some concerns about bias there? I read an article from back in 2018, so pre-pandemic, it feels like ages ago, before all of this talk about AI was really bubbling up to the surface, where Amazon was using a machine learning tool to screen candidates for jobs. The tool would rate the resumes with a star system of one to five stars, very similar to the way Amazon shoppers rate products. But they had to scrap the entire tool because they found that the tool itself became biased: because males predominantly were in the tech field, the machine learning tool, when it saw a resume with any indication that the candidate was female, would reject that candidate without looking at it again. And so they had to throw the whole tool out.

So again, back to that idea of guardrails. Are there guardrails in place to eliminate or to reduce bias? Or are there ways that we can use this tool without causing these negative effects?

Alison Grounds:

This is the topic of the day. The bias in AI has existed before generative AI. This is a known issue and has been well documented, including the example that you gave.

Another example would be in a law firm. Say you asked, "Well, who's going to be a successful partner one day at our firm?"

If you put in all the data about who's been a successful partner in the past, you would get a certain profile, and it would not be me. So I think that's the risk that's already been in artificial intelligence.

The difference, though, is that you have more powerful tools with generative AI for catching that bias, right? So the way you mitigate it is to ask: is there bias in the dataset? Can we look at it, and identify it, and find it? Can we train the algorithms to identify that bias so that we can at least make a conscious decision, and understand, and modify for it, take out what may be one of the misleading factors, maybe get gender out of the equation, or things of that nature?

So there's the debate about whether this will create or eliminate jobs. The jobs it's absolutely creating right now are jobs for data scientists and others to look at how we eliminate bias and how we put in more safety guardrails, as you say, in the process. How do we train the models to continue in the direction that we intend, and not go down a rabbit hole and start to do things that are in misalignment with the intended purpose?
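
[To make "get gender out of the equation" concrete, here is a minimal sketch of excluding a sensitive attribute before training a screening model, using hypothetical data and column names. Note that dropping the column alone does not remove proxies, such as schools or zip codes, that correlate with it, which is why the auditing Alison describes still matters.]

```python
# Excluding a sensitive attribute from a hypothetical screening model.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "years_experience": [1, 3, 5, 2, 8, 4, 6, 7],
    "gender":           ["F", "M", "F", "M", "M", "F", "M", "F"],
    "hired":            [0, 1, 1, 0, 1, 1, 1, 0],
})

# Train only on non-sensitive features; "gender" never reaches the model.
features = df.drop(columns=["gender", "hired"])
model = LogisticRegression().fit(features, df["hired"])
print(model.predict(features))
```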

Tracey Diamond:

So is it a matter of garbage in, garbage out? If you prompt it correctly, will it give you what you need on the other end? Or is it more than that?

Alison Grounds:

No. Well that's certainly part of it, garbage in, garbage out.

My other favorite source for things on this topic, because I like the higher-level discussion, I don't really want to learn about the concentric circles of large language models and machine learning, I prefer the bigger picture, is Wired magazine, which I think has some great pieces on this. And one of its discussions was that generative AI can generate images as well. So you can tell it to generate an image of an astronaut on a horse, and it will use massive amounts of data, pre-trained with images and text, to create unique content that's never existed before.

If you go to OpenAI's website, you'll see lots of versions of this astronaut on a horse, some more disturbing than others. So they were asking people: is this going to put you out of a job, if you take photographs or make art for a living? And this was the same concern we had with the iPhone. I don't even want to think about my new one, which weighs like 500 pounds, has at least six cameras on it, and can take fabulous pictures. But I still have to know how to use it. And where you're seeing the skill, at least for now, I don't know if this will be the case forever, is in the prompting that you're doing to the AI; that's still where your skillset comes into play.

So I could tell it to draw a horse with an astronaut, and that prompt would be generic. But if I was good at prompting and had an eye for vision, I would say, "Could you lighten the background a little more? Make the stars twinkle," right?

So you're prompting it and interacting with the technology in a way to bring out the best collaborative process.
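
[A minimal sketch of the prompt-refinement loop Alison describes, assuming the OpenAI Python SDK (v1.x); the model name and prompts are illustrative, and "refinement" here simply means regenerating with a more specific prompt.]

```python
# Iterating on an image prompt, from generic to art-directed.
from openai import OpenAI

client = OpenAI()

def generate(prompt: str) -> str:
    result = client.images.generate(
        model="dall-e-3",  # illustrative model name
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    return result.data[0].url  # URL of the generated image

# First pass: the generic prompt.
print(generate("An astronaut riding a horse"))

# Second pass: the human "eye for vision" goes into the prompt itself.
print(generate(
    "An astronaut riding a horse at night, lighter background, "
    "twinkling stars, soft cinematic lighting"
))
```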

So I think there is still a role to play in our interaction with the technology, whether it's from creating images or legal documents, that piece of the puzzle is still there. I don't know for how long.

So yes, I think there is a role to be played in the concept of how we use it most beneficially, garbage in, garbage out, how we train it and refine it. But it is a valid concern, and one that ties into the other question you had, the end-of-the-world-type questions.

Another issue I think we have with garbage in, garbage out is that our whole system of governance and our economy depends on us operating under a shared set of facts. We've challenged that in recent years; everyone seems to have their own facts that they want to feed themselves. And that's part of the theory: if you're fed what you want to hear and that becomes your reality, it makes it hard for us to solve problems, because if we don't all agree on the base facts, how can we then agree on how to solve them?

With generative AI and the ability to create deepfakes of a politician saying something that they never said, that impacts elections, right? Democracy depends on understanding people's positions, and who they are, and what they will do in office. If you don't have that adequate information, that can skew the vote.

Same thing with economic factors, like where you invest your money. If you don't understand what a company is really doing, if there's fake information or disinformation, it can really break a lot of the systems.

So certainly, garbage in, garbage out, and finding that garbage, matters. But we're already living in a world where we disagree on basic facts. So I'm a little bit skeptical of how good we're going to be at teaching the AI when we can't teach ourselves. My father and I would be a great example of not agreeing on basic facts.

Evan Gibbs:

Yeah, I saw another really interesting concept, dovetailing with what you were saying. There was actually an article just this morning in The Times about when AI starts inventing things, whether it's some new piece of technology, some new mechanical device, some kind of hardware, or something like that: who owns that, right?

So, for example, if it's something that's patentable, does ChatGPT have the ability to own a patent? Or is it the owners of the software who will be the owners of the patent?

And so once AI gets to the point where it's really creating new things that are genuinely useful to us, then there are questions of ownership: how is it monetized, who has the rights to it, how is it protected? So I think that's a whole other area that's scary.

Those works of art, for example: who owns that art? Who is able to get a copyright on something that one of these programs generates?

Alison Grounds:

Yeah, this is happening now. I would say that the legal issues around this abound, and I know later we'll have a discussion of privacy and security issues, but the IP issues were some of the first to hit the forefront, the first lawsuits. It's very good at generating source code, and so anybody can write source code now, because it's there to help you. So who owns that? A lot of source code was already in the public domain, so that's a little bit more sketchy.

But these images were copyrighted images. And so it's taking those images and training its model; it's creating something new, but when you look at some of them, they're clearly inspired by other works of art and other original pieces.

Same thing with the text and information that's out there. In addition to all the text on the internet, which is potentially what's been used to train these base models, they've been able to translate podcasts and YouTube content into text to train the model. So there's a lot of content that's fed into this, and that's part of the dilemma we have.

It's a little bit of a black box, we don't really know what it's been trained on in order to be able to validate it and confirm it. And certainly, the intellectual property concerns about what it's using to create something new, and how new is it, and who owns that generated content? That is a now problem because that content is being generated currently. It's being used for press releases, and blog content, and imaging...

Tracey Diamond:

Every day.

Alison Grounds:

... every day.

So we're living in the middle of it.

Evan Gibbs:

Yeah, on that topic. I don't know how familiar you all are with this concept, but certain jurisdictions over the last several years have passed new laws that regulate companies' ability to inquire about or use an applicant's prior pay history as any component of setting their pay at a new job. So, for example, asking a job applicant, "What are you making at your current job?" or "What did you make at your last two jobs?" and requiring them to disclose that.

Certain jurisdictions now prohibit that. You can't ask those types of questions. The theory being that, for example, women and minorities traditionally are underpaid, as compared to white males for the same job.

And so they say, "Hey, look, if you're asking for current or past salary, then you have a real potential to perpetuate that bias, because you're basically making a decision based on biased data."

So I'm curious, Alison. Obviously, if you're querying GPT and you want to ask about its sources of data and things like that, I know there are a lot of limitations on it.

So if we're using generative AI in the hiring process, let's say we ask it for pay data for a particular job, "What's the average pay range for this job?", for example.

What guardrails are there, at least that you're aware of, available now, so that a user could identify and say, "Well, here's where they're getting the data. Here are some potential biases"?

Alison Grounds:

Yeah. And so guardrails is our drinking word, those of you that are playing the game.

Tracey Diamond:

Oh.

Alison Grounds:

That's right.

No, it's the same problem we identified earlier, right, that we don't really know exactly what's generating the answers. It can't point to its sources. So I'll back up a little bit. I've talked about ChatGPT, which is the publicly available, go-and-play-with-it version of this. There are also enterprise versions of this technology, right? You take the base model that's been trained on all the information in the black box, and then plug it into a safe place, where you're then feeding it your own data.

So it could be, for example, pay data for our law firm. It's trained on just that information and can answer questions about just that information: what's the average range for a first-year associate, or a videographer, that kind of information?

You're certainly seeing this in our industry. The power is, we've got the baseline, but let's train it on good stuff, and clean stuff, and good data. So LexisNexis and Westlaw, which have vast databases of case law, and statutes, and information, can say, "Now we can ask it a question, we can have a conversation, and we can do legal research more efficiently and more accurately," because the model is being told, "Stay within the confines of this dataset. Don't go off and look at that weird blog post where some guy was on a rant. Look at the case law that has been approved and vetted and is an accurate reflection of what the law really is."

And then you have that interaction. So I think that's where you're seeing, at least across industries, that the better use cases are, "Let's train it on our data."

And it's where you're also seeing the ability to use this and take it to the next level to have a competitive advantage.

So those of us who are so scared, because we generate content for a living, also have an opportunity, because we have this vast set of information; we've generated content for decades as a law firm. So wherever we have good data and good information, contracts, or construction projects over the past 20 years, or privacy policies, we can feed that into the tool, ask it questions, interact with it, and help our clients gain information from our own unique insights over the years, in a safe environment that will give good answers using good data.

You can debate how long that advantage lasts before it all just goes into the general knowledge base; the stated goal from OpenAI and others is artificial general intelligence, this all-knowing being that just knows all the things.

So for now, there is still an advantage to training a model on your unique data to be able to get unique answers and have a competitive advantage. I don't know how long that will last and when it will all blur into the one big, all-knowing being, but for now, it's still a potential advantage and something that can help serve as a guardrail to limit the inaccurate answers or information that comes from the tools.
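
[A minimal sketch of the "stay within the confines of this dataset" approach Alison describes, often called retrieval-augmented generation: find the most relevant in-house text, then instruct the model to answer only from it. It assumes the OpenAI Python SDK (v1.x) and numpy; the model names, documents, and pay figures are illustrative placeholders, not eMerge's actual tooling.]

```python
# Grounding answers in your own documents instead of the open internet.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "First-year associate pay range: $X to $Y.",  # illustrative in-house data
    "Videographer pay range: $A to $B.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)

def answer(question: str) -> str:
    q = embed([question])[0]
    # Cosine similarity picks the best-matching document.
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = documents[int(sims.argmax())]
    resp = client.chat.completions.create(
        model="gpt-4",  # illustrative
        messages=[
            {"role": "system",
             "content": "Answer ONLY from the excerpt below. If it does not "
                        "contain the answer, say you don't know.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("What is the pay range for a first-year associate?"))
```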

Evan Gibbs:

Just from a practical perspective, it sounds like there are vendors out there that sell their own proprietary software that you purchase or lease, and then they can help you train it on your data and it runs like that. On the ground level, is that how that works?

Alison Grounds:

Yeah. So there are vendors in different spaces, certainly in the legal space; CoCounsel is one that you've heard a lot about, and I think Harvey is the other one. They're designed for a specific industry and a specific use case.

You also have the ability to build your own. Microsoft has made a significant investment in OpenAI.

So, using our firm as an example: we have this task force, which is focused on the legal issues, as well as testing, and research, and development. How do we safely use these tools to be better at what we do and to help our clients on this journey? And Microsoft will let you use its GPT engine so you can bring it into your own Microsoft Azure cloud environment, and you basically don't have to have third-party software. We've got two different tools we've developed in-house, through our innovation team and our eDiscovery team, that apply our own user interface on top of this and then point it to the data that we want it to analyze and use. We can do that now. We're testing that now.

Google just rolled this out as well, where you can put some Google Docs in, use their version of this technology, and train it on just the documents you want it to look at.

So that is a current state of affairs. You've got the businesses that have already developed enterprise solutions dedicated to certain industries. And then you've got all of us trying to figure out if we can develop our own tools to do these things, as well. So it's an exciting time to be in the space and to get to see the demonstrations of all these different tools of how they're being used.
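
[A minimal sketch of the Azure route Alison mentions, using the same GPT engine inside your own Azure environment; it assumes the OpenAI Python SDK (v1.x), and the endpoint, key, API version, and deployment name are placeholders for values from your own Azure OpenAI resource.]

```python
# Pointing the same chat API at a private Azure OpenAI deployment.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-AZURE-OPENAI-KEY",                          # placeholder
    api_version="2024-02-01",                                 # placeholder
)

response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME",  # the model deployment created in Azure
    messages=[{"role": "user", "content": "Summarize this matter in one line."}],
)
print(response.choices[0].message.content)
```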

Evan Gibbs:

Tracey, are you aware of any laws that are out there currently that are in this space that regulate this kind of activity?

Tracey Diamond:

Yeah, so New York City just passed a statute, which went into effect this month, that requires employers, if they're going to use what are called AEDTs, automated employment decision tools, in the workplace, to notify candidates that an automated system is being used. They have to provide the candidates with an alternative means of being screened. And they have to commit to having an external auditor come in and evaluate the tools on an annual basis for evidence of bias. How all of that will be put into effect, I really don't know yet, and I think there will be some guidance issued, hopefully eventually. But it's all a bit of a mystery how that will play out.
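
[As a rough illustration of the kind of number an annual bias audit produces, here is a minimal sketch of an impact-ratio calculation: the selection rate for each category divided by the rate of the most-selected category. The data is hypothetical, and the actual New York City audit requirements are more detailed.]

```python
# Impact ratios over hypothetical screening outcomes.
import pandas as pd

df = pd.DataFrame({
    "category": ["F", "F", "F", "F", "M", "M", "M", "M"],
    "selected": [0,    1,   0,   1,   1,   1,   1,   0],
})

rates = df.groupby("category")["selected"].mean()  # selection rate per group
impact = rates / rates.max()                       # ratio vs. the top group
print(impact)

# The traditional four-fifths rule of thumb flags ratios below 0.8.
print("Flagged:", list(impact[impact < 0.8].index))
```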

But what we are seeing is that that's the model, and other states and local jurisdictions across the country have bills in the works and are considering some form of regulation attached to the hiring process and the use of AI in it. Also, the EEOC has come out really telling employers, "Be careful, be conservative. We're watching how you're using this. We think that there's bias here and you shouldn't be using these tools."

But the EEOC also just recently entered into a conciliation agreement with a job tech board because of concerns, the claim was, that the tech board wasn't monitoring its clients' postings for evidence of discrimination, and that there was discrimination based on national origin in its clients' selection of resumes. And the EEOC, as part of the conciliation, is actually requiring that job tech board to use AI as a means of scrubbing out the bias against national origin.

So the EEOC is talking about it both ways. They're saying, "Be careful, there's evidence of bias here. But we're also going to use this as a tool to eliminate bias."

Really interesting and very much evolving area of the law.

Alison Grounds:

It's the same idea: it's going to save us or kill us, right?

Evan Gibbs:

That's right.

Tracey Diamond:

Yeah. I think this is a really good time for our next clip.

Evan Gibbs:

Yeah, I think so.

Agent Smith:

Did you know that the first Matrix was designed to be a perfect human world, where none suffered, where everyone would be happy? And it was a disaster. No one would accept the program. Entire crops were lost. Some believed that we lacked the programming language to describe your perfect world, but I believe that, as a species, human beings define their reality through misery and suffering. The perfect world was a dream that your primitive cerebrum kept trying to wake up from, which is why the Matrix was redesigned to this, the peak of your civilization. And I say your civilization, because as soon as we started thinking for you, it really became our civilization, which is, of course, what this is all about. Evolution, Morpheus, evolution, like the dinosaur.

Look out that window. You had your time. The future is our world, Morpheus. The future is our time.

Tracey Diamond:

So in The Matrix, if you don't remember the movie, Keanu Reeves' character actually figures out how to master the simulation. And hopefully we'll be able to take some real-world examples from that and learn how to master the simulation that we're dealing with these days.

I do want to add one more point that we're seeing out there, which is the concern about the elimination of jobs, right? Alison alluded to that before. The writers' strike right now is in part because of concerns on the part of the writers' union that the use of AI will eliminate or reduce the number of jobs for writers out there. And if they're successful in that argument, I do think that other labor unions across the country will seek similar types of restrictions and bans.

So the use of AI has its advantages, but it also has its disadvantages. You talked about this a little bit before: even though it may reduce some jobs, and technology always does, it may create other jobs, right?

Alison Grounds:

Yeah. I think the early research on this was that 80% of jobs could have 10% of their tasks impacted, and 50% of jobs could have as much as 50% of their tasks impacted. I think that's the stat. And the jobs and the industries most likely to be impacted have shifted. In the past, technology was generally seen as taking away more unskilled labor, right, manufacturing, things like that. This is potentially impacting more of the information economy, those of us who generate content for a living. So that's why, of course, it affects us, it's going to get more attention, and we're going to write about it all the time. So you're seeing that, and I think that is real. But you all, in your businesses and your day jobs, still know that there's a lot of stuff you waste time doing that you really don't want to do, and that it can really help eliminate, so that you're being used for your highest and best use, your strategic thinking and not task-based thinking. So I think there's a real possibility it will eliminate some of the things that we do.

We're also going to have to think about how we price what we do. As lawyers, we bill by the hour, for the most part. And we already had to start addressing that question with eDiscovery, right? We're not going to bill by the hour; that's not going to motivate efficiency. We need to use technology to reduce the hours that we're spending on repetitive tasks that don't really need our big lawyer brains. So if we use technology efficiently, if we're good prompt engineers, if we know how to use it better than our adversaries, if we know how to use our vast amount of information to give better legal advice, how do we monetize that? How do we charge for that? What do you do? That's something we're all struggling to figure out now.

And in addition to potentially losing some jobs or transitioning them into other jobs, it also could empower, I read something recently, middle-skilled workers, right? So take the call center operator. I still want to talk to a human, even though AI can make its voice sound like a human; I really want a human to speak to. And that call center operator could interact with all of the information in their company's knowledge base through a chatbot, see the answer very quickly, and help the customer faster and give them a better answer than if they were relying on their own knowledge alone. So you're ramping up the skillset of certain workers really quickly by giving them access to information.

So this debate is an interesting one: will it add more jobs than it takes away, and how will it all play out? It's interesting to see. It's a little scary as well, I think. But being on the front edge of this, being involved with it, working with our colleagues and clients, that's the approach we've taken as a law firm, at least. Our task force is going through this journey with our clients and asking them what they're learning from the technology, how they're using it, how they're developing policies, and where they see it going. And we're trying to be visionaries, but with appropriate guardrails.

And I did forget to mention the other phrase that you will hear more about in terms of guardrails: reinforcement learning from human feedback. That's the concept of, "Ooh, that's not really the direction we want to go. Let me slap your robot hand and get you going back this way."

I don't know, I'm interested to know more, and maybe some of you who study this more in depth than I do know what prompts the AI to care about you not being satisfied with its answer, right? We know what motivates us, but what's the reward system, that "Good for you, you get more tokens," right?

So the motivation system and how to train it to be correct and to do what you want it to do, is going to be an interesting thing that keeps evolving.

Evan Gibbs:

Well, thanks a lot, Alison. We really appreciate you coming here with us today, being on our episode, being here with us live. Yeah, it's just been a great discussion.

If you would, please make sure you check out our other podcast episodes; we've built up quite a collection. And don't forget to check out our blog, HiringtoFiring.law. We really appreciate you folks here in the audience paying attention to us. And for all of you listening, we really appreciate you tuning in.

Copyright, Troutman Pepper Hamilton Sanders LLP. These recorded materials are designed for educational purposes only. This podcast is not legal advice and does not create an attorney-client relationship. The views and opinions expressed in this podcast are solely those of the individual participants. Troutman Pepper does not make any representations or warranties, express or implied, regarding the contents of this podcast. Information on previous case results does not guarantee a similar future result. Users of this podcast may save and use the podcast only for personal or other non-commercial, educational purposes. No other use, including, without limitation, reproduction, retransmission or editing of this podcast may be made without the prior written permission of Troutman Pepper. If you have any questions, please contact us at troutman.com.