Access Transcript


[00:00] Viv: 

We would like to acknowledge the traditional owners of the lands on which we record this podcast, the Gadigal people. This is their land, never ceded, always sacred, and we pay respects to the elders past, present, and emerging of this place. Coming up on Remarkable Insights.

[00:17] Elizabeth Chandler: 

ChatGPT can hallucinate: this is where it makes up facts and thinks there are references to people that actually don’t exist. Quite an interesting one is to ask it the last digits of pi, and it will give you an answer. So you can understand how AI hallucinating these facts and details not only perpetuates any bias but also creates misinformation, which is generally very bad for society as a whole. So there is a level of gatekeeping.

[0:51] Viv: 

In this episode, we speak with self-proclaimed “poet of code” Elizabeth Chandler, who is harnessing the power of AI to shape technology that fits a more inclusive, better, and brighter future. Elizabeth, thank you so much for coming and joining me for an episode of Remarkable Insights. How are you?

[01:08] Elizabeth Chandler: 

I’m very well, thank you. How are you?

[01:10] Viv:

I’m very well. Just before we jump into learning about the wonderful world that you live in, we like to start our episodes by inviting guests to do a visual description of themselves.

[01:21] Elizabeth: 

Sure. I am a petite person with short, brownish-blonde hair, brown eyes, and a fringe.

[01:33] Viv: 

Thank you so much for coming in and joining me to talk all things disability and innovation. For people who don’t know what it is you do, would you mind just doing a really quick intro about who you are and how you got to this place in your life?

[01:49] Elizabeth Chandler: 

Yeah, of course. So I’m Elizabeth Chandler. I’m the CTO and founder of the Good Robot Company. My role at Good Robot is pioneering our technical solutions and research collaborations, and also managing the technical and research teams. I actually worked in sustainability computing and utilities before going off to Good Robot, and I specialize in ethical AI, which is also what our company’s solutions are about, hence Good Robot. We really want to pioneer this accessibility and inclusivity narrative in AI, enabling it to be more equal for a variety of users and to actually have technology that reflects the society we live in.

[02:36] Viv: 

I understand you may only be 22 years old having achieved all of this. Is that correct?

[02:42] Elizabeth Chandler: 

Yeah, so I’m 22. I started a computer science degree when I was 16, so I think I got a bit of a head start to launch off of. I have autism, and it was actually funny, I guess, when I said I wanted to start a company, because a lot of my friends and family were like, “Are you gonna be okay with that? Because that requires a lot of social interaction.” So from then on, it’s been about managing some of those challenges. But because my brain is able to think in systems and in causality, I was able to make a bias detection solution that uses a research method that hasn’t actually been used in industry until now.

[03:29] Viv: 

You’ve obviously got this real interest in the ethics behind this sort of technology. Is that motivated by your experience of autism?

[03:38] Elizabeth Chandler: 

Yes, partly because I got frustrated at the number of recruiting tools that still use facial expression tests to say whether someone’s socially competent. They take this data through gamification tests, about whether someone’s scowling or angry or upset, and understanding the nuance between those, and they assumed that if you can’t do that, then you can’t actually operate within a business setting. I was just really frustrated at that reductionism. So I asked companies how they were approaching ethical AI and they said, “Oh, data drift.” And I was like, that’s not enough, because that only compares the data you have in the existing laboratory set to the real world. When you have bias that’s ingrained into the way they operate and the methods they’re using, then you are going to be excluding an entire group of individuals who might actually do really well at a role, because of one stream of data. So yes, there was an element of personal frustration with the system, but I think behind most big entrepreneurial projects there really is.

[04:51] Viv: 

And for listeners who might not know, would, do you mind elaborating on what exactly AI bias is and how that relates to your company?

[04:59] Elizabeth Chandler: 

Those that are familiar with the term AI bias will probably think of data drift, and perhaps some of the cases of ChatGPT misgendering people or using stereotypes for jobs. For example, gendering job roles is one that a lot of large language models are struggling with currently. This is an example of AI bias, but we also take that to a deeper level and look at association bias. A great example is recruitment: we had a case study where those that took maternity leave actually had a lower chance of getting the job, because the AI saw it as a gap in their CV rather than as maternity leave. That creates a lot of issues for individuals who are workplace returners, or those that have taken paternity leave, who are trying to get back into the industry. Especially at large corporations employing these sorts of large technical models, candidates won’t even get to a human until much later in the process, which means there’s no human awareness or human intervention to notice this is even potentially a problem. The way we approach this is by creating a counterfactual. A counterfactual is the smallest change you have to make to a variable, or in our case the context, in order to change the overall decision. For example, if you have two individuals with exactly the same profile and one of them takes maternity leave, does that change the overall outcome of the hiring decision? And if so, why? But you can also do this the other way. If you have two candidates, one with an autism diagnosis, with the exact same scores, but you have, as I mentioned earlier, the facial expression test: if you disregard the facial expression test, would they otherwise get the job or get through to the next round? These are the questions we’re asking so that, if we can create technical interventions, we can increase equal opportunities and also make sure companies aren’t breaking the law. Because otherwise they’re breaking the Equality Act 2010 in the UK, so there’s a legal element to this as well.
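[Editor’s note] The counterfactual test described here can be sketched in a few lines of code. This is a minimal illustration, not Good Robot’s actual method: the `score_candidate` function is a hypothetical stand-in for a real hiring model, with a deliberately biased CV-gap penalty built in so the counterfactual check has something to expose.

```python
# Minimal sketch of a counterfactual bias test (illustrative only).
# `score_candidate` is a hypothetical stand-in for a real hiring model.

def score_candidate(profile):
    """Toy hiring model: rewards skills, penalizes CV gaps."""
    score = profile["skills"] * 10
    if profile["cv_gap_months"] > 6:
        score -= 15  # hidden rule: long gaps (e.g. maternity leave) are penalized
    return score

def decision_flips(profile, attribute, counterfactual_value, threshold=50):
    """Change one attribute and check whether the hire/no-hire decision flips."""
    baseline = score_candidate(profile) >= threshold
    altered = {**profile, attribute: counterfactual_value}
    return baseline != (score_candidate(altered) >= threshold)

# Same candidate with and without a 12-month gap (e.g. maternity leave):
candidate = {"skills": 6, "cv_gap_months": 12}
print(decision_flips(candidate, "cv_gap_months", 0))  # True: the gap alone flips the decision
```

If changing only the protected attribute flips the decision for an otherwise identical profile, the model is treating that attribute as decisive, which is exactly the kind of association bias being described.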

[07:06] Viv: 

With the buzz that’s been going around ChatGPT, what are the possible positives it could bring the disability community, and the real risks it could bring them?

[07:17] Elizabeth Chandler: 

From an academic context, my best friend has ADHD, and she finds it very helpful to take the bullet points she comes up with for a paragraph and put them into ChatGPT to actually write that paragraph for her. So in terms of initial benefits, from what I’ve seen in the real world, a lot of the benefit comes with organizing thought in a way that is understandable to a wider audience. For me, I can talk in too technical detail sometimes, so ChatGPT is really helpful to make it more general for a bigger audience. Also, a great prompt to use is “explain this concept to a 10 year old”. So if there are concepts I’m finding quite difficult, then by using ChatGPT I can say, “Oh, can I please understand this element of workplace relations as if you were explaining it to a 10 year old?” That’s really helpful because it allows me to understand social context, although it’s from an AI that probably has more understanding than I do. As for the risks: as we’ve seen, ChatGPT can hallucinate, which is where it makes up facts and thinks there are references to people that actually don’t exist. Quite an interesting one is to ask it the last digits of pi, and it will give you an answer. So you can understand how AI hallucinating these facts and details not only perpetuates any bias but also creates misinformation, which is generally very bad for society as a whole. So there is a level of gatekeeping. The other element to be aware of is feedback loops. Now, it’s a little bit unclear how ChatGPT is actually programmed to interpret the way that users communicate with it. But for example, if you say to ChatGPT, “Two plus two equals four? No, that’s not right, it actually equals five,” it will apologize and say that it’s relearned it as five.

There is a question around whether users then have the ability to create additional biases that are very negative to society, and to reinforce prevalent negative societal issues, within ChatGPT. And people tend to personify AI and robotics, and with ChatGPT especially it can come across as quite eerie and, quite honestly, quite offensive sometimes with what it comes out with.

[10:01] Viv: 

And do you have any examples of AI that’s used in workplaces now as an accessibility tool?

[10:10] Elizabeth Chandler: 

Quite an interesting one, actually. I don’t think I can say the name of the company, but I can say the use case, which is what I’m allowed to share. They had a workplace productivity tool, and this is something that could collect, for example, how fast people get through their tasks, how fast they do emails, and how much time they spend on different tasks throughout the day, and produce a productivity score to then go on and support their performance review. When we first came into this company, we were like, “What are you doing?” Because it’s so complex to try and apply figures and statistics to people across a company, a very big company, with very different departments: “Yes, you’re being productive”, “No, you’re not”. Especially when you are comparing something like accounting to marketing, right? There are hugely different skillsets and very different thought processes behind what actually needs to be done there. And one of the other things I said was, “You take account of reading speed; how do you take into account dyslexia?” Interestingly, they’re not allowed to access employee disability data legally, because that is against GDPR. But by not doing so, they’re also not able to take account of, for example, where disabilities feed into the productivity software. So we used it as an opportunity to work with them and actually create a really inclusive and accessible piece of software. If it found that there were dips in areas like, for example, reading speed, it would then adapt the scores to allow for any challenges they might have, but also suggest tools in a way that’s nonjudgmental, so it doesn’t go through an individual. That individual doesn’t have to reach out for help or feel put on the spot. It would just say, “Hey, would you like a free subscription to, say, Read&Write Gold? Would this be helpful? Try it out, see if it makes an impact.”

And by offering additional accessibility tools in the workplace in a way that doesn’t involve a human sitting down and going, “Oh, there’s an issue with X, Y, Z, so we feel like you might need this tool even though you haven’t asked for it,” which can feel quite presumptuous, by having an AI prompt and just ask, it can feel a lot more welcoming.

[12:34] Viv: 

And on a day-to-day basis, what role does AI play in your life?

[12:39] Elizabeth Chandler: 

I don’t know if it’s coming through on this camera, but I’m actually using AI right now for my camera, which helps me because when I look off to the edge, it produces a little dot on the screen that moves around to direct my attention back towards the camera, so I can try and give you a bit more eye contact. So there are lots of little accessibility bits in how AI plays into my life.

[13:05] Viv: 

That is so cool. I would love that camera, that camera dot! Help me, Elizabeth! We like to ask people to leave listeners with a Remarkable Insight, and that might be a piece of advice, or some sort of knowledge moving forward into the future of AI perhaps. What would you like people listening to go off and think about after this chat?

[13:29] Elizabeth Chandler: 

It would probably be to look at the terms and conditions, actually, and just have a little read of the software documentation of any technology that holds a lot of your personal data. Legally, they have to say in the software documentation how they use your data, and a lot of individuals aren’t aware of where some of that data goes sometimes. So if you are a listener listening to this, have a little peek, because it will be on there somewhere, hopefully on the website of the tech that you are using.

[13:05] Viv: 

Follow The Good Robot Company on LinkedIn, and for more updates on what is happening with Remarkable, follow our socials!
