Increasing AI Literacy

Jordan Harrod (AI Educator & PhD Candidate, Harvard-MIT) researches how neurotechnology and machine learning can help develop new tools for brain stimulation. One of her passions is recognizing how we interact with AI on a daily basis and finding where AI falls short of human cognition, which is the basis of her GenAI talk.

About this session

Jordan Harrod briefs the audience on how to increase AI literacy.
Osama Zahid
Jordan Harrod
AI Educator & PhD Candidate, Harvard & MIT

Our next session, with Jordan Harrod, is around increasing AI literacy. Jordan is a PhD candidate in the Harvard-MIT Health Sciences and Technology program, where she researches how neurotechnology and machine learning can help develop new tools for brain stimulation. One of her passions is recognizing how we interact with AI on a daily basis and finding where AI falls short of human cognition. She hosts a YouTube channel where you can learn all about AI and medicine, algorithmic bias and fairness, AI in the workplace, and so much more. Let's welcome Jordan to the stage.

Thank you so much for that lovely intro. You'll have to forgive me, but I will have notes on my iPad for this, because I just have so many numbers and statistics that I would like every single one of you to remember and be quizzed on during happy hour, which I'm sure you'll all enjoy, as everyone enjoys a pop quiz. So, as the MC mentioned, I'm Jordan. I'm a PhD candidate at Harvard and MIT, and about three years ago I gave a TED talk on AI literacy. In creating and conceptualizing this talk, I wanted to look back on that one and on how this field has changed over the past three years, and it's been wild and interesting to do so. Ooh, now we've got the right slides. Wonderful. Let's see if I can fix this. OK. Well, we're just going to go with what we have.

As I was looking through the last three years of AI development, something that struck me was how that exponential growth has changed. Three years ago, we were so much more on the flat part of the curve. There were new language models coming out every six to eight months instead of every six to eight hours. They were still in their infancy. We were looking at human-realistic synthetic media that looked more like what we see on the left: something we could argue is human-realistic, but isn't exactly the greatest representation of humans, as we can see from that bottom middle photo, because I don't know what's going on with that guy's nose. And we can also see just how much the field has evolved when it comes to things like deepfakes and, in particular, generative media, which is presumably the reason you're all here.

At the time, for that talk, I defined AI literacy as the knowledge that a person accumulates in order to be able to confidently understand and interact with AI systems. Just as with computer literacy: we all, or at least I, had to take computer literacy classes in elementary and middle school, where we learned how to type and how to hit certain word counts, a skill I use just about as much as cursive these days. In those computer literacy classes, we weren't focused on the inner workings of the computer, how the algorithms work, how the operating system works. We were more interested in how these systems could be used in our daily lives and how we were going to have to adapt to them. So, just as with computer literacy, when I thought about AI literacy, I was thinking about how these algorithms were going to affect our daily lives and how we could use them for our own benefit.

Now, looking back at that talk, I would say that roughly 95% of it is out of date. When I gave it, most of my audience hadn't really heard of AI. In fact, if you look at Google Trends searches from 2019 around artificial intelligence, the main queries are things like "AI robots?", "what is AI?", and "AI, high definition."
We're all very familiar with AI at this point, whether you're a layperson or someone in the field. We've seen advances in generative systems, from images to text to video. We've heard a lot about full self-driving, to the point where some of us might be getting a little sick of it, which is why I don't spend as much time on Twitter as I used to. But because of all that, it takes so much more to be AI literate in 2023, both in terms of the amount of information we now have to wade through and, in my opinion, in terms of the nuances that have arisen as AI evolves.

Having said that, I think there's a central theme from that talk that still holds true today: even though we're pretty solidly immersed in AI at this point, a lot of us still don't actually understand how it works. In fact, we don't always know it when we see it, or in this case, when we hear it. Anyone who has come across me online (that is not what you're supposed to be seeing right now), which I think is probably about six of you, probably knows that I have run my entire life out of a program called Notion for the past three to four years, and late last year Notion released a beta option called Notion AI. So raise your hand if you knew that I used Notion AI to draft everything I've said to you so far. Don't lie. I saw one hand, one person in the front. That's about it. You would be surprised. Yes, I asked Notion's AI generation system to write the opening for a talk for business audiences on why it is important to understand how AI works and use it effectively, and with a few tweaks I used it to begin connecting with you today.

Do you think I should have told you that up front? Do you think you should have been able to guess that was the case without me telling you? Do you think your customers would want to know if that is what they were engaging with under your name? Given that you're all here, I assume you would agree that AI has become a major player in the business world these days, from automating mundane tasks, to helping us make more informed decisions, to serving me ridiculously tailored Instagram ads that definitely tap into my ADHD and make me buy things on a daily basis. But the use of these systems comes with a responsibility to ensure that we're using them ethically and responsibly. That's where I think AI literacy comes in.

I do still stand by the definition I used in that 2019 talk. The main tweak I would make these days is that I would swap "interact with" for "leverage." The reason is that when I gave that talk in 2019, a lot of the interactions we had with AI were much more one-sided. As these systems have become widespread over the past several years, I think it's important that AI literacy also empowers people to use AI systems to their own advantage instead of being passive users of them.

Now, it's quite easy for me to stand on this stage and tell you that AI literacy is very important. It's a lot harder to go out into the real world and learn it yourself, and to make sure that everyone on your team or in your company is sufficiently AI literate for your needs. I mean, what does that even mean? Do you know? So I came up with a bit of a toolbox that I'm hoping you and your teams can use when it comes to AI literacy within your companies.
First off, of course, to understand how to use AI responsibly, to become AI literate, we need to understand the basics of AI. That includes what a system was designed to do, what it actually does, and what it has been trained on. An example that might be relevant to many of you is language models; that's why we're all here. Say I've come across a language model that might help me write my newsletter. I've been trying to revamp it for the last year, I'm not very consistent with it, and I think an AI tool would be really helpful for connecting more with my audience and my customers. So I find a tool, I end up using it to outline my newsletter, and very quickly I find that my customers, my audience, aren't resonating with the message I'm sending, and I'm not entirely sure why. So I dig into this language model, and I find out that it has been trained on data from a completely different industry than mine. By not looking into this newsletter and the model that created it, I'm sending out something that might be alienating customers, certainly isn't winning me any new ones, and might take even more work to fix than it would have otherwise. My slides are magically changing without me doing anything. It's AI. Oh, OK, I think we're back on track now. So that's one of the many places where the considerations that go into AI literacy, the things you might need to know, or learn, or look into before you decide to use a product, come into play.

There are a few other examples that I think are particularly interesting. One, and I made a video on this that I'm sure none of you have seen, is that I've always been interested in how bias can enter algorithms throughout the entire pipeline. One of the things people don't think about a lot is that bias often presents itself before you've even touched a computer, before you've built a model, before you've collected any sort of data. It presents itself when you're figuring out the problem you'd like to solve.

A fun example, well, maybe not a fun example, since it wasn't fun for the people affected, but certainly an interesting one, is a paper, I believe in Nature, that came out a few years ago and found that patients were being deemed high risk in disproportionate ways. Patients undergoing long-term medical treatment had their data fed into an algorithm that would predict whether or not they were high-risk patients, and that prediction prioritized whether or not they got higher standards of care. It turns out that the question those researchers were interested in asking was essentially: what makes these patients prone to spend more money on health care? Because spending more would mean you are sicker, and being sicker would mean you need a higher standard of care. However, it also turns out that people of color who are equally sick spend less money on health care. There are a lot of reasons for that: cultural and societal reasons, socioeconomic reasons. But what ended up happening is that people of color did not get access to those higher standards of care, even though they were just as sick as their white counterparts. That's an example of a setting where it wasn't necessarily the model, and it wasn't necessarily the data, that caused the problem. It was the conception of the problem itself.
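To make that proxy-label problem concrete, here is a minimal synthetic sketch. Everything in it is invented for illustration (the group sizes, the spending formula, the top-10% threshold); it is not the actual study's model or data. The point it demonstrates is the one from the talk: if you train on spending as a stand-in for sickness, a group that spends less for the same illness gets flagged for extra care less often.

```python
# Hypothetical sketch of proxy-label bias: two groups are equally sick,
# but one group spends less on care. Ranking patients by (predicted)
# spending then under-flags that group as "high risk". All numbers invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)          # two patient groups, 0 and 1
illness = rng.normal(5.0, 1.0, n)      # true severity: identical distribution for both

# Spending tracks illness, but group 1 spends systematically less for the
# same level of illness (access, trust, socioeconomic factors).
spending = 1000 * illness - 1500 * group + rng.normal(0, 500, n)

# Stand-in for the deployed model: flag the top 10% of spending as "high risk".
# (A regression fitted to predict spending would show the same effect.)
threshold = np.quantile(spending, 0.9)
flagged = spending > threshold

for g in (0, 1):
    mask = group == g
    print(
        f"group {g}: mean illness {illness[mask].mean():.2f}, "
        f"flagged high-risk {flagged[mask].mean():.1%}"
    )
# Typical output: both groups are equally ill (~5.0), but group 1 is
# flagged far less often, so it receives less of the extra care.
```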
Another good example, which I believe has actually come up many times at this conference, is generative art. I did a video on this recently, and I'm sure we've all been following the Twitter feuds around who can use generative art, what it means to use generative art, and everything along those lines. But I don't know that companies, the usual example being Lensa, realized how much of a mess they would find themselves in when they decided to sell a product built on generative art pipelines. In that case, there were many, many underlying complications, a lot of them to do with the legal status of the materials being used: the argument that the underlying dataset is comprised of links to images and therefore isn't actually violating any copyright, or that the model itself doesn't technically contain the images but makes transformative use of them. These are the types of issues you can run into if you don't understand both how these models work and how they fit into the broader context of society.

Another good example is the GitHub Copilot algorithm. Microsoft, OpenAI, and GitHub are currently involved in a class-action lawsuit, because developers think that the fact that these companies essentially scraped GitHub of code to use as training data, without regard for the licenses involved, amounts to algorithmic piracy. That lawsuit has not yet been resolved, but I would imagine it was not something they had fully considered when they built the system.

And then another fun recent one is ChatGPT and Bing. What does it mean to involve a generative model in a search engine, where in theory you would like to optimize for factual information? You want to give people the correct answer to their search; you want to give them information that is accurate. In fairness, Google Search has had its own problems with highlighting false information. But now that you've incorporated this new model, where does the liability sit? Is it on the model? Is it on your company? Is it on someone else? It certainly creates a bit of a conundrum for anyone trying to use these types of services.

The last thing that I think is very important when it comes to AI literacy, especially in the business space, is actually making sure you have a concept, at least, of what AI means. I fell down a rabbit hole on this topic probably a year or two ago. The thing that prompted it was a conversation with a friend in which I brought up an example of something I consider to be artificial intelligence, and they disagreed with my definition. So I would actually challenge all of you to come up with your own definitions, now or later at some point, and then, during happy hour or the intermissions, as you chat with people at the conference, compare them. Do you have the same definition of AI? Do you all agree that certain things should be considered AI and certain things shouldn't? Especially given the last few years. AI, the term itself, was conceptualized as an effort to develop systems that resemble human intelligence. This was in, I believe, the 1950s, and at the time they were using logic-based algorithms to do so. In a lot of cases, I don't think we would consider that to be AI these days.
AI can mean anything from a language model (these days, in the public eye, it honestly mostly means a language model) to something you see in a movie that is artificially intelligent and sentient in and of itself, to, honestly, a bunch of human workers doing work that is being marketed as AI, because the term AI is very sexy these days. All of this is to say that when it comes to AI literacy, a lot of the time we're not actually agreeing on what we're talking about. And I think it's important, both when creating and using products and when leveraging these tools for your businesses, that everyone is on the same page about what it means to use AI and what these tools do for you in the first place.

I can see you all shifting uncomfortably in your seats about all the information I just dumped on you, and I totally get that. When faced with the totality of what can be considered AI literacy, I think it's understandable to get overwhelmed at the prospect of learning all of it yourself, let alone teaching your entire team or your entire company. I mean, I'm doing a PhD and I still get overwhelmed by the field on a regular basis. So while my goal is to give you an overview of what it means to be AI literate and what that means in the context of business, my goal is also to let you know that (a) you don't have to do it all at once, and (b) there are ways of incorporating this into your workflows more easily. That can be anything from holding meetings and seminars, to sending people to conferences like this one, where you can dispel misunderstandings and myths while providing accurate and up-to-date information about how to use AI. You can also consult with external people. There are plenty of researchers who would be happy to come in and talk to you about how AI works and how it might work well for your company. I am not necessarily one of them, that is not an advertisement, but there are plenty of them out there, trust me. And look, now the slide's not working. OK.

The other thing that comes to mind when it comes to AI literacy, especially in the modern era, is that we have all of this information, and we have all of these tools that might grow our audience or reach thousands more customers. But when I look at these tools, something that comes to mind for me, and I've found comes to mind for a lot of other people, is: what about me? Where do I, the human being, fit into this system? There are obvious answers. You are the user; you are the person creating the generated text; you are inserting the prompt; you are the developer creating the model itself; you are on the research team helping to conceptualize the problem. But I think the one that is really important as we continue to use AI is, well, actually, let's jump back to the slide this is supposed to be on and see if it works. Awesome. I think this is a really interesting example of the uses of AI and why people are still very important to it. There are plenty of examples of this; I'm pulling this particular one from Twitter. It's an example of someone trying to use GPT-3 to provide mental health support for people online. Mental health resources are often very hard to access.
If we could create tools that give people better access to therapy or other mental health resources, that would be amazing. But it turned out that people, once they learned that these messages were co-created by a machine (there were people involved in the process), didn't like it. They felt that the simulated empathy felt empty. They couldn't connect with these systems. They didn't feel like they were getting the therapeutic effect or the mental health support they actually needed. They felt more alienated than they were before. And I think that is the key for me. There are other examples of this, things like avatars on your phone that you can form bonds with. But at the end of the day, when it comes to that question of where we fit into this equation, the answer is human connection.

I think this is where we all come into the AI sphere. Whether it's creating marketing newsletters or creating images for your clients that they might not be able to find anywhere else, the human factor is incredibly important. And in an era where I can draft a talk for all of you using AI in about ten seconds, and I can make a deepfake of myself giving the talk (which I did for an educational YouTuber named Tom Scott: I created a deepfake of him for less than $1,000, and it looked a little clunky but was surprisingly believable), I think that ability to connect as people is important, because we use connections to make decisions. We use connections to decide where to go to lunch next, or whether we want to attend this conference, or whether we want to purchase a particular product. AI alone can't provide these connections. It needs us as human beings to do so.

In a more practical sense, many of you are probably tasked with coming up with concepts and leveraging AI tools to expand on them, but also with interrogating these tools and their outputs as they relate to bias, fairness, accountability, and things like that. In working on this talk, one of the main ideas that surfaced was that AI is here. We're not waiting for AI to come; it's here, and it's been here. So how can we best educate our teams and ourselves to do exactly that: to come up with these concepts, leverage these tools, and interrogate them as they relate to the things we want to use them for, so that we can make sure we're headed in the right direction?

Actually, I don't think that's quite the right question. Certainly we should be taking steps toward AI literacy with our teams and ourselves to make sure we all understand the technologies we're working with. But when it comes to the right direction, that's something we have to define for ourselves. That's something we define when we create our companies. That's something we define when we create our projects. We don't use AI to define the right direction; we use AI literacy to tell whether or not the tools we have access to are getting us there. And that is what I love about AI: it can help me get in the right direction, but it can't tell me what the right direction is. That's going to be up to you. Thank you.

GenAI Conference

Hosted by Jasper
7 Sessions
5 hours