
Episode 75

Context And The True "Cost" Of AI

AI 101


Show Notes

Sure, AI has made a splash. And it's on us to level up, learn the ropes, and roll with it. But how do we even do that? And what cool human stuff might we accidentally ditch along the way?

The Compiler team ends the season discussing the importance of context, creativity, and applied knowledge.

Transcript

This season we've talked a lot about AI. It's not a surprise since it's in our news feeds every day. It feels like that's all anyone is talking about, and with so many developments so quickly, it's hard to get a sense of where things are now, much less where they're going. One thing is clear: we can't put the genie back in the bottle. AI isn't going anywhere anytime soon. It's up to us to catch up, learn, and adapt. But how do we do all of that? And what do we risk losing in the process? This is Compiler, an original podcast from Red Hat. I'm Kim Huang. I'm Johan Philippine. And I'm Angela Andrews. On this show, we go beyond the buzzwords and jargon and simplify tech topics. We're figuring out how people are working artificial intelligence into their lives. Today we are wrapping up our season on AI. Let's get into it. My first interaction with AI. Wow, that's a really good question because it's been going so fast. It kind of blurs, right? This is Marc Mangus. He's a healthcare strategist at Red Hat. He specializes in figuring out how IT solutions can help augment or even transform companies in the healthcare industry. Here he recalls an AI-centric project he worked on in the healthcare sector. And they had an idea about how they could get AI to read radiographic images and alleviate some of the pressure on doctors, especially in emergency departments, where they have hundreds or thousands of images to read every day. Even though this was a few years ago, the concept can still be considered pretty novel and groundbreaking. It would actually read the image, create a summary, and an advice paragraph for the doctor. So by the time the actual clinician looked at it, it would call out the things that it thought made sense to look at. What they found was that it was a significant time saving because of the fast track workflow, if you will, where it was confident. The doctor was just checking its results. Was it right? Was it not right? 
Was there anything else that it missed? Things like that. Sounds great, right? And this was five years ago, right? Give or take. So what's changed? Marc says the ground we've covered since then has to do more with recognizing relationships versus patterns. So where we came from with machine learning was really around predictive use cases, right? Where we're trying to get tons of data exposed to the algorithm in a very structured way with tagging and other techniques, so that it can recognize patterns and predict whether, given a new set of conditions, it can see that same pattern again. With the advent of ChatGPT, 18 months ago or so, it's really much more about knowledge retrieval and contextualization. So, Kim, could you explain that for me a little bit? What exactly is the difference between recognizing a relationship versus recognizing a pattern? I think I might need a little bit more there. Sure. Okay. So the way that LLMs work in the current sense is finding like and like and matching them together to build content for humans to consume, right? So something like ChatGPT recognizes speech patterns, words that commonly appear together, and it chains them together. And that's how it can write novels and such. Yeah. But it can't necessarily form context very easily. Like now there's a shift in AI from pattern recognition, something that, like, in the use case that Marc is talking about, and now there's more along the lines of figuring out how things relate to each other. And that contextualization is the edge piece. It's actually giving the result to a human in a way that makes sense to the human, in a context that makes sense to the human. There is a reason why so many of our episodes this season talk about context. A lot of our really lofty goals around what AI can do hinges on context. So how does an LLM get there? It all starts with the data, specifically improving an LLM's performance through the implementation of certain frameworks. 
Marc explains how this could work with bits of data stored in a hierarchy, or in a way where those bits' relationship to each other is known. So, for example, a man and a woman are related to each other, and they're both related to a concept called human. So normally an LLM wouldn't know any of that. It just takes whatever it can find in whatever context it can find and says, this is the truth. Right? But when you introduce this concept of a graph database with these vector indexes that have all these adjacency values stored in them, it gets really smart, seemingly. So you establish something of an informational hierarchy. And no matter how much data goes into the model following that, you will find elements of that hierarchy, those relationships in the answers you get. All of that is done without having to fine-tune or retrain the model, which can be obviously a very costly endeavor. All right. Now checking for... check on learning, as they say in the military. Check on learning. Angela. Johan. Are you following me so far? I think so. Okay. Yeah. I have an example. It might make it a little easier if you want to hear it. Yeah. Yeah. Bring it. Okay. So let's say you are shopping for home décor or shopping for curtains, right? You go to a home goods website. This is very kind of personal because I'm doing that right now. Shopping for curtains, and you find a really nice set of curtains that you like, and you put it into your cart. Sometimes a website will present you with a curtain rod because you can't hang curtains without that curtain rod. Now, curtain rod, as a concept, and curtain as a concept, they are two different products. They don't come together; they're not the same. But we have to tell a system that those two things are related. You can't hang curtains without a curtain rod. So if a person is shopping for one, there's a high possibility or a chance that they're shopping for the other. 
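Both sides of this conversation — Marc's man/woman/human hierarchy and the curtains-and-rod pairing — come down to the same move: spelling relationships out as data so software can follow them. Here's a minimal sketch of that idea; all of the names and structures below are hypothetical, not taken from any real graph database or storefront.

```python
# Sketch: relationships between concepts ("a man is a human",
# "curtains go with a curtain rod") are stored explicitly, because
# the system can't infer them from the raw data on its own.

IS_A = {
    "man": ["human"],
    "woman": ["human"],
    "human": ["mammal"],
}

GOES_WITH = {
    "curtains": ["curtain rod"],
    "curtain rod": ["curtains", "mounting brackets"],
}

def ancestors(concept):
    """Walk the is-a links to collect every broader concept."""
    found, stack = [], [concept]
    while stack:
        for parent in IS_A.get(stack.pop(), []):
            if parent not in found:
                found.append(parent)
                stack.append(parent)
    return found

def related(a, b):
    """Concepts relate if one subsumes the other or they share an ancestor."""
    anc_a, anc_b = ancestors(a), ancestors(b)
    return b in anc_a or a in anc_b or bool(set(anc_a) & set(anc_b))

def recommendations(cart):
    """Suggest items tied to the cart's contents via goes-with links."""
    recs = []
    for item in cart:
        for rec in GOES_WITH.get(item, []):
            if rec not in cart and rec not in recs:
                recs.append(rec)
    return recs

print(ancestors("man"))               # ['human', 'mammal']
print(related("man", "woman"))        # True: both are 'human'
print(recommendations(["curtains"]))  # ['curtain rod']
```

A production system would store these adjacencies in a graph database with vector indexes, as Marc describes, but the principle is the same: the relationship is information a human supplied up front, not something the model worked out by itself.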
So what developers do, front-end developers who work on websites, they create these informational architectures or taxonomies on the website to tell the website that these two concepts are related so that it knows to present this recommendation to you. That's kind of what Marc is saying here. It has to be kind of spelled out for the LLM because remember, computers don't really know context. So we have to teach them context. So there's a human involved making decisions on where to make relationships. Absolutely. See, there has to be a human somewhere when AI is around, right? Yes. Yes. We have to be a part of the process, to at the very least, create contextual information and information hierarchy so that when those answers come back, we don't have to refine every single time when we're asking for something or we're prompting for something. It can learn and understand concepts, but we have to tell it the concept first. It doesn't just understand it automatically. That seems like a really big task for a large language model. Hmm, it could be. Not all those different relationships. I mean, there's just got to be so many possibilities out there. Especially in healthcare. Yeah. I could see that. Yeah.

Speaking of healthcare, let's go back to Marc because he works in healthcare. And in the medical space, which is my realm, the promise is to actually have something that can give you a clinically relevant answer when you ask a question. So for example, one of the big use cases that a team of us at Red Hat are working on is around prior authorization reviews and approvals. Important work and much needed because there can be backlogs for those reviews and not enough human eyes to tackle them. Also, there's a little bit of compliance issues there because there is a certain amount of time that needs to be honored when people put in these prior authorizations.
It's very time-sensitive, and if companies fall behind or they miss the mark on those kinds of time-sensitive review processes, they can face some kind of punitive action. So it's very important that they make those timelines. But there are so many requests coming in that sometimes they get overwhelmed. Plus, there are so many different data points in a person's health history. You have illnesses, allergies, blood type, location, age, weight, past medical procedures—the list goes on and on. And are those data points stored in the same place? Of course not. No. They're not. I laugh, but it's not funny. There's a lot of pressure on these teams, a lot of moving parts, and there's a lot that can go wrong. And if it's not done quickly, it can put the patient's outcome and their health at risk. Everybody in the system knows this on the payer side, on the provider side. They all know it's a problem. They're all working on it. But where generative AI comes in to kind of disrupt what's happening is... it's very good at looking at vast amounts of data and returning specific results. So if you tune the data correctly, you can get really good answers. But that's the trick because in a clinical environment, hallucinations are not acceptable. That's right. Yeah. It's really intense. Very little room for error. But Marc says in some use cases there's a lot of potential for advancements when using AI and improved experiences for patients. And the cool thing is, when we're experimenting with this, it gives us back information we didn't know we needed because it knows how all the things are related. So we might ask a very simple question, but it gives us a paragraph or two of information, and maybe that second paragraph is a bunch of stuff. Oh, it goes back into the patient's history and gives me more context. That's very cool. Yeah. Not that I thought he needed to say it, but Marc thinks that this is a huge, huge step for, I mean, the world, for society, for global health. Yeah. 
I mean, I don't know about you two, but last time I went to see my doctor, you know, a very short amount of time that I'm spending there, face to face with the doctor, it's very easy to miss any sort of patient history or things that might have happened in the past. Things that I wouldn't necessarily know to bring up to... that would be relevant to the situation. But if this AI model can really help and bring everything that's relevant from all of my records and just have that ready and available before we even start the appointment, that sounds like it'd be a pretty big help. Yeah. It can also help when it comes to coding properly. Right? So it can reduce the amount of errors on the back end when people are trying to apply for these really expensive, really time-consuming procedures that are very sensitive to their health outcome. You know, a lot of times it's just a simple issue of billing or coding incorrectly. AI can help because, again, it has the ability to look at a lot of data very quickly. It can help reduce the errors on the back end as well, and people can get the care that they need, which is huge. Yeah. Well, I'm hopeful. I'm in the middle of something like this right now, and I don't know if AI is involved, but boy, is it very frustrating. So you're right. Having all of the information, your complete history, and taking that into account should prove pretty helpful. Yeah. Score one for AI. Yes. Marc obviously feels the same way about this work. This is a game changer. In healthcare, professionals have a hard time asking the right questions because they don't know what data sources they need to inquire about. They also struggle with how to interrogate each source differently, as the technology can vary widely. My chat with Marc left me really excited for the future, but I wanted to know how the newest crop of IT professionals were thinking about that future. And that's a mixed bag. I myself am not a very English-major kind of person. 
I would not describe myself as a writer. However, I think that just copying something off of a language model's output is making us think less. I say that all the time. I am on my high horse about this, where we're not learning things the way that we used to. We're not retaining information the way we used to, and our brains are changing. Can I stop you really quick, Angela? I have a whole section for you to discuss this. Please hold. I'm going to kick it to the break, and then we're going... because I've built that out especially for you. I knew you were going to feel this way, so I got you. I'm holding. This and more depressing news after the break.

We heard from Marc Mangus about how far AI has come in a very short amount of time and where it stands to go in the not-so-distant future. I'm going to introduce a younger Red Hatter who's charting a path in AI for her own teams. My name is Aakanksha Duggal. I'm a senior data scientist at Red Hat. I've been here for almost five years now. Aakanksha works with InstructLab, an open-source project for building LLMs. She's only been here for a few years, but I spoke to her to get a sense of her thoughts on younger technologists adopting AI with the goal of getting a job in tech. Generally, people are using AI for all sorts of things, right? A lot of models and tools have come out for writing and creative work, for example, and Aakanksha has thoughts and opinions about that. When you read a novel, every writer has their own style of writing; they bring something unique to the table, and that's where humans bring their creativity in. I think that aspect will be diminished because there will be so much artificial data around. Even if you're trying to use ChatGPT to rewrite your emails or something. Johan, our moment has come. This is it. You and I have thoughts. We are technologists, to be sure. We work at Red Hat, but we're also creatives; we've been creatives for the majority of our careers.
What do you think about what Aakanksha is saying here? I agree 100% with the sentiment that if we rely too much on AI to do our writing for us, we will lose originality, creativity, and new voices coming out and speaking individually. The AI models can't think or create something original themselves. They take the data given to them, mix it around a little bit, and then spit something out that is different from what they've ingested but still very much resembles the inputs. It's rare for them to create something new from scratch. Yeah, and that contextual and conceptual work is very much in the human realm. It's not something that is easily taught to an LLM, if at all. Figuring out relationships is one thing, but for me, it feels like the problem-solving aspects—much of the writing I do every day is not just creative; it involves a lot of emotion behind it. It's about solving problems and meeting challenges, and those are very hard for AI to reproduce or to create something completely new and original. It's impossible for it to do. I think that while AI tooling is impressive, over-reliance on it can be detrimental to creative people. I think it's a detriment to anyone who creates, period. As a technologist, I create all the time. I'm always writing wiki pages, scripts, and documentation, and it's easy to lose sight of the fact that the body of work I've built up over the years has context. When you're just phoning it in and letting generative AI take over, it's dumbing down your creativity. It's almost like phoning it in. Unless there's a time crunch, I don't want to do that, and I really don't want to be represented by the cacophony that comes through an LLM. When you read stuff that's been through ChatGPT, you can tell it sounds weird because it feels like it's just cutting and pasting a bunch of information together. So, all of you technologists who are creative, and those who are not technologists but still creative, be wary. 
You don't want this technology to take your shine. I'm scared right now because you can tell the difference. Maybe it'll get better, but do we want it to? I don't know. Yeah, and there's something along those same lines: creativity is a muscle. It doesn't just come to people. To different degrees, you can argue that, but the more you exercise that muscle, the more creative you become and the more you're able to create something novel. If you start offloading that to a machine, you're also decreasing your own ability to stay creative in the long run. So for those studying computer science, they are the ones who are studying AI and will become the leaders of AI in the future. Surely they know the risks of over-dependency on AI, right? Surely. I honestly feel bad for the next generation of students because it's impacting their thinking abilities. They do not think as much with internet access at all points. Students are doing assignments using ChatGPT at this point, which is one of my concerns. I also teach a class at Boston University, and I've seen students copying a lot of things from ChatGPT. Angela, now is the time. Here we are. Gloom and doom. I feel that Aakanksha is 100% right. We are losing something. When we became so reliant on the internet and generative AI, we lost the ability to think deeply. Gone are the days when philosophers had deep understandings of various subjects and could quote poetry from memory. They could pull out wonderful threads of conversation based on their knowledge. Fast forward to today, I don't know how many of us are that deep into our studies. How deep are we into our craft? If we are writers, can we quote our favorite authors? Am I quoting someone's Ansible playbook? Probably not. What I'm saying is that we need to be able to recall information from our minds instead of taking the easy way out by Googling or using ChatGPT. We should take a moment to think and learn something. 
This technology has swept us off our feet, and we're asked to do more with less, which leads us to cut corners and rush things. There's no clear answer as to whether we're moving in the right direction or if we should continue on this journey. I'm reading a book right now that touches on this subject, and I'm trying to retrain myself so that my first instinct isn't to look something up online. I want to think about it and ruminate on it because it's in there. Some of that short-term memory has converted into long-term memory, and we should be able to pull from it. Yes, this is my moment right here. I can talk about this all day. I read about it, I think about it, and it's something we should all consider. When you said that just now, Angela, it made me realize that there is a pressure for us to be two things: correct at all times and fast. No matter what your job is or what area you work in, there's pressure to be fast and right all the time. I feel like younger students coming out of CS programs and entering the tech industry face a lot of pressure to be both accurate and quick. How do you ensure that you are fast and accurate? You look it up online and rely on information that's already been vetted. You present them with things they already know, which is the fastest way to gain understanding or, in their case, gain knowledge from others. I think this pressure is tied to a larger sense of urgency to be those two things. Everything is accelerated with AI because now you don't have to create the thing; you can generate it, which removes that extra step. It's interesting to consider how this will affect learning and deep knowledge moving forward. Stay tuned. Yeah. Absolutely. To be clear, Aakanksha doesn't feel like AI should be ruled out as a tool for study, but she does think its uses need to be more intentional. As somebody who's concerned, it's great that you're copying code, but are you learning something from it? 
My idea is that as long as we're learning something from it and not losing our identity, knowing what the code does and adding some element from our side, we should be good. If you just copied it, that's not enough. Yeah, I've worked with students before, many years ago now, and there was this element of "it's just school; I'm going to pass this class." But I think what Aakanksha is saying is really important. You need to be learning how to do the work while you're in school. You need to keep flexing that muscle while you're in school and in the workforce; otherwise, you're going to end up in a heap of trouble real quick. Yeah. Yeah, don't wait until you get that diploma in your hand to... Lock in, you know, like definitely lock in as soon as you can. And I think that learning how to learn—there's something to be said about that as well. Because, you know, how many of us go for certifications after we get out of college and learn new languages, new programming languages, new things outside of the school setting? It's teaching you how to build your own frameworks that you can learn on your own too and study on your own. So if you skip that step, it just becomes even harder once you get out of a formal environment or a structured learning environment to acquire new skills. So yeah, agreed on all points. And obviously, copy and paste does not fly when you get into your first job. You could cause companies multi-million dollar lawsuits if you're caught doing stuff like that. So I would not recommend copying code from unreliable sources. Yes. Or reliable sources for that matter, right? Yeah, that's probably not good either. See our episode on copyright. Probably not the best approach either.

So, to prompt or not to prompt? I asked Aakanksha what she thinks is the most important thing when you're talking about using AI. She says it's a balancing act to ensure the humanity, creativity, and innovation don't get forgotten while we're all racing to the future in AI.
You have to move with the technology. So you have to ensure that the creativity stays but still utilize AI to move faster, is how I would put it. I want to bring back Marc as well because he had some good parting words too. And so there's a new literacy on the horizon: people that understand prompt engineering. They can ask questions of LLMs, AI in general. They know what the answer means when they get it and what the context and limitations are. That's the new literacy. I thought that was really profound. It is. It really is. To be able to decipher and, one, ask the right questions and two, understand what's being returned to you. Like, you have to understand: is this what I'm asking for? Is this valid? Like, I think we talked about this a couple of episodes ago. Understand that you have to trust but verify. Again, that always comes back up. But that's a superpower nowadays—to be able to understand what's being returned. And you know, not taking everything verbatim but understanding, yes, this is a great answer. This is a great response to the prompt that I just entered. And it gives me the context on which to build. Or if you're not getting exactly the answers that you're expecting or that you want, knowing how to tweak that prompt and say, like, "Hey, no, this is actually what I'm looking for," and hopefully you'll be able to get the answer that you're looking for. Yeah. To me, it's something else that you said earlier in the season, Angela, about being more than just a consumer of the technology. I feel it's really easy to fall into the habit of being a consumer of a technology, even if you're a technologist, versus a person that kind of understands the intricacies of how it works. People are really afraid of what LLMs can do and the impact on society and losing jobs and all that kind of stuff. Some of that, you know, is probably valid, but a lot of it isn't. And it's just based on a lack of understanding of where the technology is and what it's capable of. 
Understand what that tool can do. Ask people that know. Ask specific questions. You know, that's the best remedy for fear that I know of: more information. All right. So we're at the end of the episode. I wanted to kind of go around, since this is the end of our season, and ask one by one what each one of us is taking away from all of this. Angela, do you want to start us off? Sure. One of my biggest takeaways is the fact that AI is here. It is not something that we watch in movies. It's not something that we read in books. It is very tangible in our daily lives. And I think as a society, learning to accept that to some degree—I know that there are people who are thinking AI is going to take my job. And Marc said it really well: the best remedy for that fear and that angst is learning more about it. And I think if we arm ourselves with information and become familiar with what we're hearing on the news and in social media, learn about it, you know, give it... Don't let the fear, don't give the fear power, right? I think that's what I'm learning here. And also that there are so many great projects that are being created with AI and open-source tooling that the sky is the limit. I mean, the Iron Man suit—I'm still on that. I am so impressed by that and other things. But this time is ripe, and this is a great time for us to grow as a society and embrace this technology in a way that keeps you more informed and less filled with fear. I love that. Johan, what about you? So I came into this season not having a lot of hands-on experience with AI, and so a lot of what I knew was based on what I was reading online and hearing from people. And the more we developed this season, the more some of those things started to fall away as unrealistic. AI isn't quite there yet, and there's a lot of hype, and simultaneously, I'm also very impressed by some of these projects that we are seeing. Like, again, the Iron Man suit. 
Now, what I would say is, again, exactly what Angela was just telling us to do, which was arm yourself with the information that you need to truly understand what is here versus what is hype and what is fear. If you just go by what you hear from other people, then you're not going to come away with a very good understanding of what you can actually do and what can actually be done with AI. Yeah, definitely. For me, I came into this season expecting to be kind of blown away. And I was. A lot of the projects and things that I learned about were incredible, groundbreaking. Some of the work we talked about in this episode with Marc—it's really important work too, and it's needed. There are uses for AI that I think we can all agree are needed because the current ways are just not working. They're breaking down, or they’re causing a lot of undue stress and pressure on people. But at the same time, there's a lot of fear around AI, and I want to respect people who feel that fear for a lot of different reasons. And I would say that the people I talk to definitely assuaged my fears, but then replaced those fears with new ones. Yeah. I think that's the perfect encapsulation of doing the research for this season. Yeah, definitely. A feeling of like, you know, I'm not so much scared of a Terminator 2 situation, but there are some things that are concerning—lots of answers that we don't have yet. There are questions and challenges that haven't been solved, and I hope the people listening to the show and people here at Red Hat, of course, together can solve and have answers for in the not-so-distant future. Because I feel like kicking the can down the road and putting those kinds of conversations off will only be to the detriment of the industry—of all the different industries that are tied to the tech, which is literally every industry. 
And what I don't want is a situation where we are focused on getting things out and getting things to market and not focused on getting the right tools and getting them in the right places. That's kind of what I'm thinking. That's kind of my big takeaway. Trust me, this isn't the last time we will talk about AI on this show, but we will be saying goodbye to a couple of people. Angela has been our host since the very beginning. And then Johan and myself, who've been also voices in your ears. We'll be saying goodbye to all of those things and all those people. First off, Angela, I'm going to say it's been an honor and a pleasure having you on the show, working with you since the very beginning. It's hard to believe it's been four years, but yeah, four years. You're an amazing technologist. You're an incredible educator, a fierce advocate for all types of people from all different backgrounds to get involved in the tech industry and get involved in open source. And it's been a pleasure working with you. It's been a pleasure doing the show with you. Do you have anything to say to anybody before we sign off? I do, I do, and I feel the exact same way. You, Kim, thank you for being your wonderful self. And you and Johan both have been bringing so many great topics and great stories to our listeners, and I appreciate all the great work that you do. I don't know if they understand just how much work goes behind doing these episodes. And then Caroline back there—that's the strong, silent one in the back that you never hear or see. But from the very first episode, this has been such an incredible ride. Just the topics that we've talked about, the guests that we've had on here—they've just opened my aperture so much about what technology has to offer, and I think it's made me a better technologist. When we started this podcast many, many years ago, it was about answering those questions and demystifying tech. And I think we did a really, really good job doing that. 
It has been such an adventure for me. Thanks to our listeners, thanks to listeners who became friends that I met in person because of this podcast. I thank you. I thank all the listeners. We built this little community around Compiler, and it's going to continue. I'm just happy to have been a part of this journey. It's been an honor, and I can't complain—not one bit. So with so much gratitude, thanks for listening. Yeah. Johan and I are also taking a step back from the mic. Now we're going to be behind the scenes, still getting these stories, still getting these fresh perspectives on tech. But we are changing the way that we're making Compiler, and we are so excited for you to see what's coming. Yes. So you know what to do. You have to tell us what you thought about this episode. Some of the gems that our guests have dropped—what did you think about it? I mean, this was a really deep episode, if you ask me. So you have to hit us up on our socials at Red Hat, always using the #compilerpodcast. I appreciate you all for listening, and make sure you hit us up. We want to know what you thought about the series and this episode.

And that does it for this episode of Compiler. This episode was written by Kim Huang. Victoria Lawton wants to know all of your AI-generated decorating ideas. And thank you to our guests, Marc Mangus and Aakanksha Duggal. Compiler is produced by the team at Red Hat with technical support from Dialect. Our theme song is composed by Mary Ancheta. If you liked today's episode, please follow the show. Leave us a rating, leave us a review, and share it with someone you know. It really helps us out. And remember, do good, be kind, and use Linux. Until next time. Bye everyone. Bye.


Featured guests

Marc Mangus
Aakanksha Duggal
