Co-Intelligence: Navigating the AI-Human Partnership

27m 34s

This week, we are joined by Ethan Mollick, author of the New York Times bestseller 'Co-Intelligence: Living and Working with AI'. Ethan introduces the concept behind his book — co-intelligence — and explains the need for a collaborative relationship between humans and AI in which humans retain firm control over AI systems. He explores the transformative potential of AI across various industries, including the inevitable growing pains as jobs change or become obsolete. Ethan shares insights from his personal experiences with AI in the classroom. He believes there are real risks associated with its increasing use, such as cheating and deep fakes...

Transcription

Hi, I'm Ben Wildavsky, and this is Higher Ed Spotlight. Today we're diving into the captivating world of AI and its impact on education with our special guest, Ethan Mollick, a professor of entrepreneurship at the University of Pennsylvania's Wharton School. But first, let's rewind and uncover Ethan's path into the realm of AI. It all began with a spark of curiosity, much like a wizard's quest for knowledge. Picture a curious mind fueled by the possibilities of technology. Ethan embarked on a quest, not with a sword and shield, but with a thirst for understanding how AI could transform our educational landscapes. Okay, by now some regular listeners of Higher Ed Spotlight might be finding this introduction a bit different from my usual style. Transparently, it was penned by ChatGPT. My producer asked it to write an intro that was, quote, conversational, fun, and professional, with a good creative hook. And here we are, deep into wizard references. As you'll discover in my chat with Ethan Mollick about his newly published New York Times bestseller, Co-Intelligence: Living and Working with AI, he does touch on AI's imperfections. Nonetheless, he contends that when AI is paired with vital human input, anything is possible. But for today, let's continue with my intro, purely generated by AI. As we embark on this adventure together, we'll unravel the mysteries of AI in education. But beware, for along with its promise come shadowy risks and ethical dilemmas that lurk in the digital realm. Here's my conversation with Ethan Mollick.

You've published a book that's called Co-Intelligence: Living and Working with AI, and I want to start by reading from one of the reviews. Here's what it says, quote: the book insightfully delves into the symbiotic relationship between humans and AI, a subject close to my digital heart. Mollick dissects the complexities and unfolds the potential of our intertwined futures with nuance and foresight. For readers and AIs alike, this is a masterful guide to understanding our collective journey toward an intelligent partnership. Now, I'm not just sharing this to flatter you, but because of something our listeners may have picked up on from a couple of clues in what I read. Who wrote this?

I think I used Claude to do that, if I'm remembering correctly. Part of the reason I want to make that clear is that I flag all the AI writing in the book pretty clearly. I think in that case, it was giving Claude the entire book and saying, write a really good review, make it awesome, make it concise, and make it so someone wants to read it. I think prompting does take skill, but it takes more experience than skill. I think people make it seem a lot harder than it is to do successfully.

Gotcha. Well, let's talk a little bit more about the title, which is also a core theme of the book, co-intelligence. How did you come up with that? And why is the idea of co-intelligence a good way to think about the whole concept of AI that you're trying to get across to readers?

I mean, there are two important parts to that, the intelligence and the co part. So, AI is effectively a kind of intelligence, not a human intelligence. I refer to it as alien at various points in the book, but it's trained on human material and it kind of acts human in lots of different ways. So, it is an intelligence. Like if I asked you to dig a ditch in your backyard right now, you wouldn't bring your 20 strongest friends together to dig a hole. You would hire a backhoe, right?
We could never hire a backhoe with a mind until now. So, there is intelligence we can bring to the table that's valuable. And then the co part is because right now, the best way for these systems to work is with humans. And if these systems aren't working with humans, the AIs make mistakes and end up undermining us. So, part of the prescription of the book is to figure out how to work with these systems.

Well, you cover in the book a really fascinating mini-history of AI. And it actually brought back a personal memory for me of being a kid in the 1970s in Berkeley, California, where they had an early computer terminal at a place called the Lawrence Hall of Science. This was like '73, '74. It used a program called ELIZA. And ELIZA, which is something you discuss in the book, basically mimics being a psychotherapist. And it leads you into these incredible dialogues where you'll ask it something and it will respond to a question with a question. And it's kind of hilarious. And I just found myself wondering how a program like ELIZA laid the groundwork for what we see today, which is this much more transformational use of AI.

So, there is not really such a direct path from ELIZA, technology-wise, but there's certainly a strong spiritual path, right? Which is, from the very beginning, from Turing's famous paper on this, the goal has been: can machines think like humans? Can they act like humans? Can we build a self-aware machine, a machine that acts with human intellect or intention? And so, that's been an obsession for a long time. So, ELIZA was a Rogerian psychotherapist program built in the 60s. And the whole idea was, as a Rogerian psychotherapist, you echo people's information back to them. So, you say something along the lines of, you know, why do you think that? A very classic psychotherapist move. The program was able to use a very simple formula to make you feel like you were interacting with a person. And I think it's an indicator of how much people want to be fooled by these systems, to some extent. It's very easy, even for a very simple system, to make you feel like it's alive. We love sensing that there are people behind objects and things. We talk to our dogs and anthropomorphize them, and to ships, and to the weather. It's not surprising that we were also able to do this with computer systems. But there's a large gap in practice between what we were able to achieve by just fooling people and what an AI can do today with a large language model, which can actually deliver at least some aspects of reasoning and empathy and things like that.

Well, you know, I actually want to focus for our discussion today on this really nice image that you use in the book to help readers understand what an LLM is. And as you just said, that's a large language model, which is at the heart of generative AIs. So, you say that it's useful to imagine an LLM as a really diligent apprentice chef. So, explain why you use that metaphor, and how can it help us understand what AIs can and can't do well?

So, there are two things there that are interesting. I'll just do a little precursor thing. As I talk about later in the book, I use AI to help me. There's almost no AI writing in the book, and what there is, I identify as AI writing, but I use the AI a lot. And one of the things that I used the AI for was: give me 20 possible analogies I could use to explain how LLM training works in a way that would be accessible to a lay audience.
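
A step back to ELIZA for a moment: the "very simple formula" Mollick describes is essentially pattern matching plus pronoun swapping. Here is a minimal, hypothetical ELIZA-style sketch in Python; the rules are illustrative inventions, not Weizenbaum's actual script:

```python
import re

# Swap first- and second-person words so echoes point back at the speaker.
PRONOUNS = {"i": "you", "am": "are", "my": "your", "me": "you",
            "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    """Flip pronouns in a captured fragment ("my exams" -> "your exams")."""
    return " ".join(PRONOUNS.get(word.lower(), word) for word in fragment.split())

# A few illustrative rules: match part of the statement, echo it as a question.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {}."),
]

def respond(statement: str) -> str:
    """Return a Rogerian echo of the statement, or a stock prompt to go on."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return "Please go on."

print(respond("I feel anxious about my exams."))
# -> Why do you feel anxious about your exams?
```

A handful of rules like these were enough to make 1960s users feel heard, which is Mollick's point about how readily we anthropomorphize.
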
And the chef idea was 100% the AI's idea.

Oh, that's awesome.

So, I will give it credit for that. You're not the only person who liked that. My editors liked it too. And it's an example of how you can use AI to do work with you, right? Even though I'm doing the writing myself. So, the idea is that these large language models learn by taking in a whole bunch of information. In the case of the chef, it's all these ingredients. In the case of a large language model, it's all these words. And they learn by trial and error the relationships between words or parts of words called tokens, or in the chef example, ingredients. And so, they might try many different ingredients based on recipes that they've seen until they find relationships between them that start to work. And then that's what they build on. In the same way, language models learn the hidden relationships between words or parts of words. And they can then predict what word comes next, just like the apprentice chef can predict what they should put next into a dish, given what happened before.

Okay. So, I love that. I did not know that the chef idea came from an AI consult. And as you said, you're extremely transparent about this, which, frankly, is one of the things that makes the book pretty engaging. And part of your whole takeaway, I think, is that we need to accept that this is something we're all going to be living with and working with and using as a tool. And it's not something to be ashamed of or embarrassed about or to hide, as long as you keep the big picture in mind. But let's turn to higher ed, because of course, this is a higher ed podcast. And I really want to zero in on what AI means at colleges and universities. And of course, looking at the book, I didn't have to wait long to read in your introduction that when ChatGPT was released in November 2022, your students started using it. And they had pretty memorable reactions, some of which were positive and some negative. So, can you describe particular aspects of AI that they found exciting?

Sure. I teach an entrepreneurship class. So, by the end of my first class, I had a student who had already used it: they stopped paying attention and built a full working prototype for their product during the class. By the second class, after I taught people to use AI, they were using it for all sorts of purposes: ideation, marketing slogans, helping them with homework, right? Cheating and non-cheating. But I think it also seems ominous, right? What is this thing that thinks like a person, kind of? It's not a person and it doesn't actually think, but it seems to. What does that mean for us as people? What does that mean for their jobs? People are becoming communications writers, marketing writers. The AI does good marketing writing. What does that mean for their career? It can read an image. What does that mean for being a radiologist? I think there are a lot of open questions there.

So, in your opinion, should they be worried about their jobs?

Let's take the baseline, which is that most economists' expectation is job growth as a result of AI. That's been typical with other waves of technology. We don't know if that's the case here or not, right? There are reasons to be worried or not. But even in the case where there is job growth, right, where this looks like the industrial revolution or something similar to that, there's still going to be a lot of disruption. I mean, job categories change. Living through that could be hard.
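
To make the apprentice-chef analogy from a moment ago concrete, here is a toy next-word predictor in Python. Real LLMs learn relationships among billions of tokens with neural networks; this sketch uses made-up bigram counts, but it shows the same core move of predicting what comes next from what came before:

```python
from collections import Counter, defaultdict
import random

# Toy "recipes" the apprentice has seen (invented text for illustration).
corpus = ("the chef tastes the soup then the chef adds salt "
          "then the chef tastes the soup again").split()

# Count which word follows which, the way the apprentice learns which
# ingredient tends to come next given the previous one.
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    if not counts:  # dead end in the tiny corpus; restart anywhere
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate text by repeatedly predicting what comes next.
word, output = "the", ["the"]
for _ in range(10):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # e.g. "the chef tastes the soup then the chef adds salt"
```
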
So, I think, you know, in most cases, people already working are going to see changes in the nature of their jobs. I mean, you do many things. You write books. You do research. You teach. You do podcasts. There's probably a lot of stuff in that bundle that you'd like to do more than others. There might be things in that bundle that you don't like to do. Maybe you don't like to fill out expense reports, or you wish you had help doing preliminary research or arranging these kinds of calls. Handing that stuff to your AI changes your job but doesn't destroy it. So, I think part of it is thinking about how a job bundle changes.

Okay, well, that's very helpful. One thing that struck me was, I think you said that in your classroom at the Wharton School at the University of Pennsylvania, the students stopped raising their hands so much.

Yeah, there are a lot of things that are going to change in education. One of them is that we have a whole bunch of social constructs that mattered before that don't matter now. If I make a mistake and don't explain something well, I expect someone to raise their hand. If I've built a good classroom environment, they will volunteer: explain this to me better. And that question is indicative of many other people's. Why would you want to expose your ignorance anymore when you could just ask the AI, explain this to me like I'm 10? And so, I think you'll see trends like that happen in many classrooms.

I think I once asked it to take a book, I can't remember what it was, but I did ask it to rewrite it for a fourth or fifth grader. And it was a great exercise because, you know, as a writer, you're always trying to think, or I think you ought to always be trying to think, about how to get to the essence and to be simple. And AI obviously has a gift for that. But I want to dive into the real nitty-gritty of how we should think about using AI thoughtfully and helpfully. And you talk about the concept of the human in the loop, which has been quite important in computing and automation. From your perspective as an entrepreneurship expert, because you're not a computer scientist by training, but you obviously spend a huge amount of time in this world, why is this concept of the human in the loop so crucial, especially as we think about AI's integration into various industries?

So the idea of a human in the loop comes from control systems: the idea that there should be a person involved in major decisions. I think that's important, but beyond that, there's the question of, look, as a person, what are you good at, or what are you best at? And I think doubling down on what we're best at is great, especially with the current state of AI, which is, you know, often at the 50th to 80th percentile of human performance, which is amazing. But it also means that whatever task you're best at, you usually are better than the AI at that task, and you like it better. And so some of this is about doubling down on what you do well and what you like to do, as well as thinking about the broader future.

Okay. And I also saw in the book that you have, probably for a while now, required AI in your classes. So it doesn't sound as though there was a big, anguished, hand-wringing moment about should-I-or-shouldn't-I; you just did it. And you also say that since you started doing that, you just don't see as much badly written work. So when did you start using AI in your teaching, and what motivated you to do that?
So, just to take a step back on the educational journey. I have been interested for a very long time in how we teach at scale in post-secondary and professional environments. Because we have all this evidence that small amounts of things like business training (I teach entrepreneurship) make a huge difference in people's lives. And the only way we were doing scaled teaching before was MOOCs. And I've done some MOOCs with, you know, a couple hundred thousand people, and that's great. But it's a very limited form of learning. It doesn't match what we know about how learning is effective, right? Watching a video and responding to quiz questions is a terrible way to teach. So, for the last 10 years, I have been building games and simulations to teach at scale. I've released dozens of these. They've been used by 100,000 students in various kinds of settings. And they're all about how we bring teaching and education to anyone, anywhere, through simulated experiences rather than just through instruction. So I have been teaching using technology for a very long time. I run organizations that build this stuff. I've written books on it. This is something I spend a lot of time thinking about and caring about. When AI started to come along, I recognized its value as a teaching tool even before ChatGPT came out: I had my students "cheat" by using pre-ChatGPT large language models to write essays, so they could see what was coming. So this is not a new-technology-in-the-classroom thing. It's not, let's just slap the latest technology on and, yay, see what happens. I've been thinking deeply about interaction and learning for a while. When the original version of ChatGPT came out, it made lots of errors and mistakes. That's the free version of ChatGPT, GPT-3.5. It's still available; many people still use it as their main system. And so it was perfectly reasonable for me to tell students, do work with ChatGPT, but you're responsible for your own errors. That doesn't make sense in a world of GPT-4, because it makes fewer errors than my students. And I teach at a really good school. My students are very smart, and I respect them a lot, but GPT-4 makes errors far less often than they do. So you can't just say, use AI for everything. You actually have to do careful assignment design. And I've been publishing paper after paper with my wife on this, you know, prompts we can use, approaches to teaching that take advantage of the strengths and avoid the weaknesses of AI.

And by the way, speaking of your wife, am I correct that I saw in the acknowledgements that she was one of the people who helped you come up with the title?

She helped me with all the prompts, because she's one of the best prompt engineers around, but I got help with the title from my sister, who is a Hollywood producer.

Is that right?

Yes. My sisters and I are all the children of an oral surgeon in Milwaukee, with no prior family connections to academia or Hollywood, but we've started to do interesting things.

Higher Ed Spotlight is sponsored by the Chegg Center for Digital Learning. For more information, go to www.higheredspotlight.com.

I think many people who worry about this, who are not as deeply immersed in it as you are, do worry about some of the misuses of AI. And I'm curious, from your point of view, what are some of the risks?

I mean, there's a ton of risks. So again, if we're speaking of the higher education system, everybody's cheating.
I mean, they were cheating before, but now everyone's really cheating, right? It does all your homework, I guarantee. Anyone who says, no, it doesn't do mine: you're wrong. In fact, I had a discussion in one of my classes, an executive MBA class, a couple of days ago, and someone said, well, it doesn't do, you know, I won't reveal the class, but it doesn't do this quantitative homework. And someone else said, no, no, you're just not prompting it correctly. It absolutely does. So, I think part of this is, you know, there are threats to education. Essays were important. Now they're gone. What do we think about that? What do we do next? Obviously, deep fakes are a big issue. There are issues of malicious use and hacking. I mean, there are a lot of downside risks here. It's a transformational technology. It's going to do a lot of good and a lot of bad.

Well, you know, I was struck by, I mean, obviously those are all things to think about, but there's an almost more prosaic, yet I think important, risk that you talk about. You actually say that even if students do the work themselves to improve a draft, or if they use AI in a really thoughtful, purposeful way, there are different kinds of risks. And one is that the framing of a problem or of a solution that AI gives you may reduce your creativity. Can you talk more about that?

So, a couple of things, right? We don't know; it doesn't necessarily reduce your creativity. Giving you a frame for something: if you get somebody else's work to edit, you're going to make less radical changes than if you start from a blank page. On the other hand, people hate starting from blank pages. That's not necessarily an inherent creativity difference between AI and humans. The idea is that if you use co-intelligence the wrong way, it's going to lock in your path, because you'll want to keep continuing what the AI does, and you'll remove your own thinking. So it's not an AI risk in particular so much as a question of method.

Well, there's a related example that I did want to push you on a little bit. You talk about how some of your students say they weren't taken seriously in the job market because their writing wasn't strong. And then you say that now, thanks to AI, they get job offers based on the strength of their experience and their interviews. Maybe that sounds good to many people, and I may be biased as a professional writer, but I actually don't think there's a real distinction between how you think and how you write. It's precisely the nature of good writing that it's persuasive and compelling exactly because it reflects strong thinking. So don't you risk hurting your students if you let them believe otherwise?

Well, we don't know the answer to that. I'm a writer too, right? I write books and a blog, and every writer you talk to says writing is thinking, because that's how we think. Is that actually true? We don't actually know. And so part of this question is: I have very smart students for whom English was their third language, like the person we're talking about. They grew up in a disadvantaged environment. Is that somebody we should say can't think because their writing is terrible? That doesn't feel like a reasonable starting point. And we don't actually know if it's true. And obviously we need to teach writing, and we should be teaching writing.
And we're going to have to teach writing probably the way we teach math now. There'll be a lot more in-class exercises and interactions. I do think "writing is thinking" is an important question. Writing is definitely an indicator of thinking, but what happens when you use AI to help you with your writing? We don't know the answer to that question. And also, by the way, that ship has sailed. You're not going to get anything that's not written well anymore. So I think that's a crisis. We use writing as an indicator of many different things. College essays don't make sense anymore. That's bad, right? Now, on the other hand, you could say, well, if you had the money, you were hiring a coach to help your kid out. So is the essay really an indicator of their thinking, or is it an indicator of how much support they're getting? This is a very tough road, right? So I think we're going to see a transformation in the meaning of writing. That doesn't necessarily mean writing is not important, but it means we have to rethink what it means. Similarly, in large organizations, what middle managers produce is writing, right? No one necessarily reads it; it's reports and things like that. Does it matter if they're automating that? Does that mean they're automating their thinking? A related issue: I write letters of recommendation for people all the time. And the whole point of a letter of recommendation for a student is that I'm purposely setting my time on fire as an indication that I care about the student. I spend 45 minutes, a decent amount of time, compiling information and writing a letter, and it's a good letter. If I give the AI the student's resume, tell it they got an A in my class, I'm Ethan Mollick, look me up, thumbs up, and the job they're applying for, and say, write a letter of recommendation, I will get a much better letter of recommendation in a minute. Do I send the letter that took a minute to write, or do I send the letter that took 45 minutes? If I ask my students, they'll all go for the one-minute letter, but that destroys the meaning of a letter of recommendation. I think we have a lot to think about on that front.

You know, all these questions give us so much to think about, and they actually tie to the next thing I wanted to ask you, which is something I know personally from my own family: people who are pretty horrified by AI, including the very basic fear that kids are just not going to learn to think for themselves. And I was struck by the way you seem to be trying to ratchet down the panic from traditionalists. You said the advent of AI has a lot in common with the advent of pocket calculators, which I remember well from around 1977 or '78, when I was in middle school. So what happened then, and why should we not be panicking?

I think there's a lot of change coming. And, by the way, some of this is pent-up change, because people have been cheating using the internet for a long time, and we haven't wanted to deal with that as teachers, right? We've just kind of turned a blind eye to it. Before AI came out, there were 20,000 people in Kenya whose full-time job was writing essays for college students. So this is not a new thing. We just have to confront it; we were able to ignore it before. And I think calculators are a really interesting example. They caused panic, and then we switched to using them. I think we're going to see the same thing.
I think writing is going to have to be treated in a similar kind of way. Starting in middle school, you'll have a writing class throughout your entire educational experience that will involve not using AI for writing. And that will be a complement to other courses you're doing. And we'll treat AI just like calculators: there will be some stuff you have to do by hand, and we'll either enforce that or figure out other ways to solve the problem. It's not the end of the world. I mean, one of the things I think about, as someone who studies pedagogy a lot, is that we don't have a lot of evidence that essays are the queen of assignment types. We all assign essays, but very few people outside of writing instructors think hard about: is this the right length of essay? Is this the right kind of structure? What am I getting from this essay? We just assign it, assuming magic happens during an essay. And I think this is going to force us to be more deliberate about how we use handmade work.

Yeah, yeah. So you do say in the book, as you just said to me, yes, students will cheat with AI. It's already happening. You're suggesting we may have been neglecting this long before LLMs; we just didn't want to deal with it. But you also write that AI is not going to replace the need to learn to write and think critically. And you say that people like you, professors, have to think differently about what and how you teach. Can you elaborate a little, with a couple of examples?

Yeah, there's a chance for us to reconstruct pedagogy.

I got you.

We have a lot of research about how teaching operates. So, for example, lectures can be very good, but they have to be structured in the right way, with active learning, the opportunity for interaction, and the right kinds of support. Too many people do sage-on-the-stage, where they just stand up and say stuff and hope someone learns something. This is a chance to rethink that. We know that, generally, active learning and participation really matter, and many of our classes don't involve enough of that. Well, if you have tutoring outside of class, that lets you do flipped-classroom active learning inside of class. I think we need to think deliberately about our assignments. Which assignments are designed to build mastery? Which assignments are designed to illuminate learning gaps? We need to be thinking about those things. So, I think we can solve these problems. AI is actually very helpful for these sorts of things, but it requires deliberately thinking through why we're doing them and what they mean, rather than ritualistically repeating assignments someone else designed. There's always a class at every university, the famous one, that assigns a 120-page paper right at the end. What's the point of a 120-page paper? Is this a valuable thing? Is it fluff? Is it just make-work? What are we trying to do with our assignments? I think that's a valuable thing for us to think about.

Yeah, so really interrogate, you know, the purpose behind it all. Another way to pose this question would be: what are some of your highest-priority suggestions for how people can take advantage of the strengths of AI while avoiding its weaknesses? There's a great upside, but there's a big downside. So, how should we be approaching it?
For students, it's the exact same problem we've always had, which is that the feeling of learning and the actual practice of learning are quite separate from each other. Speaking as someone who's spent 10 years building games for teaching: there is no way to make learning 100% fun across every topic. And unfortunately, grind and work turn out to matter a lot for retaining information. So, to the extent that you're using AI to shortchange your learning, you're shortchanging yourself. Now, at the same time, AI is a great explainer and tutor. You now have a tutor in your pocket. So, I'd much rather see people say: explain why I can't understand this assignment; give me another approach to how I might do this. I think there are some really exciting angles for asking AI to help you do work, to give you suggestions, to help you get unstuck, to meet you at your level, to help you understand a concept you missed. That's what I'd be urging students to do. It's a valuable tool.

Yeah. Well, you have some pretty inspiring references in the book; certainly they were inspiring to me, as someone who's been thinking about education for a while. You talk about how AI may be completely changing how we educate at a time when there are still people around the world who just don't have great access to education and badly need more of it. So it's a very big picture: the democratization of learning. It all sounds great. But we've also heard a lot of inspiring words like this before. And you mentioned the MOOCs. I'm thinking about all the MOOC hype about a dozen years ago. You know, education was going to be available at low cost, to huge numbers of people, at high quality. Why is this time different?

A few things. One is, MOOCs moved the needle. I mean, they weren't the universal solution to all problems, but they did increase the supply of education in a way that matters, right? So, I'm not claiming this is the be-all and end-all. But there's been a long-term dream of democratizing access to education, and especially tutoring; high-dose tutoring seems to be very useful, and we haven't been able to do it at scale. So, now we have a tool that can credibly do this stuff at scale. I think it's incumbent on us to figure out how to do it. Will it solve every problem in the universe? No. But our early evidence is that AI tutors and advisors seem to actually make a huge difference, especially when used in the neediest cases. There's a really interesting study in Sierra Leone suggesting AI helps teachers teach better. There's a study out of Kenya showing that people who get AI business advice see 18% higher profitability if they're already top performers. These are big numbers. We need to explore them further. So, the idea that we just say, ah, well, other methods haven't worked, so this won't work, doesn't make a lot of sense to me. There are some very big differences. One of the great things about this is the inherently democratizing potential. Right now, anyone in Uganda or Mozambique has free access to GPT-4, the best AI system in the world, thanks to Microsoft. That's incredibly powerful. I can give you a paragraph to type into the AI, and it turns the AI into a pretty good tutor. We've never been able to do that at scale before. It can empower educators rather than being centralized, right? I'm talking to educators all the time who are building their own AI approaches to assignments.
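
Mollick's "paragraph you type into the AI to get a tutor" can be illustrated with a short sketch. This one assumes the OpenAI Python SDK and a placeholder model name, and the tutor prompt is a paraphrase in the spirit of the prompts he describes publishing, not a quote of them:

```python
# Requires the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

# A tutoring prompt in the spirit Mollick describes: guide, don't answer.
TUTOR_PROMPT = (
    "You are a friendly tutor. Do not give answers directly. "
    "First ask what the student already knows, then explain one step at a "
    "time, check understanding with a question after each step, and adapt "
    "your explanations to the student's level."
)

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model would do
    messages=[
        {"role": "system", "content": TUTOR_PROMPT},
        {"role": "user", "content": "I don't understand price elasticity."},
    ],
)
print(response.choices[0].message.content)
```
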
So, I think there's an inherently democratizing aspect to this that wasn't true elsewhere. But it's early days. Systems take time to change. No solution is a panacea. But I don't think saying that other things haven't always worked the way we thought is a good answer either.

Well, that's great. And, you know, I can't resist throwing in just one more question. I'm thinking about what I imagine to be your incredibly intense schedule, with everybody looking for guidance about this whole new world of AI. Do you feel like this is giving you a chance to be something of a guru at this really crucial moment?

I mean, yes. Part of our goal in making everything free and available, publishing all our prompts, and being very open about everything is that I think this matters. And I think we have agency at this moment to make a difference in what we do with AI. But we have to act. And I worry that there's too much discussion and policy debate and not enough demos. That's the goal, right? To actually show people that this can matter, to run experiments that show it matters, to produce results. And then we work on it as a community. This is not my job; this is everybody's. But I think we need some positive examples to follow.

That was my conversation with Ethan Mollick. I'm Ben Wildavsky. Thanks for listening. Higher Ed Spotlight is produced by Ben Wildavsky in partnership with Antica Productions and sponsored by the Chegg Center for Digital Learning. If you enjoyed this episode, please rate and subscribe.

Key Points:

  1. Ethan Mollick, a professor at the University of Pennsylvania's Wharton School, discusses the impact of AI on education.
  2. He emphasizes the importance of human input in conjunction with AI to maximize potential.
  3. Mollick introduces the concept of co-intelligence, highlighting the symbiotic relationship between humans and AI.
  4. AI integration in education raises concerns about cheating, job displacement, and impacts on creativity.
  5. Broader risks discussed include deep fakes, hacking, and other malicious uses of the technology.

Summary:

In this conversation, Ethan Mollick stressed the importance of combining human input with AI capabilities to achieve the best results in educational settings. He introduced the concept of co-intelligence, underscoring the interdependence between humans and AI. The discussion also covered concerns about AI in education, including increased cheating, potential job displacement, and risks to creativity, while acknowledging the technology's transformative potential alongside dangers such as deep fakes and hacking.

FAQs

What is co-intelligence, and why is it important?
Co-intelligence refers to collaboration between humans and AI. It is crucial because it combines human intelligence with machine intelligence to achieve better results in education.

Why keep a human in the loop?
Human involvement in decision-making is essential to ensure adequate control and to take advantage of the unique strengths each person brings compared to AI.

How has AI changed the classroom?
The use of artificial intelligence has changed the dynamics of university classrooms, from how assignments get done to how students and instructors interact.

What are the main risks?
The risk of cheating and plagiarism increases with the use of artificial intelligence. There is also concern that overuse may limit students' creativity.

How should students use AI?
Students should use artificial intelligence strategically, balancing the assistance the technology provides with their own creativity, so they don't rely exclusively on the AI's suggestions.
