As artificial intelligence (AI) continues to advance rapidly, there has been a surge in the development of AI-powered content creation tools like ChatGPT and DALL-E that offer users a range of personalized experiences. However, with this growth come concerns about the potential dangers and ramifications of such tools, from privacy risks to the displacement of human workers.
For example, the previous paragraph was written by ChatGPT for this story, illustrating the blurring of lines between AI- and human-generated content. And the accompanying image was created by directing DALL-E to produce an image of “the University of Toronto in the style of van Gogh’s The Starry Night.”
In recent months, headlines have outlined, on an almost weekly basis, the issues surrounding generative AI tools and content. Illustrators, graphic designers, photographers, musicians and writers have expressed concerns about losing income to generative AI and having their creations used as source material without permission or compensation; they also complain that the resulting work is without originality, artistry or soul.
Instructors are having to cope with students submitting work written by ChatGPT and are re-evaluating how best to teach and assess courses as a result. Institutions like U of T are examining the ramifications of this technology and providing guidelines for students and instructors.
Some scientific journal publishers are requiring authors to declare the use of generative AI in their papers, while other publishers forbid its use entirely, characterizing it as “scientific misconduct.”
At the same time, it hasn’t taken long for the tone of headlines to change from dystopically fearful to cautiously constructive. Many experts point out that the technology is here to stay, and our focus should be on establishing guidelines and safeguards for its use; others look to its positive potential.
A&S News spoke with members of the Arts & Science community and asked them what they think about generative AI tools, and what we need to do about them.
Assistant Professor Ashton Anderson, Department of Computer Science
We are increasingly seeing AI game-playing, text generation and artistic expression tools that are designed to simulate a specific person. For example, it is easy to imagine AI models that play in the style of chess champion Magnus Carlsen, write like a famous author, or interact with students like their favourite teacher’s assistant. My colleagues and I refer to these as mimetic models, because they mimic specific individuals, and they raise important social and ethical issues across a variety of applications. They affect the person being modelled, the “operator” of the model and anyone interacting with the model, and can be used either as a means to an end, e.g. to prepare for an interview, or as an end in itself, e.g. to replace a particular person with their “digital doppelganger.”
Will they be used to deceive others into thinking they are dealing with a real person, say a business colleague, celebrity or political figure? What happens to an individual’s value or worth when a mimetic model performs well enough to replace that person? Conversely, what happens when the model exhibits bad behaviour, and how does that affect the reputation of the person being modelled? And in all these scenarios, has consent been given by the person being modelled? It is vital to consider all of these questions as these tools increasingly become part of our everyday lives.
- For more, read Anderson and colleagues’ research article, Mimetic Models: Ethical Implications of AI that Acts Like You.
Professor Paul Bloom, Department of Psychology
What ChatGPT and other generative AI tools are doing right now is very impressive and also very scary. There are many questions about their capabilities that we don’t know the answers to. We don’t know their limits, or whether there are some things that a text generator is fundamentally incapable of doing. They can write short pieces, or write in the style of a certain person, but could they write a longer book?
Some people don’t think they’ll be capable of a task like that because these tools use deep-learning statistics: they predict what comes next and, in doing so, produce sentences. But they lack the fundamentals of human thought. And until they possess those fundamentals, they’ll never come close to writing like we do. We have many things they don’t: we have a model of the world in our minds, mental representations of our homes, our friends. And we have memories. Machines don’t have those, and until they do, they won’t be human, and they won’t be able to write, illustrate and create the way we do.
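For readers curious about the mechanics Bloom is describing, here is a minimal sketch of next-word prediction. It assumes the open-source Hugging Face transformers library and the freely downloadable GPT-2 model rather than ChatGPT itself, whose internals are not public, and the prompt text is purely illustrative; the point is only that the model builds text by repeatedly predicting one next token.

```python
# Minimal sketch of autoregressive text generation: the model scores every
# possible next token, the most likely one is appended, and the loop repeats.
# Assumes the Hugging Face `transformers` library and the small GPT-2 model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The University of Toronto is"   # illustrative prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")

for _ in range(20):                        # generate 20 tokens, one at a time
    with torch.no_grad():
        logits = model(input_ids).logits   # a score for every vocabulary token
    next_id = logits[0, -1].argmax().reshape(1, 1)  # pick the most likely next token
    input_ids = torch.cat([input_ids, next_id], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Everything that reads like composition emerges from that single repeated step; there is no separate plan, outline or model of the world, which is the limitation Bloom points to.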
- For more, read or listen to the interview on CBC’s The Current in which Bloom discusses whether AI can match human consciousness.
Associate Professor Paolo Granata, Media Ethics Lab; Book & Media Studies, St. Michael’s College
AI literacy is key. Whether something is viewed as a threat or an opportunity, the wisest course of action is to comprehend it. For instance, since there are tasks that AI does more effectively than humans, let’s concentrate on tasks that humans do better than AI. The emergence of widely accessible generative AI technologies should also motivate educators to reconsider pedagogy, assignments and the whole learning process.
AI is an eye-opener. The function of educators in the age of AI has to be re-evaluated. Educators should be experience-designers rather than content providers. In education, the context is more important than the content. Now that we have access to such powerful content producers, we can focus primarily on a proactive learning approach.
The Media Ethics Lab is at the forefront of digital literacy and will also be at the forefront of AI literacy. As a demonstration of that, we’ll be offering AI in the Classroom, a new fourth-year seminar taught almost entirely with AI tools, as part of the Book & Media Studies program. Students will develop skills in the use of AI, address a variety of issues concerning AI and its influence on society, and explore its potential for education.
Valérie Kindarji, PhD candidate, Department of Political Science
While public focus has been on the disruptive AI technologies themselves, we cannot forget about the people behind the screen using these tools. Our democracy requires informed citizens with access to high-quality information, and digital literacy is crucial for us to understand these technologies so we can best leverage them. It is empowering to have access to tools which can help spark our creativity and summarize information in a split second.
But while it is important to know what these tools can do to help us move forward, it is just as important to learn and recognize their limitations. In the age of information overload, digital literacy can provide us with pathways to exercise our critical thinking online, to understand the biases impacting the output of AI tools, and to be discerning consumers of information. The meaning of literacy continues to evolve with technology, and we ought to encourage initiatives which help us learn how to navigate the online information ecosystem. Ultimately, we will be better citizens and neighbours for it.
- For more, read the Globe & Mail op-ed, Digital literacy will be key in a world transformed by AI, written by Kindarji and Wendy H. Wong, Department of Political Science, the Munk School of Global Affairs & Public Policy, and the University of British Columbia.
Catherine Moore, Adjunct Professor, School of Cities, Faculty of Music
Would seeing a credit at the end of a film, ‘Original score generated by Google Music,’ alter my appreciation of the score? I don't think so. Music in a film is meant to produce an emotional impact. That’s its purpose. And if a score created by AI was successful in doing that, then it’s done its job — regardless of how it was created.
What’s more, generative AI “composers” raise the questions: What is sound, what is music? What is natural sound, what is artificial sound? These questions go back decades, with people capturing mechanical sounds or sounds from nature. You speed them up, slow them down. You do all sorts of things to them. The whole electro-acoustic music movement was created by musicians using technology to manipulate acoustic sounds to create something new.
So, I see the advent of AI-generated music as part of a natural progression in the long line of music creators using new technologies with which to create and produce — in order to excite, intrigue, surprise, delight and mystify listeners the way they always have.
Assistant Professor Karina Vold, Institute for the History & Philosophy of Science & Technology, Centre for Ethics, Schwartz Reisman Institute for Technology & Society
The progress of these tools is exciting but there are many risks. For example, there’s bias in these systems that reflects human bias. If you asked a tool like ChatGPT to name ten famous philosophers, it would respond with ten Western male philosophers. And when you then asked for female philosophers, it would still only name Western philosophers. GPT-4 is OpenAI’s attempt to respond to these concerns but unfortunately, they haven’t all been addressed.
Perhaps more importantly, in his book On Bullshit, [moral philosopher] Harry Frankfurt argues that ‘bullshitters’ are more dangerous than liars because liars at least keep track of their lies and remember what’s true and what’s a lie. But bullshitters just don't care. Well, ChatGPT is a bullshitter. It doesn’t care about the truth of its statements. It makes up content and it makes up references. And the problem is that it gets some things right some of the time, so users start to trust it — and that’s a major concern.
So, lawmakers need to catch up in terms of regulating these generative AI companies. There’s been internal review by some companies but that’s not enough. My view is there should be ethics review boards and even laws regulating this new technology.
- For more, read Vold’s ChatGPT: Rebel without a cause, on Daily Nous.