Four AI trends to watch in 2024

As artificial intelligence continues to develop rapidly, the world is watching with excitement and apprehension — as evidenced by the AI buzz in Davos this week at the World Economic Forum’s annual meeting.

University of Toronto researchers are using AI to advance scientific discovery and improve health-care delivery, exploring how to mitigate potential harms and finding new ways to ensure the technology aligns with human values.

“The advancement of AI is moving quickly, and the year ahead holds a lot of promise but also a lot of unanswered questions,” says Monique Crichlow, executive director of the Schwartz Reisman Institute for Technology and Society (SRI). “Researchers at SRI and across the university are tackling how to build and regulate AI systems for safer outcomes, as well as the social impacts of these powerful technologies.”

“From health-care delivery to accessible financial and legal services, AI has the potential to benefit society in many ways and tackle inequality around the world. But we have real work to do in 2024 to ensure that happens safely.”

As AI continues to reshape industries and challenge many aspects of society, here are four emerging themes U of T researchers are keeping their eyes on in 2024:


1. AI regulation is on its way

U.S. Vice President Kamala Harris applauds as U.S. President Joe Biden signs an executive order on the safe, secure, and trustworthy development and use of artificial intelligence on Oct. 30, 2023. Photo: Brendan Smialowski/AFP/Getty Images.

As a technology with a wide range of applications, AI stands to affect all aspects of society — and regulators around the world are scrambling to catch up.

Set to pass later this year, the Artificial Intelligence and Data Act (AIDA) is the Canadian government’s first attempt to comprehensively regulate AI. Similar attempts by other governments include the European Union’s AI Act and the Algorithmic Accountability Act in the United States.

But there is still much to be done.

In the coming year, legislators and policymakers in Canada will tackle many questions, including what counts as fair use when it comes to training data and what privacy means in the 21st century. Is it illegal for companies to train AI systems on copyrighted data, as a recent lawsuit from the New York Times alleges? Who owns the rights to AI-generated artworks? Will Canada’s new privacy bill sufficiently protect citizens’ biometric data?

On top of this, AI’s entry into other sectors and industries will increasingly affect and transform how we regulate other products and services. As Gillian Hadfield, a professor in the Faculty of Law and the Schwartz Reisman Chair in Technology and Society, Policy Researcher Jamie Sandhu and Faculty of Law PhD candidate Noam Kolt explore in a recent policy brief for CIFAR (formerly the Canadian Institute for Advanced Research), a focus on regulating AI through its harms and risks alone “obscures the bigger picture” of how these systems will transform other industries and society as a whole. For example: are current car safety regulations adequate to account for self-driving vehicles powered by AI?

2. The use of generative AI will continue to stir up controversy

Microsoft Bing Image Creator is displayed on a smartphone. Photo: Jonathan Raa/NurPhoto/Getty Images.

From AI-generated text and pictures to videos and music, use of generative AI has exploded over the past year — and so have questions surrounding issues such as academic integrity, misinformation and the displacement of creative workers.

In the classroom, teachers are seeking to understand how education is evolving in the age of machine learning. Instructors will need to find new ways to embrace these tools — or perhaps opt to reject them altogether — and students will continue to discover new ways to learn alongside these systems.

At the same time, AI systems created more than 15 billion images last year by some counts — more than the entire 150-year history of photography. Online content will increasingly lack human authorship, and some researchers have proposed that by 2026 as much as 90 per cent of internet text could be generated by AI. Risks around disinformation will increase, and new methods to label content as trustworthy will be essential.

Many workers — including writers, translators, illustrators and designers — are worried about job losses. But a tidal wave of machine-generated text could also have negative impacts on AI development itself. In a recent study, Nicolas Papernot, an assistant professor in the Edward S. Rogers Sr. Department of Electrical and Computer Engineering in the Faculty of Applied Science & Engineering and an SRI faculty affiliate, and his co-authors found that training AI on machine-generated text made the resulting systems less reliable, a failure mode they call “model collapse.”
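The intuition behind model collapse can be illustrated with a toy simulation — a hypothetical sketch, not the setup of Papernot and co-authors’ study. Here a “model” simply learns the empirical distribution of its training corpus and generates a synthetic corpus of the same size; when each generation trains on the previous generation’s output, diversity can only shrink, because any token absent from one synthetic corpus can never reappear in the next:

```python
import random

def train_and_generate(corpus, rng):
    """A toy 'model': learn the empirical distribution of the training
    corpus, then generate a same-sized synthetic corpus from it."""
    return [rng.choice(corpus) for _ in range(len(corpus))]

rng = random.Random(0)
corpus = list(range(50))      # 50 distinct "tokens" to start with
diversity = [len(set(corpus))]

# Each generation of the model is trained on the previous one's output.
for generation in range(30):
    corpus = train_and_generate(corpus, rng)
    diversity.append(len(set(corpus)))

# Diversity is monotonically non-increasing: rare tokens are lost first,
# and once a token is gone it can never be generated again.
print(diversity[0], "->", diversity[-1])
```

Real generative models add noise of their own rather than purely resampling, but the simulation captures the core dynamic the study describes: successive generations forget the tails of the original data distribution.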

3. Public perception and trust of AI are shifting

A person walks past a temporary AI stall in Davos, Switzerland. Photo: Andy Barton/SOPA Images/LightRocket/Getty Images.

Can we trust AI? Is our data secure?

Emerging research on public trust of AI is shedding light on changing preferences, desires and viewpoints. Peter Loewen — the director of the Munk School of Global Affairs & Public Policy, SRI’s associate director and the director of the Munk School’s Policy, Elections & Representation Lab (PEARL) — is developing an index measuring public perceptions of and attitudes towards AI technologies.

Loewen’s team conducted a representative survey of more than 23,000 people across 21 countries, examining attitudes towards regulation, AI development, perceived personal and societal economic impacts, specific emerging technologies such as ChatGPT and the use of AI by government. They plan to release their results soon.

Meanwhile, 2024 is being called “the biggest election year in history,” with more than 50 countries headed to the polls, and experts expect interference and misinformation to hit an all-time high thanks to AI. How will citizens know which information, candidates, and policies to trust?

In response, some researchers are investigating the foundations of trust itself. Beth Coleman, an associate professor at U of T Mississauga’s Institute of Communication, Culture, Information and Technology and the Faculty of Information who is SRI’s research lead, is leading an interdisciplinary working group on the role of trust in interactions between humans and AI systems, examining how trust is conceptualized, earned and maintained in our interactions with the pivotal technology of our time.

4. AI will increasingly transform labour, markets and industries

A protester in London holds a placard during a rally in Leicester Square. Photo: Vuk Valcic/SOPA Images/LightRocket via Getty Images.

Kristina McElheran, an assistant professor in the Rotman School of Management and an SRI researcher, and her collaborators recently found a gap between workplace buzz around AI and the share of businesses actually using it — but there remains a real possibility that labour, markets and industries will undergo massive transformation.

U of T researchers who have published books on how AI will transform industry include: Rotman faculty members Ajay Agrawal, Joshua Gans and Avi Goldfarb, whose Power and Prediction: The Disruptive Economics of Artificial Intelligence argues that “old ways of doing things will be upended” as AI predictions improve; and the Faculty of Law’s Benjamin Alarie and Abdi Aidid, who propose in The Legal Singularity: How Artificial Intelligence Can Make Law Radically Better that AI will improve legal services by increasing ease of access and fairness for individuals.

In 2024, institutions — public and private — will be creating more guidelines and rules around how AI systems can or cannot be used in their operations, and disruptors will challenge the established hierarchy of the marketplace.

The coming year promises to be transformative for AI as it continues to find new applications across society. Experts and citizens must stay alert to the changes AI will bring and continue to advocate that ethical and responsible practices guide the development of this powerful technology.