Reflecting on ‘Social Issues in Computing,’ 50 years later

November 29, 2023 by Krystle Hewitt - Department of Computer Science

The 1970s ushered in the birth of modern computing, bringing about the invention of the floppy disk, the launch of Atari Computers and the first demonstration of the cell phone.

University Professor Allan Borodin.

Even though personal computers were not yet commonplace, and the dawn of the internet era was still a couple of decades away, University of Toronto computer science faculty already recognized the many applications of computing technology and the potential societal issues they presented.

When he joined U of T in the fall of 1969, University Professor Allan Borodin proposed a course, Computers and Society, which he co-developed and co-taught with the late C.C. “Kelly” Gotlieb, a fellow professor in the Department of Computer Science who is widely regarded as the “father of computing in Canada.”

Developing the course inspired them to write a book on the topic, Social Issues in Computing, which helped shape the emerging field of computing and society. It tackled complex topics that remain highly relevant today, including information systems and privacy, computers and employment, shifts in power and computers in the political process.

We spoke with Borodin as he reflected on this work and the rapidly evolving tech landscape, 50 years later.

What motivated you to propose the course, and then write Social Issues in Computing?

I suppose it had something to do with being an undergraduate and graduate student in the U.S. during the civil rights movement and the war in Vietnam and becoming more aware of the injustices. Lectures and demonstrations about the war often involved discussion about the role and responsibilities of scientists. So, although I was not actively involved, I was interested in social issues.

Kelly and I decided that we would co-teach the course we introduced, Computers and Society, which is still offered today. We taught it together for a few years, and while teaching it, we thought we had enough notes and ideas to write a text, and so we did.

In the book, you and Professor Gotlieb spoke about the shared belief in the positive aspects of computer use, writing, “our belief then, without dismissing present and potential dangers, is that computers have already contributed and will continue to contribute positively toward the solution of difficult social problems.” Fifty years later, what are your reflections on this stance?

I would say up until the last few years, I felt pretty much the same. I didn’t think much had changed. But events over the last few years have made me more worried about the negative implications of computing: the spread of misinformation, the amplification, the siloing, this divisiveness in society — online social media greatly facilitated that. We certainly didn’t envision online social media; I think that’s a real present problem.

Social Issues in Computing was published in 1973.

Among other threats, there has been a growing danger in the loss of privacy. People are willing to give up their privacy to some extent, sometimes not so willingly, but this still could be managed.

In terms of employment, so far things have evolved. Fifty years later, it's not as if people are not working; the nature of work has changed. Many more people are in the service industry and office jobs than in manufacturing, and before that they were in agriculture. If there were going to be massive unemployment because of computerization, that would be worrisome, but I think we should be careful about that prediction. I don't see employment disappearing in the near future. But predictions are pretty much fraught with errors, both in the short and long term.

The Social Issues in Computing blog created by John DiMarco to commemorate the 40th anniversary of the book drew several contributions from the computer science community celebrating the visionary work of this text and sharing their own insights. How do you feel about seeing your work resonate with so many people, and is that something you had expected when the book was first published?

You always hope as an academic that your work will have some impact. Predicting the impact, no, I don’t know how much we thought it would really catch on or not catch on, and I still don’t even know to what extent people have been directly impacted by the book.

Sometimes the greatest contributions are things that people don't even have to think about, because they become so well understood that people accept them without knowing where they came from. I don't worry too much about it. I like to know that the work is being used, and I don't even know today how much people refer to the book. But even if they don't refer directly to that book, if there's some sort of succession of books and we're part of that, it's nice to know that it had some impact, even without being mentioned.

In your view, what are the biggest issues that society is currently facing when it comes to computers and technology?

The very immediate threat is this spread of misinformation by social media and bad actors. I think the impact on the divisions in society is really serious.

I don’t think we’re going to see dramatic employment shifts in the near future. We had a radical shift already with people working at home, but you see people coming back, so we’ll see what happens. And if it changes slowly enough, I think employment, for at least the foreseeable future, will not be widely disrupted.

Privacy, the fact that we’re willing to give up and sometimes have to give up information, that’s another thing, and that information should be guarded. That also means protecting yourself against bad actors who hold companies to ransom by threatening to shut down their operations. The ability to have secure computation is critical.

Then there’s machine learning: what is real today and what is still far enough off? Maybe our concern about machines replacing us as humans, certainly in terms of our human intellect, I think that’s not happening, but other people who know much more about machine learning than I do think it could happen, so I don’t know. I don’t see it, though.

Also, there’s the question of fairness in automated decision making, especially when you’re making decisions about social issues, such as who should get parole or who should get a mortgage. Ethical AI is not my field, but I can tell you as an outsider I do worry about fairness in decision making, because we can blindly trust these algorithms, even when we say we don’t. Nobody is a statistic; everybody’s an individual. How do we use some information, which you have to keep in mind, and yet not let it totally determine these decisions?

We’re giving up decisions based on some statistical information, and to a large extent, that’s how the decision is made. Somebody looks it over, but unless you get somebody who really is willing to buck the system and say, ‘No, I'm going to give this person a mortgage,’ then we’re giving up our decision making to algorithms. And we’d better be sure that whatever notion of fairness we can put into algorithms is there, but it’ll never be the same as taking into account something personal about the person.

We’re marking the 50th anniversary of your book but looking back at even just the last 10 years, we have seen some significant shifts in the tech landscape since then, particularly in AI — with much of it rooted in research that emerged from U of T’s Department of Computer Science. What makes this department at the University of Toronto a hub for technological innovation?

This department had three founders in different ways — Kelly Gotlieb, Tom Hull and Pat Hume. All of these people were quite visionary in their own ways. They were committed to Canada and to Toronto, and they convinced us not by words but by actions what a great department this would be to work in.

I think the overall environment of an excellent university, a superb department and a great city to live in, makes us reasonably attractive. When you have people like Steve Cook and Geoff Hinton, whether you’re in those areas or not, you know about them, their work is part of every undergraduate program and graduate program, and so our name’s there and we continue to attract excellent people.

We now have the MScAC program, which I’m a part of, and that has helped us tremendously with our image in industry. They recognize that we are an invaluable resource for them because they need our graduates and, I hope they would say, our ideas.

Toronto is a fantastic IT hub and that’s another thing that keeps us going.

We also have the ability to do things that certain other universities can’t really do as well, which is anything in relationship to the biological sciences. We have these hospitals right here, so that’s an advantage of being in a big urban centre.

So, I think it’s a combination of things: very good initial decisions in building up the department. We hired a lot of people, and over the years we supported AI even though there was a lot of controversy in the early years about AI being overhyped. But we’ve always had really good people doing AI here — among them Hector Levesque and John Mylopoulos. We had a cadre of very good people already, and then we attracted Geoff Hinton. Success breeds success, and we’ve been successful so far. We continue to attract people. I think the Vector Institute is making it very attractive to hire people in machine learning. We have all sorts of new programs and collaborations with other departments.

I would say there’s no one thing: success breeds success, and a great environment in the department, the university, the city and the country.

This interview has been edited for clarity and length.
