What does the AI-enabled future look like?

Artificial intelligence (AI) is projected to have an outsized impact on the future of work, warfare, education, communication and other pillars of life. From classrooms to boardrooms, AI is a hot topic and its implications range from mundane smartphone apps to controversial autonomous weapons and vehicles. What does the AI-enabled future look like—and what does it mean for society?
“You have to know the past to understand the present.”
- Carl Sagan, scientist, astronomer, science communicator
Simon Fraser University (SFU) communications professor Stephanie Dick is a historian of mathematics, computing and the mind, with a focus on AI. Her research explores what AI is, what it does, and the histories that shape its character and potential today.
Recently, she was co-editor of an issue of the British Journal for the History of Science called Histories of artificial intelligence: a genealogy of power. The volume grew from a Mellon Sawyer Seminar of the same name based at Cambridge University, which she organized with six other researchers.
It is one of the first collections of scholarship on the history of AI and investigates its entanglement in systems of politics, power and control.
We spoke to Professor Dick about her work.
Your journal introduction mentions that AI operates across time and place as a means to consolidate power. Who are some of the largest power players right now in AI, and what is their agenda?
Numerous scholars have pointed out that in many ways, AI reproduces the logics of European colonialism—in the sense that resources and labour from around the world are being gathered up and consolidated for the enrichment of elites in predominantly white and European society. The data monopolies of Meta (Facebook), Alphabet (Google), X, Microsoft (OpenAI) and others have consolidated more data than most institutions, communities, or even nations could ever hope to gather. This pre-dated the advent of generative AI, but with generative AI, that data is more valuable and powerful than ever.
For all its talk of innovation, AI is in this sense very conservative. It consolidates more wealth and power in the hands of the same people who already had it—predominantly white male technologists—and increasingly subjects the rest of us to the constraints and possibilities of their imaginations and decision making. AI not only consolidates more digital resources in the hands of these actors in the form of data but also uses increasing amounts of energy, water and infrastructure to power these resource-hungry technologies.
Why do you consider the history of AI more of a genealogy than a history?
There are many stories told about the history of AI and computing that feature a lot of lone genius inventors like Alan Turing and John McCarthy, and landmark events like the 1956 Dartmouth Summer conference for which the term “artificial intelligence” itself was coined. These stories are told by technologists and reproduced in the media. They have become almost myths—accounts that celebrate certain kinds of actors and certain kinds of inventions.
However, the history of AI and computing is not a history of the inventions of a small handful of scholars and technologists. These technologies are the product of much more complex and entangled phenomena, including industrialization and the rise of the management sciences, and they are only possible because of the labour of many often-erased classes of workers and actors. Academic historians seek to offer more nuanced and comprehensive accounts of where these technologies come from that do not reproduce the reductions and mythologizing of many insider and popular cultural accounts. Genealogy works well to capture these more pluralistic and complex senses of origins that we are trying to make more visible and accessible through this work.
Tell us about the “hidden labour” of AI. What are some of the unacknowledged ways that human work is required to make AI-powered systems practical?
In the 1980s and 90s, many actors in Silicon Valley framed the early internet, whether honestly or disingenuously, as a ‘commons’—but then, in the bait and switch of generative AI, it became the resource base on which AI models would be trained. For example, we know from recent reporting that Mark Zuckerberg of Facebook explicitly allowed the use of pirated, copyrighted works in the training of Meta’s AI, and even had people write code specifically to strip the language of copyright from that data, which was stolen twice over: first by pirates and then by Meta.
I think we will see increasing legal action and attempts at legal protections against the use of artists’ work in AI training. There are also new forms of value and valuation being created by technologists who seem not to appreciate what art is on its own, only the value it can create for their companies by being made into data. This was a central issue in the most recent film industry strike and will continue to put many technology companies and artistic communities at odds with one another.
On the theme of hidden labour, we also hoped to expand on historical scholarship that explores whose contributions to the history of technology are celebrated and rewarded and whose are hidden from view. From histories of the erasure of women’s contributions to early computing to more recent studies, like Lilly Irani’s Justice for Data Janitors and Kalindi Vora and Neda Atanasoski’s Surrogate Humanity, the erasure of people’s contributions and labour has been an integral part of the creation of origin myths for these technologies. We invited this community of scholars to explore the labour landscapes of AI and to reflect on where erasure continued to happen and whose labour was made visible and valuable.
Other recurring themes in the history of AI include cognitive injustice and disingenuous rhetoric. Can you elaborate on what these terms refer to?
We wanted to think broadly about thematic frameworks that would bring scholars together and highlight different possible perspectives. We noticed the vast mismatch between the stories told by technologists and the realities that unfold on the ground. For example, the “democratizing” effects of networked communication or AI supports are often touted loudly, but on the ground we have plenty of evidence to suggest that many large-scale digital tools are in fact eroding democratic institutions, exerting all manner of control over people’s perspectives and actions, and increasing wealth inequality.
We were interested in which technologists actually believe their own rhetoric about the promise of their tools, and when these stories are being told in bad faith by those who are obviously more motivated by personal profit and power than by the social impacts they are selling. Disingenuous rhetoric was our framework for diving into the different ways that technologists sell their tools and what we can learn about how those stories map onto realities both present and historical.
Cognitive injustice was our framework for thinking about what is happening to the very definition of “intelligence.” In my own work, I have remarked repeatedly that it is, in some ways, incredible how quickly the capacity for “intelligence” was ascribed to computers in the 1940s and 50s given that, for centuries, many people from the white European world—from scientists to politicians to writers to members of the general public—proposed that people of colour, and especially those who were enslaved and colonized by Europeans, were incapable of intelligence or “right reasoning.”
AI, and all the hype that surrounds it, reinforces very narrow visions of what constitutes intelligence, visions that further marginalize and devalue the deep wisdom and pluralistic forms of reasoning we find in the vast human world. This narrowing of “intelligence” to fit Western, capitalist and bureaucratic priorities is what we hoped to capture with “cognitive injustice.”
You mention that most computer technologies offer ‘freedom’ and ‘flexibility.’ However, what these tools promise also depends on a loss of agency, control and freedom for many. What groups does AI disadvantage?
A vast and growing body of literature has explored how computational tools reproduce social inequity in many ways: through historical bias in data that is then reproduced by data-driven predictions used in hiring, policing, courtrooms, welfare and more, or through the way that technology industries continue to foster sexual and racial discrimination within their own companies and cultures.
Moreover, AI expands and reinforces the empowerment of managerial classes over and against working classes: it represents a further subjection of “intelligence,” “knowledge,” and “knowledge work” to the principles and goals of management and industrialization, which emphasize efficiency and cost effectiveness. In the pre-industrial Western world, knowledge was seen as something that inhered in people, in context, in community—it was a facet of our interiority. But under industrial management, knowledge increasingly becomes a “product,” externalized and commodified. AI represents a culmination of this industrial and managerial logic, and it risks alienating most of us from what we know and how and why we know it.
Because AI is in many ways a deeply conservative technology that reinforces and reproduces historic bias and existing social order by further consolidating wealth and power with Western elites, it will continue to disadvantage those who have been disadvantaged within that system.
You recently co-hosted the 2024 Bruce and Lis Welch Community Dialogue, AI: Beyond the Hype—Shaping the Future Together. What are some of the common themes that emerged from the dialogue?
I was really interested in the survey that Fergus Linley-Mota and Aftab Erfan did in preparation for the event—Fergus presented these results. It is very clear that communities in B.C. are concerned about the impact that AI will have on the already profound struggle for gainful work and an affordable cost of living. It was also clear that a majority of those surveyed want to see AI regulated, but had very little trust in industry or government to do so. I was heartened to see that 51% of those surveyed said they trusted universities and academics (much more than industry or government), but it is clear that broader issues of trust are plaguing our capacity to come to a collective vision of what our AI future should look like.
Daniel Barcay, my co-host, and I are also both really interested in the contentious concept of “social good”: who decides what is good for society, and according to what principles? Any discussion we may have about the promise of AI requires that we navigate this contested ground. This raises interesting questions as well about how the sciences and humanities understand and learn from one another, and how the power to define social values is distributed across society.
Is there anything else you would like to mention?
AI has real promise, but it will serve us best in the hands of thoughtful, well-educated people and experts. Generative AI makes things up, gets a lot of things wrong and misses essential context, and much of this is because it has no access to the world, only to the data we give it, which is always partial, problematic and limited. I am deeply concerned that the current financial crisis in higher education, and the under-resourcing of teachers and schools at every level, is in part being justified by the promise that AI will be able to do all sorts of things we used to have to train people to do.
The opposite is true: AI will only serve us to the degree that we are educating ourselves. The best way to ensure a bright AI future and to make these tools as promising as possible is to invest even more in human education and training. We must optimize the human-AI relationship, not the AI itself. AI cannot and will not be able to serve our changing and pluralistic world if its proliferation is paired with collective deskilling. It is only by investing in our own learning and expertise that the true promise of AI can be realized.
For more: see Abstractions and Embodiments: New Histories of Computing and Society, co-edited with Janet Abbate, or visit Stephanie Dick’s faculty web page and Research Expertise Engine profile.