Sense-making Futures: A crisis of certainty

It is getting harder to distinguish between what is real and what is fake on the Internet. The strategies people used to evaluate the reliability of traditional print and broadcast media do not always work with today’s information technologies. These technologies enable new forms of expression and make it easy to create powerful forms of mis- and disinformation.

This scenario brief explores a future where most people cannot tell the difference between real and fake. The key policy implications of this scenario fall into five areas:

  • Communications: the distribution channels and strategies institutions and firms have relied on may break in a world where people distrust most information.
  • The economy: mis- and disinformation could drive economic volatility by undermining faith in the economic indicators used to evaluate firms and markets. It could also create a strong market for “certainty”.
  • Social cohesion: uncertainty about what is real and what is fake could change who and what people trust.
  • Accountability: it may become harder to discourage some forms of illegal or antisocial behaviour, if people distrust the evidence traditionally used to prove wrongdoing.
  • Democratic institutions: widespread mis- and disinformation could make it harder to trust democratic institutions and lead some people to support authoritarian movements.

Anyone who engages with the following areas might find this brief relevant to their work: security; environment; economy; research and development; health; reconciliation; rights and social justice; information ecosystems; education and training; culture and arts; social cohesion, identification and privacy; international affairs; and governance.

© His Majesty the King in Right of Canada, 2024

For information regarding reproduction rights: https://horizons.gc.ca/en/contact-us/

PDF: PH4-202/2024E-PDF
ISBN: 978-0-660-71303-8

DISCLAIMER
Policy Horizons Canada (Policy Horizons) is the Government of Canada’s centre of excellence in foresight. Our mandate is to empower the Government of Canada with a future-oriented mindset and outlook to strengthen decision making. The content of this document does not necessarily represent the views of the Government of Canada, or participating departments and agencies.

Introduction

Reactions to recent deep fakes generated by artificial intelligence (AI), like the image of the Pope in a puffy coat,1 show that it is getting harder for people to tell the difference between what is real and what is fake online. This challenge continues to intensify as trolls, conspiracy theorists, populist leaders, unethical PR firms, states, and malicious users of generative AI technologies flood the information environment with mis- and disinformation.

As discussed in Policy Horizons’ recent report, Sense-Making Futures (2023), this is just one of several technological and social disruptions to human sense-making. Collectively, these changes make it harder for people to be sure about anything that happens outside their personal experience. This appears to be a problem of misalignment. Inherited strategies for making sense of traditional media do not necessarily work with new information technologies that enable novel forms of expression and make it easy to create misleading or deceptive content.2

Making good sense was relatively simple in a world where information from experts like journalists, academics, and governments was considered reliable, where material culture and experiences were physical rather than digital, and where most informational content passed through editorial filters before distribution.

Today, AI makes audio and video easy to fake and social media gives mis- and disinformation global reach. Virtual spaces and goods separate experience and value from material reality, blurring the line between authentic and fake. Black-box social media algorithms designed to stir people up rather than advance facts or encourage civil debate increasingly determine what information people encounter.

Over the next five years, the challenge of telling real from fake could take various paths. Sense-making tools and practices could catch up to information technologies, leaving most people no worse off than before. Alternatively, a golden age of sense-making could emerge if strong demand for certainty drives the adoption of new assistive technologies, digital literacies, and business models.

Then again, the rapid pace of technology-driven growth and innovation in mis- and disinformation, combined with growing social fragmentation and political polarization, could create a crisis of certainty. That is the scenario this brief explores: a future where people are unable to tell what is real from what is fake because sense-making tools and practices have not kept up with the mis- and disinformation enabled by new technologies.

While this is not a desirable future, Policy Horizons’ strategic foresight suggests it is both plausible and potentially disruptive. Thinking about future scenarios helps decision-makers understand some of the forces already influencing their policy environment. It can also help them test the future readiness of assumptions built into today’s policies and programs. Finally, it helps identify opportunities to take decisions today that may benefit Canada in the future.

This is a future where many people cannot distinguish between what is real and what is fake because their information environment is full of synthetic and deceptive content. Such content is easy and cheap to create and almost impossible to detect with readily available tools. Here are five examples of what that future might look like:

  1. Traditional communications channels and evidence-based strategies relied on by institutions have broken down for three reasons: people distrust most information; they have abandoned the spaces where official messages circulate, such as media networks and government websites; and most people see information only after it has been filtered or manipulated by layers of AI.
  2. A series of hoaxes involving deep fakes has undermined trust in familiar economic performance indicators for firms and markets. This is creating a strong demand for tools and services that can produce trustworthy economic information.
  3. New social divisions have appeared as uncertainty about what is real and what is fake has spread. People are less trusting of traditional authorities and of those outside their immediate social, faith, or ideological community.
  4. The blurry line between real and fake has undermined confidence in types of evidence traditionally used to prove wrongdoing, such as photographs and audio and video recordings.
  5. Uncontrolled mis- and disinformation have weakened public confidence in democratic institutions and made political culture increasingly distrustful and confrontational.

Policy implications

The scenario outlined above would bring a range of policy-relevant challenges and opportunities. These implications fall into five policy areas: communications, the economy, social cohesion, accountability, and democracy.

The list of implications presented here is not exhaustive. Its purpose is to help policymakers broaden their models of the future. To this end, readers should ask themselves the following questions as they consider the implications:

  • How might widespread doubt about what is real and what is fake challenge specific policies or programs?
  • How would the assumptions built into today’s policies and programs hold up in the face of such uncertainty?
  • What actions could be taken now to maximize opportunities and lessen the challenges related to uncertainty in the future?

Communications

  • AI optimization may become the new search engine optimization (SEO) for those seeking to reach and persuade audiences. As AI tools become everyday partners in human sense-making, understanding how they sort, synthesize, and generate information may become the most important factor in designing successful communications.
  • Emotional factors may become even more central to effective communications as distrust of information makes people suspicious of rational or evidence-based messaging. Ethics may prevent some people from using fear, hatred, greed, or nationalism to get their message across. But they may find it hard to compete with actors who feel no such restraints.
  • Metaverses may offer new ways to connect with and inform the public through embodied experiences, sensations, and evidence-based modelling. They may also lead to new kinds of mis- and disinformation, not to mention significant technical, privacy, and security hurdles for the public sector.

Economy

  • Fake videos of CEO press conferences, misleading financial statements, deceptive press releases, and fraudulent ratings reports could shake confidence in stock markets, damaging businesses and the economy.
  • Firms and brands may struggle to sustain marketing campaigns, reputations, and market shares in the face of mis- and disinformation campaigns orchestrated by rivals or activist groups.
  • Market instability may create unexpected opportunities for newcomers to compete effectively against established players. It could also push investors toward safer options, hurting innovative start-ups and small and medium-sized enterprises hungry for investment.
  • Hostile states and other malicious actors could attack rivals with mis- and disinformation designed to undermine the base of knowledge supporting their markets and economies.
  • Uncertainty about the health of the economy caused by distrust of key indicators could create a strong demand for reliable information. This could inspire businesses and institutions to develop products and services that offer reliable data or protect against mis- and disinformation.
  • A “certainty sector” that trades in reliable information could emerge. This could enhance decision-making and lower anxiety among the public, with knock-on benefits for healthcare, democratic institutions, and the economy, among other areas.

Social cohesion

  • If the public embraces commercial AI assistants as the solution to uncertainty, the firms that build and sell AI software might become the ultimate authorities on what is considered fact, truth, and good sense.
  • Individual AI platforms may present distinct versions of “reality” for users thanks to programmer bias, differing data sets, or the agenda of a CEO. The existence of multiple misaligned “realities” could deepen social distrust and fragmentation.
  • Anxiety caused by uncertainty about what is real could lead many more people to embrace reassuring “truths” offered by a range of ideological, philosophical, or spiritual movements. Existing commitments to pluralism may not be enough to prevent new conflicts among groups devoted to competing “truths.”
  • Increasing disagreement over facts and values could undermine consensus on issues like reconciliation, climate change, and rights for lesbian, gay, bisexual, transgender, and queer/questioning (LGBTQ) people. Subsequent social and political conflict could leave Canada less resilient in the face of future challenges.

Accountability

  • It may be harder to use evidence from automated surveillance systems to convict criminals in a world where fake videos are everywhere. Such systems may also lose their deterrent effect in both public and private spaces.
  • It could become much harder to hold public figures, such as politicians, celebrities, and CEOs, accountable for unethical or immoral actions in a future where evidence of bad behaviour can be convincingly dismissed as “fake news”.
  • The same is true for people outside the public eye. A range of antisocial behaviours, including harassment, bullying, vandalism, petty theft, and hate speech, may become more common in a low-accountability future.

Democracy

  • Foreign rivals could use internal doubts about the integrity or health of Canada’s democratic processes to spark political conflict or undermine Canada’s international status.
  • Authoritarian or anti-government movements may grow when trust is low. Tax strikes, occupations, or illegal exploitation of Crown lands and resources might become more common, straining public safety systems.
  • A period of democratic decline and greater political conflict may have an upside. It could renew belief in the value of robust democratic institutions, leading to higher levels of participation and more demand for tools to verify political statements.

Conclusion

Many more people may struggle to tell real from fake in a future where new tools and mindsets fail to meet the challenge of technology-driven mis- and disinformation. This scenario creates a broad range of policy challenges. It highlights the growing role of technology companies and their products as intermediaries between governments and the people. It reveals new vulnerabilities to attacks from malicious actors. It indicates governments may struggle to get their messages across, which could make it harder to help people and businesses make sound economic, social, and democratic decisions. Opportunities are less obvious in a high-uncertainty future. However, rising demand for trustworthy information could generate economic opportunities for businesses and institutions.

The future laid out in this scenario is not inevitable. New tools or commitments from tech firms and platforms may drastically reduce mis- and disinformation and its impacts. This could lessen feelings of uncertainty. Such advances might even usher in an era of radically improved decision-making that improves economic growth, repairs the social fabric, and re-energizes democracy. Yet the crisis of certainty explored above remains a plausible future. Overlooking it could leave policymakers unprepared for the potentially dire consequences of a world where people cannot tell what is real from what is fake.

Project team

Christopher Hagerman, Project Lead and Senior Analyst, Foresight Research
Jennifer Lee, Analyst, Foresight Research
Simon Robertson, Director, Foresight Research
Steffen Christensen, Senior Analyst, Foresight Research
Tieja Thomas, Manager, Foresight Research
Kristel Van der Elst, Director General

Communications

Mélissa Chiasson, Communications Advisor
Laura Gauvreau, Manager, Communications

Endnotes

  1. D. Bennet, “AI Deep Fake of the Pope’s Puffy Coat Shows the Power of the Human Mind,” Bloomberg, last modified Apr. 6, 2023, https://www.bloomberg.com/news/newsletters/2023-04-06/pope-francis-white-puffer-coat-ai-image-sparks-deep-fake-concerns.
  2. R. Rini, “Deepfakes and the Epistemic Backstop,” Philosopher’s Imprint, vol. 20, (2020): 1-16, https://philpapers.org/archive/RINDAT.pdf. For a contrary perspective, see J. Habgood-Coote, “Deepfakes and the Epistemic Apocalypse,” Synthese vol. 201 no. 3 (2023): 1-23, https://philpapers.org/archive/HABDAT-2.pdf.
