
Three ways ChatGPT could support strategic foresight

[Illustration: a phone on a cloudy background displaying a conversation between a person and a chatbot.]

In late 2022, the world saw the release of ChatGPT, a powerful language-processing tool developed by OpenAI. As many started to experiment with ChatGPT to improve their workflows—or just write awful poetry—others began to raise questions. How was it trained? How could it reshape our information ecosystems? How might it change the way we generate and trust content? As a strategic foresight professional, I wondered if ChatGPT could support foresight work, which uses various methods of analyzing signals of change to envision plausible—and often disruptive—futures.  

I have identified three ways it could support foresight:

  1. Scenario development: With the right prompt, ChatGPT can generate potential scenarios about the future. For example, asking ChatGPT to “write a story about a future in which…” produces draft scenarios that can be refined with follow-up prompts (e.g. “make it more dystopian”); see the sketch after this list for one way such prompts could be scripted. These outputs can then be revised by human experts and used to help organizations anticipate and prepare for a range of futures.
  2. Cross-impact analysis: This method of strategic foresight involves evaluating the potential impact of different factors on each other. Asked to explain how various factors might affect one another (e.g. “How might longer lifespans intersect with AI-driven automation in Canada?”), ChatGPT can quickly generate a large number of ideas.
  3. Outlines: ChatGPT can propose outlines or even schedules for foresight reports, articles, and workshops, which could save time during the conceptualization phase (e.g. “Generate a schedule for a one-hour strategic foresight workshop on the future of social benefits in Canada. Include two activities.”).
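
These prompts can be typed directly into ChatGPT’s chat interface, but teams that want to generate many drafts at once could also send them through OpenAI’s API. The sketch below is a minimal, hypothetical example: it assumes the openai Python package (version 1 or later), an API key set in the OPENAI_API_KEY environment variable, and an illustrative model name, prompt, and temperature that are not prescribed anywhere in this post.

    # Minimal sketch: requesting a draft foresight scenario via OpenAI's API.
    # Assumes the `openai` Python package (v1+) and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = (
        "Write a story about a future in which most work in Canada "
        "is automated by AI."
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name; substitute as needed
        messages=[{"role": "user", "content": prompt}],
        temperature=0.9,        # higher temperature for more varied scenarios
    )

    print(response.choices[0].message.content)  # draft scenario for human review

As with the chat interface, whatever comes back is a first draft at best; a human analyst should review and revise it before it informs any decision.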

However, I would also highlight a few key limitations:

  1. Inaccurate data: Text generated by ChatGPT often looks plausible, but can be rife with errors and inaccurate sources. Human experts should always review ChatGPT’s outputs before use.
  2. Outdated data: ChatGPT’s dataset is currently limited to information dating up to 2021. Asking ChatGPT questions about the future may generate responses such as, “I’m sorry, but I am not able to browse the Internet and my knowledge was cut off in 2021, so I am not able to provide you with information about current or future events.”
  3. Cone of plausibility: ChatGPT draws on data about how the world worked in the past, so the scenarios it generates tend to resemble forecasted or expected futures rather than possibilities rooted in novelty or in new dynamics reshaping the world. As a result, the cone of plausibility of ChatGPT-generated scenarios is quite narrow.

Although next-generation platforms are already integrating real-time information, ChatGPT’s current dataset is a considerable barrier to good foresight, which must draw on emergent information and signals to propose a range of alternative futures. While we may use ChatGPT to support certain aspects of foresight work, human analysts tuned into the real-time changes afoot may still be best suited to help organizations prepare for different futures, and make more informed decisions in the present.

Nicole Rigillo

Nicole is a Senior Foresight Analyst at Policy Horizons. She holds a PhD in Anthropology from McGill University. Nicole has led future-oriented research projects for the public and private sectors in the areas of health, technology, and governance. When settled in the present, she enjoys films and books featuring a strong female lead, wandering through Montreal’s green alleyways, and participating in local circular economies.
