Foresight on AI: Policy considerations

Foreword
Artificial intelligence (AI) is rapidly evolving, presenting both opportunities and challenges for Canada. As AI continues to advance, it is crucial to understand its potential impacts on governance, society, and the economy.
Policy Horizons Canada (Policy Horizons) is dedicated to exploring how AI might shape our future. By engaging with a diverse range of partners and stakeholders, we aim to identify key areas of change and support policy and decision-makers as they navigate this dynamic landscape.
On behalf of Policy Horizons, I extend my gratitude to everyone who has shared their time, knowledge, and insights with us.
We hope you find this report thought-provoking and valuable.
Kristel Van der Elst
Director General
Policy Horizons Canada
Introduction
This Foresight on AI report complements numerous reflections on AI futures across the Government of Canada. It aims to support decision makers – involved either in AI implementation or in policy setting related to AI – by exploring factors that could shape the evolution of AI, in terms of technical capabilities, adoption, and use, and which might be “beyond the horizon.” The report does not provide specific policy guidance and is not meant to predict the future. Its purpose is to support forward-looking thinking and inform decision making.
As part of this work, Policy Horizons has conducted a literature review, researched ongoing developments in the field, engaged with policy analysts and decision makers within the government, and held extensive conversations with key AI experts.
The 16 insights captured in this report explore possible future capabilities of AI, longer-term risks and opportunities, and uncertainties related to policy-relevant assumptions. They can help readers understand the impacts AI could have on governance, society, and the economy. When engaging with this report, readers are invited to ask:
- How will future advancements in hardware, software, and interfaces create new opportunities and risks for Canada and its allies?
- Where could AI bring the biggest and most unexpected disruptions to governance, society, and markets?
- What assumptions about AI’s development and deployment in the future may need to be challenged or further explored before they form the basis for decision making?
The 16 insights are summarized below, under 16 insights about factors shaping the future of and with AI, and expanded upon in the remainder of the document.
Defining AI
There are many ways to define artificial intelligence, along with much debate about whether or not the term should even continue to be used.Footnote 1, Footnote 2, Footnote 3 For the purpose of this work, Policy Horizons Canada uses the Organisation for Economic Co-operation and Development’s (OECD) definition of an AI system as “…a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”Footnote 4
16 insights about factors shaping the future of and with AI
- AI could break the internet as we currently know it
Emerging AI tools have the potential to undermine the advertising business model that has served as the foundation for the internet for much of the last 20 years. The internet in the age of AI could be very different, one where people have more agency and control, but which is also less useful and secure.
- AI could empower non-state actors and overwhelm security organizations
In the future, more accessible and versatile AI will have implications for security. Non-state actors — friendly and hostile — will have access to capabilities traditionally held only by states. They might be able to deploy them faster than states, keeping security organizations in a constant race to catch up.
- Lack of trust in AI could impede its adoption
How trust in AI will evolve is unknown. Frequent or unaddressed failures in AI systems — or one significant failure — could erode trust and impede adoption, jeopardizing entire industries. Emerging forms of certification, verification, and efforts to rectify harms could encourage user trust and uptake.
- Bias in AI systems could remain forever
Bias is a feature of both human and AI decision making. As the data used to train AI is often biased in hard-to-fix ways, growing reliance on AI in decision-making systems could spread bias and lead to significant harm. Bias may never be eliminated, in part due to conflicting perspectives on fairness.
- Using AI to predict human behaviour may not work
While AI sometimes makes impressive predictions about human behaviour, many are inaccurate. Basing decisions on these predictions can have dire consequences for people. It might be impossible to improve the technology to a level where its benefits outweigh the costs.
- AI could become “lighter” and run on commonly held devices
Rather than a few large AI models running on cloud-based supercomputers, future AI models could be diverse and customized, some of them running on small, local devices such as smartphones. This could make regulation and control more complicated and multiply cybersecurity risks.
- AI-driven smart environments everywhere
Many products could be sold with AI as a default, creating ‘smart’ environments that can learn and evolve to adapt to the needs of owners and users. It may be difficult for people to understand the capabilities of smart environments, or to opt out of them.
- AI further erodes privacy
As AI-enabled devices collect more data online and in real life, efforts to turn data into new revenue streams could butt up against more privacy-conscious attitudes and devices. A new status quo may emerge that looks very different from the opaque way that users today exchange their data for free services.
- Data collected about children could reshape their lives in the present and future
Jurisdictions are expressing concerns about children’s privacy as AI technologies become more ubiquitous. Pervasive data collection in childhood could offer new opportunities for accessibility and education, but also worsen existing vulnerabilities, erode privacy, and reshape adult lives in the future.
- AI could reshape our ways of relating to others
AI tools could mediate more social interactions — in public or professional settings, or in private with friends, family or romantic partners. These tools could be used to flag suspicious or harmful behaviour, and help avoid social blunders — but they could also assist in manipulating and preying on others.
- AI could delay the green transition
AI uptake is driving up demand for energy and water globally. This could potentially delay the green transition, though Canada could benefit from an increased demand for greener data centres.
- AI could become more reliable and transparent
In the future, AI could have improved reasoning skills, allowing it to produce better analyses, make fewer factual errors, and be more transparent. However, these improvements may not be enough to overcome problems related to bad data.
- AI agents could act as a personal assistant with minimal guidance
In the future, people could have a general-purpose AI agent acting as a personal assistant, capable of performing multi-step tasks for its user, 24/7. Impacts could include improved access to task automation, greater productivity, disruption of advertising-based business models, and unforeseen harms.
- AI in assessments and evaluations
AI is disrupting established assessment and evaluation processes, such as job screenings, grant application evaluations, peer review, and grading. Screening processes could see a significant increase in applicants, and new forms of evaluation could emerge that focus less on written work as a measure of competence.
- AI and neurotechnology
AI-powered neurotechnologies are allowing people to monitor and manipulate the activities of their brain and nervous system. Further developments could bring major advances in health and wellness, but also raise significant privacy, ethical, and social concerns.
- AI could accelerate the development and deployment of robots
Improving AI and falling costs are allowing “service robots” to proliferate outside of industrial contexts. AI companies are developing humanoid robots with a wide range of cognitive and physical capabilities, bringing change to white- and blue-collar jobs.
Insight 1: AI could break the internet as we currently know it
Emerging AI tools have the potential to undermine the advertising business model that has served as the foundation for the internet for much of the last 20 years. The internet in the age of AI could be very different, one where people have more agency and control, but which is also less useful and secure.
Today
The internet is an integral part of Canadians’ everyday lives. Young people, in particular, rely on it as a source of friendshipsFootnote 5 and information.Footnote 6 Across the wider population, 95% of Canadians over the age of 15 use the internetFootnote 7 and 75% engage in online bankingFootnote 8 and shopping.Footnote 9 Nearly half of all households have internet-enabled smart devices.Footnote 10
While more websites exist than ever before, most people’s experience of the internet is dominated by a small handful of massive companies. As an online joke puts it, “the internet is 5 giant websites showing screenshots and text from the other 4.”Footnote 11 Today, 65% of all internet traffic is to domains owned by Alphabet, Meta, Netflix, Microsoft, TikTok, Apple, Amazon, or Disney.Footnote 12 Google accounts for 91% of all internet searches.Footnote 13
Advertising funds the provision of free online services and the online creator economy.Footnote 14 Alphabet, Meta, Apple, Microsoft, and Amazon each earn billions from online advertising.Footnote 15 Companies invested US$46.7 billion in 2021 in optimizing their website design to rank more favourably on search engines, get more traffic, and generate more ad revenue.Footnote 16
AI-generated content is rapidly becoming more realistic and human-like. Until recently, most online content was human-generated as computer-generated content was of generally low quality. This began to change in 2022 with the release of Dall-E 2, Midjourney, and ChatGPT. Large language models (LLMs) can produce high-quality human-like text. AI image generators can produce photorealistic images. AI video generators are advanced enough to interest Hollywood.Footnote 17 Voice generators have made popular AI song covers.Footnote 18 While most AI-generated content can still be identified through subtle telltale signs, it is becoming harder to distinguish from human-made content.
AI is not yet playing a significant role in undermining cybersecurity, but incidents are increasing. 70% of Canadians reported a cybersecurity incident in 2022, up from 58% in 2020. Although these are still mostly unsophisticated spam and phishing attempts,Footnote 19 fraud cases involving deepfakes increased 477% in 2022.Footnote 20 Scammers have started to make fake ransom calls using AI-generated voices of the target’s loved ones.Footnote 21 Deepfake-related institutional fraud cases are also emerging, leading to millions of dollars in potential losses for firms and governments.Footnote 22
Futures
AI-powered agents and search engines could transform how people interact with the internet. Instead of users going to specific websites, AI tools could create custom, personalized interfaces that are populated with content from across the internet. They could also help users find niche content and communities beyond major social media platforms.
These tools could disrupt internet ad-based business models. Should a significant portion of web traffic be made up of AI bots pulling information for their users, websites and search engines may earn less revenue from showing ads. They may need to find other ways of generating money, such as introducing subscriptions, paywalls, or the direct monetization of user data.
The internet could become dominated by AI-generated content, which may be indistinguishable from human-generated content. Online platforms could create multimedia content tailored to individual users. The internet could be awash with AI-generated websites filled with spam, misinformation, bots, and fake product reviews. It could become difficult for users to differentiate quality content from junk. If AI factchecking does not improve, this could become even more challenging.
The general sense of trust and security that Canadians feel online could be greatly diminished. When video calls can be convincingly deepfaked, it could be challenging for a person to know if a new online friend is a real person or an AI phishing scam. AI-powered disinformation campaigns could become more sophisticated, further undermining trust in institutions. As AI tools become more accessible and powerful, anyone with even the tiniest online presence could be exposed to a growing risk of harm.
Implications
- AI search engines may be held accountable for results that are displayed to users. This could have legal repercussions and damage trust, particularly if results are erroneous or even dangerous
- Content and services that were once free may be put behind paywalls as AI tools undermine the online advertising business model
- Websites may attempt to directly monetize user data and content, for example by licensing it as training material to AI companies
- Sponsored content, product placement, and other forms of advertising may become more common
- While human-generated content is unlikely to disappear, content creators may struggle to compete with cheap and tailored AI-generated content
- Content creators may feel more pressure to monetize their audiences
- Human taste and curation could become highly valued. Content creators may give way to content curators, who amass followings based on their curation of online content
- Unique, personalized content could lead people to feel isolated with fewer cultural touchpoints
- AI tools may shift control of the design, layout, and experience of a website from web designers to users. This could make it easier for users to avoid the addictive or manipulative designs known as “dark patterns”
- Navigating the internet without AI tools could become very difficult
- Websites may cease to exist as they are currently known, instead becoming repositories of data to be scraped by AI. Businesses may no longer need web designers
- Distrust could be the prevailing attitude online as cybersecurity risks increase and AI-generated content dominates the internet
- AI phishing schemes could become more sophisticated
- People may become more selective with what information they share online
- New authentication measures may emerge in attempts to restore trust online
- If trust in AI continues to decline (see Insight 3), people may fear they are being manipulated by AI-tailored feeds of content
- If search engines cannot effectively sort quality content from AI spam, they may no longer be effective go-to sources for all queries
- People may rely on a few trusted sources for information online
- Existing models of e-commerce may be disrupted in unforeseen ways
Insight 2: AI could empower non-state actors and overwhelm security organizations
In the future, more accessible and versatile AI will have implications for security. Non-state actors — friendly and hostile — will have access to capabilities traditionally held only by states. They might be able to deploy them faster than states, keeping security organizations in a constant race to catch up.
Today
AI is lowering barriers to entry and reducing the cost of conducting attacks.Footnote 23 For example, AI can help someone with limited programming skills write malicious software.Footnote 24 Leading open-source AI models are only marginally less capable than what is currently considered the most powerful general-purpose AI, GPT-4 Turbo.Footnote 25 Their general-purpose nature makes them equally useful for all types of problems, including harmful activities. It is uncertain who will use AI more effectively and quickly — government bodies among rival nations or non-state actors.Footnote 26 However, AI could empower non-state threat actors, corporations, or nations that are not bound by legal or ethical constraints and are willing to apply the technology in ways other states cannot.
Futures
Many new actors could access large-scale monitoring in the future. AI’s ability to analyze large amounts of open-source data could provide new actors with the ability to track and predict the movement of police and military forces.Footnote 27 AI tools can help write malicious computer code, making cyber defence more difficult. Similarly, ChatGPT has been used to create evolving malware, malicious software that can change its original code to evade cyber defences.Footnote 28
AI can also be used in unconventional attacks, to lower the cost of inflicting physical harm or attacking infrastructure. For example, AI can facilitate the process of 3D printing dangerous parts like those needed to make nuclear weapons.Footnote 29 AI could also be used to automate swarms of low-cost drones to overwhelm air defences,Footnote 30 providing an advantage to smaller actors who wish to target urban settings or confront modern militaries. Should AI greatly increase access and automate harm, this could increase pressure on the security sector and change how it keeps citizens safe.
Implications
- Open-source AI could empower non-state threat actors with new tools and erode advantages traditionally held by states, such as surveillance and monitoringFootnote 31
- There may be constraints on the ability of law enforcement agencies to gather intelligence as compared to non-state actors
- More communities could challenge the use of AI by law enforcement agencies
- Innovative use of AI could surpass the ability of defence and security organizations to adapt. Failures in public safety could weaken institutional trust or change public attitudes on appropriate government use of AIFootnote 32
- Private AI firms could become the main players in the cybersecurity and intelligence sectors, including in spaces traditionally seen as within the public domain
Insight 3: Lack of trust in AI could impede its adoption
How trust in AI will evolve is complicated and unknown. Frequent or unaddressed failures in AI systems — or one significant failure — could erode trust and impede adoption, jeopardizing businesses that depend on AI. Emerging forms of certification, verification, and efforts to rectify harms could encourage user trust and uptake.
Today
Trust is central to acceptance of AI and, in Canada, trust in AI is declining.Footnote 33 The CanTrust index shows that Canadians’ trust in AI declined by 6% between 2018 and 2024.Footnote 34 The global IPSOS AI Monitor shows that the Anglosphere, including Canada, has less trust in AI than other regions: for example, 63% of Canadians are nervous about products and services that use AI compared to only 25% of people in Japan.Footnote 35
Trust in AI depends on the context in which it is used. For example, trust is highest for simple tasks such as adjusting a thermostat, and lower for tasks connected to personal safety such as self-driving cars.Footnote 36 Public trust in self-driving cars is low and falling. In 2023, only 22% of Canadians reported trusting self-driving cars and other AI-based driverless transportationFootnote 37 — compared to 37% of Americans, which is down from 39% in 2022 and 41% in 2021.Footnote 38
Despite declining trust, use of AI tools in Canada is growing. A 2024 Leger poll found that 30% of Canadians now use AI, up from 25% a year ago. Younger demographics are using AI more than older demographics — 50% of those 18-35 report using AI, compared to only 13% of those 55 and older.Footnote 39
Risks and failures arising from AI technologies have captured public attention frequently over the past year. In some instances, fine-tuning and testing of AI tools was done after public roll-out, in stark contrast to clinical drug trials, which require long periods of testing before release to the public. New initiatives to capture and report on AI incidents have emerged, such as the AI Incident Database and the OECD AI Incidents Monitor.Footnote 40, Footnote 41
Adoption of AI can feel forced, rather than chosen through personal agency. The current push to integrate AI everywhere can mean that valid concerns around data security, fairness, environmental consequences, and job security are downplayed.Footnote 42 Forcing people to adopt AI in their everyday lives without also making efforts to make the technology more trustworthy can limit its potential transformational impacts.Footnote 43 The current backlash against the increasing use of AI facial recognition technology in airports is one example of forced adoption in the absence of trust.Footnote 44
Futures
Improvements in technology, practices and systems could help to build trust in AI. For example, new capabilities such as neuro-symbolic AI, which combines neural networks with rules-based symbolic processing, promise to improve the transparency and explainability of AI models. Firms’ adoption of new labelling, certification, or insurance models could offset some of the mistrust in AI.Footnote 45, Footnote 46 And some providers are now developing ways to assess AI models for safety and trustworthiness, offering warranties to verify their performance.Footnote 47, Footnote 48 In the future, AI systems could give a confidence interval for everything from search results to self-driving vehicles, supporting users in weighing the risks and uncertainties involved.Footnote 49
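To illustrate the last point, here is a minimal sketch (in Python, with hard-coded, hypothetical values) of how an AI system might pair every answer with an explicit confidence range rather than a bare prediction:

```python
# A hypothetical sketch only; the answer and interval are hard-coded here, but
# in a real system they might come from calibration or model ensembling.

def answer_with_uncertainty(question: str) -> dict:
    return {
        "question": question,
        "answer": "about 12 minutes",
        "confidence_interval": (9, 16),  # e.g. estimated minutes until arrival
        "confidence_level": 0.90,        # how often the interval should be right
    }

print(answer_with_uncertainty("How long until the next bus arrives?"))
```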
More strategic and thoughtful deployment of AI could enhance trust. In the future, AI will likely become the right solution to some problems but not others. Trust in AI could be enhanced if people perceive that it is making their lives easier,Footnote 50 rather than replacing tasks they enjoy or seeming like a solution in search of a problem. Individual familiarity with AI may build trust in one area of work or life, without necessarily translating to increased levels of trust in the overall AI ecosystem.Footnote 51
High-profile failures and growing appreciation of risks could erode trust. Skepticism and mistrust could grow as the risks of AI become better known and documented and as more high-impact tasks are delegated to AI. Groups that are negatively impacted by AI are actively opposing its use in some domains, such as writers and artists who are collectively organizing to limit what they see as the destructive power of the technology.Footnote 52 Mistrust could be driven not only by narratives that describe AI as an extinction-level threat, but also by its association with growing inequality.Footnote 53 Similarly, high-profile technological failures could cast shadows of mistrust into the future. For example, public trust and support for nuclear power in Canada declined significantly in the wake of the Fukushima Daiichi nuclear accident in 2011, and public concerns over nuclear safety hindered the sector’s growth for years.Footnote 54 A similar loss of trust in AI technologies, such as self-driving cars, could jeopardize not just one company, but entire industries.
Implications
- Lack of trust could be a major impediment to the integration of AI in some sectors
- A single high-profile outlier incident involving an established AI system could disproportionately harm trust in and uptake of AI — for example, a financial crisis triggered by AI-generated content and high-frequency algorithmic trading
- People could trust AI to perform certain tasks more than they trust other humans
- Differing levels of trust in AI across groups or use cases could unite people across typical societal divisions or polarize them in new ways
- Excessive trust in some AI outputs could increase misinformation and disinformation, with consequences for democracy and societal cohesion
- A poor experience with one AI system could lead to distrust in other AI systems, while a positive experience with one AI tool could lead to increased trust in other AI applications
- Case law and legislation that determines accountability for decisions taken by or with AI could influence trust and adoption
- The emergence of new labels and certifications could affect consumer confidence in AI, such as warning labels, or those analogous to fair trade or organic produce labelsFootnote 55
- Accountability and responsibility regimes will need to be clarified, and many systems will need to determine who is accountable for the failures of AI
Insight 4: Bias in AI systems could remain forever
Bias is a feature of both human and AI decision making. As the data used to train AI is often biased in hard-to-fix ways, growing reliance on AI in decision making systems could spread bias and lead to significant harm. Bias may never be eliminated, in part due to conflicting perspectives on fairness.
Today
Bias in AI is seen as a major issue capable of automating discrimination at scale in ways that can be difficult to identify. While human decisions are also biased, one of the major risks of automating high-stakes decisions is that these become more widespread and less detectable, increasing the possibility of systemic errors and harms. While a single biased manager could decide to give higher interview scores to the few job applicants that look and speak like them, a biased AI model could have a similar effect on potentially thousands of people across organizations, sectors, or countries.
Many AI products claim to be less biased than human decision makers, but independent investigations have revealed systematic failures and rejections.Footnote 56 For example, an audit of 2 AI hiring tools found that the personality types they predicted varied depending on whether an applicant submitted their CV in Word or raw text.Footnote 57 Similar tools have discriminated against womenFootnote 58 or people with disabilities.Footnote 59 Bias is embedded in AI at many stages of its lifecycle — training data, algorithmic development, user interaction, and feedback.Footnote 60
Bias may be impossible to eliminate because the data used for training AI models is itself often biased in ways that cannot easily be fixed. Controlling results can also cause problems. For example, an AI model that learns to discard racially sensitive wording might omit important information about the Holocaust or slavery.Footnote 61 Further, algorithms often cannot satisfy different notions of fairness at the same time, leading to consistently different results for certain groups.Footnote 62, Footnote 63, Footnote 64
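The tension between fairness definitions is partly arithmetic. The toy Python sketch below, with invented numbers, shows a classifier whose precision (“predictive parity”) is identical across two groups with different base rates, while their selection rates and false positive rates diverge:

```python
# A toy example with invented numbers: the same classifier applied to two
# groups with different base rates. Precision ("predictive parity") is equal,
# but selection rates and false positive rates are not.

def rates(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    return {
        "selection_rate": (tp + fp) / total,    # compared for demographic parity
        "false_positive_rate": fp / (fp + tn),  # compared for equalized odds
        "precision": tp / (tp + fp),            # compared for predictive parity
    }

group_a = rates(tp=40, fp=10, fn=10, tn=40)  # base rate of the outcome: 50%
group_b = rates(tp=16, fp=4, fn=4, tn=76)    # base rate of the outcome: 20%

for metric in group_a:
    print(f"{metric}: group A = {group_a[metric]:.2f}, group B = {group_b[metric]:.2f}")
```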
Futures
In a future where bias can never be eliminated — whether human or algorithmic — societies may need to rethink current ideas about fairness and how best to achieve it. People do not necessarily agree on the meaning of “fair.” For example, some consider affirmative action to be fair while others do not. Institutions could adopt standards intended to distribute resources — jobs, grants, awards, or other goods — in ways that explicitly attempt to repair historical injustices. Organizations seeking to avoid systemic bias may use an “algorithmic pluralism” approach, which involves a variety of decision-making elements and ensures that no single algorithm severely limits opportunity.Footnote 65
Efforts could be made to reduce bias in AI systems to an acceptable level, though eliminating it entirely could be impossible. Pushback may continue against using AI technologies in certain sensitive domains, such as policing or hiring. Alternatively, these technologies could continue to improve and become less biased in the future. Either way, there will likely be a continued push to reduce bias in AI technologies.
Implications
- Systemic harms or failures could become institutionalized in contexts where single algorithms are allowed to make bulk decisions about people’s access to certain resources (e.g. jobs, loans, visas)
- Human biases could become greater among those who use AI systems, as people learn from and replicate skewed AI perspectives, carrying bias with them beyond their interactions
- Disagreements about the best ways to code for algorithmic fairness may result from different definitions of what fairness actually means. This could lead to completely different results for similar technologies or systems
- The inability to eliminate bias from algorithms could ultimately lead to political, social, or economic divisions
- If decisions become more distributed, including various algorithms and humans at different points in a process, it could be difficult to make discrimination claims or identify a responsible party for discrimination
- High-profile cases of algorithmic discrimination could lead to loss of trust in AI decision-making systems, particularly in policing and healthcare, and an increase in litigation
Insight 5: Using AI to predict human behaviour may not work
While AI sometimes makes impressive predictions about human behaviour, many are inaccurate. Basing decisions on these predictions can have dire consequences for people. It might be impossible to improve the technology to a level where its benefits outweigh the costs.
Today
More governments and institutions are using AI to predict human behaviour and make decisions about individuals. For example, more than 500 schools in the U.S. use an AI model called Navigate to predict student success.Footnote 66 Social workers in the U.S. have used AI to predict which child welfare calls need further investigation.Footnote 67 Both are examples of “predictive optimization”.Footnote 68 Notable AI engineers have argued that predictive optimization algorithms are based on faulty science, with AI predictions being only slightly more accurate than the random flip of a coin.Footnote 69 Despite this, they continue to be used because they outsource complex work like developing decision-making rules (e.g. what criteria to investigate for fraudulent behaviour or how to decide if a child is at risk of abuse). Human-generated decision-making rules can appear subjective and inaccurate compared to those of predictive AI models, which claim to reflect objective patterns in the real world.
Box #1:
Predictive optimization
The use of AI to predict future outcomes based on historical data, to make decisions about individuals.
Predictive models are not always right. Predictive AI models are plagued by many issues, including errors due to a mismatch between training data and deployment data. Because predictive AI must be trained on past data, it cannot account for emergent and complex variables in the world and in individual human behaviours. Models may be unable to account for new and unexpected drivers. Moreover, AI cannot filter out the effects of racist real-world practices such as disproportionate policing in Black neighbourhoods or communities, which leads to increased false arrests.Footnote 70 This has led to inaccurate predictions for vulnerable people.Footnote 71
Predictive AI models cannot understand why real-world behaviour differs from their predictions. Models may assume that individuals will act rationally and consistently or follow the same rules and patterns of humans in aggregate. Models may not address the structural factors that account for differences between predicted and real-world behaviours. A focus on prediction may hinder the discovery of processes that can lead to new behaviours, such as when simplifying the language used on court summons reduced the rate of people failing to appear in court.Footnote 72
While sometimes justified based on cost savings, some governments have faced significant repercussions after using predictive optimization models. For example, in 2021, the Dutch government resigned over a scandal involving the tax authority’s adoption of a self-learning AI to predict childcare benefits fraud.Footnote 73 The AI erroneously identified tens of thousands of families as owing excessive debts to the tax authority. Over 3,000 children were removed from their homes and many families remain separated. The scandal had significant repercussions, with families forced into debt, losing their homes, and some victims dying by suicide.
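To illustrate why “slightly more accurate than the random flip of a coin” can be a weak standard, the sketch below compares an invented model accuracy against trivial baselines for a rare outcome; all numbers are made up for illustration:

```python
# Invented numbers, for illustration only: when the outcome being predicted is
# rare, a trivial baseline can beat a model that is "better than a coin flip".

base_rate = 0.05                      # suppose 5% of cases are truly high-risk
coin_flip_accuracy = 0.5
model_accuracy = 0.55                 # "slightly better than a coin flip"
never_flag_accuracy = 1 - base_rate   # trivial baseline: flag no one

print(f"Coin flip:          {coin_flip_accuracy:.0%}")
print(f"Predictive model:   {model_accuracy:.0%}")
print(f"'Never flag' rule:  {never_flag_accuracy:.0%}")
```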
Futures
In the future, predictive optimization may be used in some jurisdictions but not others. It could be forbidden within some jurisdictions, particularly where governments have faced high costs and scrutiny due to failures. That could still allow the private sector to expand its currently opaque uses of predictive optimization.Footnote 74 Other jurisdictions may continue to use predictive optimization algorithms despite the risks. This could be because those affected are less able to pursue justice, or because their governments are not bound by democratic norms. Others may view predictive optimization as an inevitably imperfect tool, but one whose use can be justified due to cost savings. Institutions — including governments — that take up AI for predictive optimization and find that the costs outweigh the benefits could keep systems in operation far longer than they should or want to, due to the high amounts already invested or the difficulties involved in undoing a rollout. Some may see predictive AI as ethically unacceptable for decision-making, and instead work on interventions to minimize the predicted negative outcomes.
Implications
- Governments and companies that use predictive optimization without being transparent about the AI’s decision-making rules could be seen as untrustworthy
- If institutions use AI for predictive optimization while the burden of proof to contest inaccurate predictions is put on affected individuals, already vulnerable populations may face worsened outcomes. This could create new bureaucratic bottlenecks and tie up courts with algorithmic harms litigation, including cases related to human rights or Charter violations
- Attempts to sacrifice individual rights for collective gains may benefit privileged populations at the expense of the vulnerable, creating greater socio-economic divisions
- The uptake of predictive optimization models could create initial cost savings that quickly give way to new costs: to fight litigation from inaccurate predictions; to recontract providers to retrain and retune models; and to create new pathways for complaints and compensation for damages
- If AI decision-making pre-emptively punishes people based on biased assumptions, it could decrease the individual agency of vulnerable populations and place new obstacles in their life courses
Insight 6: AI could become “lighter” and run on commonly held devices
Rather than a few large AI models running on cloud-based supercomputers, future AI models could be diverse and customized, some of them running on small, local devices such as smartphones. This could make regulation and control more complicated and multiply cybersecurity risks.
Today
Improvements in AI training and compression techniques are allowing smaller, less resource-intensive AI models to become more capable. The size of an AI model is often used as a shorthand for its power, capability, and quality. While the largest models are often the most powerful and capable, AI developers are releasing smaller, compressed versions derived from larger models. These smaller models retain most of the performance of their larger counterparts while demanding less energy and running on less powerful hardware.Footnote 75 This has led smaller, newer models to outperform older and larger models. For example, Phi-3, which was released in early 2024 and has only 3.8 billion parameters, has comparable performance to GPT-3.5, which was released in late 2022 with 175 billion parameters.Footnote 76 Companies including MetaFootnote 77 and MistralFootnote 78 have released open-source AI models that rival ChatGPT’s performance but can run on a laptop. Researchers in the field of TinyML are developing AI that is smaller and can run on less powerful devices to enable the “smart” Internet of Things (IoT). For example, the Raspberry Pi, a credit card-sized computer popular with programming and computer engineering enthusiasts, can now run a suite of AI models including facial recognition.Footnote 79
Box #2:
Model size
The size of an AI model is determined by how many parameters it has.
Parameters are variables in an AI system whose values are adjusted during training. Smaller models can have parameters numbering in the millions or fewer, while larger models can have more than 400 billion.
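As a rough illustration of why compressed models can run on commonly held devices, the back-of-the-envelope sketch below estimates memory footprints for models with parameter counts similar to those cited above, at different numeric precisions:

```python
# A rough, back-of-the-envelope calculation. Real memory use also depends on
# activations, context length, and runtime overhead.

def approx_size_gb(parameters, bits_per_parameter):
    return parameters * bits_per_parameter / 8 / 1e9  # bytes converted to gigabytes

models = {"3.8-billion-parameter model": 3.8e9, "175-billion-parameter model": 175e9}

for name, params in models.items():
    for bits in (16, 4):  # 16-bit weights vs. aggressive 4-bit quantization
        print(f"{name} at {bits}-bit: ~{approx_size_gb(params, bits):.1f} GB")
```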
Futures
We may see thousands of different AI models capable of running locally on every type of digital device, from smartphones to tiny computers.Footnote 80 These models could be developed by amateurs, startups, or criminals. They could be based on open-source models and customized for different purposes through training on widely accessible datasets. For example, Venice AI is a web-based AI service, built from a handful of open-source AI models, that allows users to generate text, code, or images with few or no guardrails and is sold as ‘private and permissionless.’Footnote 81 As AI models of different sizes become more widely deployed, this may give rise to an ecosystem of AI models with various degrees of interoperability. Small models could interact with large, cloud-based, publicly accessible models, leveraging their power to perform tasks or learn (see Figure 1). Such small, localized models may lack safety measures and be deployed broadly without the knowledge of any authority.
Figure 1: AI Ecosystem, an example of how AI models could work together (text version)
Figure 1 follows 3 AI models of various sizes working together along a supply chain. Each AI model has its own section. The first section depicts an AI running on a smartphone. Here a speech bubble says: ‘When will my shipment arrive?’ as someone asks the question to their AI running on a phone. The medium-size AI model interprets voice commands and uses phone data to identify the specific shipment. The second section depicts an AI running in the cloud, connecting different transportation vehicles. The large-size AI model processes requests and studies the supply chain to foresee delays and estimate delivery times. It also seeks local data for analysis. The final section depicts an AI running on a surveillance camera in a warehouse. The small-size AI model uses object recognition to find free shelf space and provides local information to the cloud AI.
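A minimal sketch of the delegation pattern shown in Figure 1, with entirely hypothetical function names and values: a small on-device model answers what it can and escalates to a larger cloud-hosted model only when it is unsure:

```python
# All function names, confidence values, and replies here are hypothetical.

def on_device_model(prompt: str) -> tuple[str, float]:
    """Small local model: returns a reply and a confidence score."""
    if "shipment" in prompt.lower():
        return ("I can see a shipment in your orders, but not its status.", 0.4)
    return ("Done.", 0.9)

def cloud_model(prompt: str) -> str:
    """Stand-in for a large cloud-hosted model with supply-chain data."""
    return "Estimated delivery: Thursday, based on current warehouse and traffic data."

def answer(prompt: str, confidence_threshold: float = 0.7) -> str:
    reply, confidence = on_device_model(prompt)
    if confidence < confidence_threshold:
        return cloud_model(prompt)  # escalate only when the small model is unsure
    return reply

print(answer("When will my shipment arrive?"))
```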
Implications
- Regulations focused only on large AI models may not be effectiveFootnote 82
- Open-source AI could allow the circulation of models which are problematic, whether because they incorporate bias, lack safety measures, or facilitate illegal activitiesFootnote 83
- It could be hard to track bad actors training or running small but powerful AI models
- By analysing data locally, on-device AI models could help individuals protect their data and privacy
- Small businesses could customize their own AI tools to better meet their needsFootnote 84
- Compatibility between AI-enabled devices could provide users with more options but also create cybersecurity vulnerabilitiesFootnote 85
Insight 7: AI-driven smart environments everywhere
Many products could be sold with AI as a default, creating “smart” environments that can learn and evolve to adapt to the needs of owners and users. It may be difficult for people to understand the capabilities of smart environments, or to opt out of them.
Today
Autonomous devices and robots are increasingly present in our everyday lives. For example, restaurants are using robots to deliver meals.Footnote 86 Robot cleaners are commonly being used in commercial spaces.Footnote 87 In the agriculture sector, more autonomous and semi-autonomous machinery is being used to cultivate crops. In homes, AI is being added to everyday devices. Figure 2 shows further examples. Such devices could continue to gain new features as more capable AI models are released.Footnote 88
Figure 2: Examples of products incorporating AI features (text version)
Figure 2 is a table of different AI-enabled products, their descriptions, and the AI capabilities within each product. There are 3 product categories, each with 2 example products: household devices, including a smart pillow and the Amazon Ring video doorbell; wearables, including smart glasses and mixed reality glasses; and commercial devices, including Spot the robot dog and a self-driving truck. Each of the 6 products is briefly described. For example, the smart pillow by DeRucci, released in 2024, monitors and intervenes to adjust the position of the head and reduce snoring and the risk of sleep apnea. The table tracks which of 6 AI capabilities are included in each product: data analysis, computer vision, robotics, language processing, media generation, and navigation.
Researchers and industry may need more data about the physical world to train more advanced AI. AI that collects real-time information on its physical surroundings is referred to as embodied AI (see Figure 3).Footnote 89 AI can be embodied in anything from smart phones to household devices or human-like robots. When connected to sensors and given mobility, AI can interact with people and physical spaces, for example by opening doors or summoning elevators.Footnote 90 As giving AI a body can allow it to learn from interacting with the world much like humans do, it may represent a path toward developing more advanced AI.Footnote 91
Figure 3: From Artificial Intelligence to Ubiquitous Computing (text version)
Figure 3 explores how AI in objects is gaining various capabilities, from the mundane to the powerful, with some devices operating independently and others collaborating to create a seamless experience. These objects include smartphones, surveillance devices and appliances, as well as connected robots and self-driving cars. The devices in the image are connected via the Internet of Things across the urban landscape. The figure provides the following definitions, each contributing to a final concept, Ubiquitous Computing: 1) Edge Devices are located near data sources and end-users; they can process data without sending it to the cloud and can also serve as a link to a broader network. 2) The Internet of Things is a network of devices that communicate, coordinate, collect, and exchange data. 3) TinyML, or Tiny Machine Learning, uses lighter AI models that run on small, low-power devices. TinyML empowers IoT devices with artificial intelligence. 4) Active Perception is the ability of an AI agent to actively control how it perceives its surroundings, like zooming, changing a camera position, or walking over to get a better view. 5) Embodied AI refers to systems with a physical form that allows them to learn from and adapt to their surroundings. These include objects, robots, vehicles, and more. 6) Ubiquitous Computing integrates connected microprocessors into everyday objects so that computing is made to appear anytime and everywhere.
It is becoming more difficult to understand the capabilities of devices in our surroundings. Some devices are referred to as ‘robots’ despite having no AI capabilities.Footnote 92 Other devices can have multiple AI functions. For example, tourists can rent AI-powered e-bikes that can give a guided city tour.Footnote 93 Bird watchers can buy AI-powered binoculars that identify wildlife.Footnote 94
Older devices can often be retrofitted with new capabilities in ways that are not obvious from the outside. For example, an AI kit can make an existing tractor fully autonomous.Footnote 95 Security cameras that have been in operation for a long time can be connected to facial recognition software.Footnote 96
Futures
In the future, more AI-powered devices may be found in more settings, from workplaces to leisure spaces and dwellings. It may become impossible to avoid interacting with these devices. The number of IoT (Internet of Things) devices could reach 75 billion by 2025, more than doubling in 4 yearsFootnote 97 and the global AI software market could grow roughly fivefoldFootnote 98 from 2022 to 2027.
Device manufacturers could be incentivized to add AI capabilities to more devices either as a selling feature or to collect data. Data can be useful not only to generate new revenue streams but also to train new models. This could be especially relevant if embodied AI proves useful in building next-generation frontier AI models, or if companies reach the limits of existing quality training data.Footnote 99 For example, by deploying a fleet of smart cars a company could use data on the city landscape, traffic, and the behaviour of pedestrians to train even more powerful AI models.
Everyday devices could end up having more powerful AI capabilities than needed. It may be easier to equip a device with an off-the-shelf, general-purpose AI, such as ChatGPT or Copilot, than to customize a model with more targeted functionality. Smart devices could become the default in new homes, ready to adapt to new owners or tenants. Devices could be sold with certain features locked behind a pay-for-access model, as has been seen with the Amazon Ring,Footnote 100 TeslaFootnote 101 and MercedesFootnote 102 cars.
General-purpose AI could become standard in a way that increasingly blurs the lines between consumer product categories. For example, smart watches and fitness trackers have raised concerns that they might occupy a regulatory grey zone between medical devices and low-stakes consumer products.Footnote 103 The Aqara home sensor can be used for everything from controlling lights to providing security surveillance or detecting falls.Footnote 104 The appearance of such objects may not clearly signal their capabilities. Human-like robots may have eyes that can see through walls, for example – or the same sensors could be entirely hidden.
Implications
- People could require new skills to navigate AI-powered spaces. Manufacturers may need to use new kinds of labelling or instructions to disclose the capabilities of their AI devices in a way that allows consumers to make informed decisions
- People unwilling or unable to engage with AI-powered spaces may find themselves unable to access certain services
- Insurance companies could encourage some kinds of AI monitoring or demand it as a condition of coverage.Footnote 105 For example, facial recognition to confirm the identity of a driver to reduce auto theft
- The rights and interests of individuals could come into conflict in new ways. For example, wearing smart glasses in public spaces or sending a robot to pick up groceries could challenge privacy rights. Trust is needed to ensure that the devices are not collecting the likeness of people without consent.Footnote 106 Property owners could install AI-powered devices to protect their investment or help with maintenance. Tenants may find themselves in a smart home with services they do not want or settings they cannot change
- Smart environments could change advertising strategies. It could become routine for AI-enabled devices to nudge users with personalized advertisements in real-time. For example, smart cars may reroute drivers towards certain businesses and encourage them to stop to make a purchase
Insight 8: AI further erodes privacy
As AI-enabled devices collect more data online and in real life, efforts to turn data into new revenue streams could butt up against more privacy-conscious attitudes and devices. A new status quo may emerge that looks very different from the opaque way that users today exchange their data for free services.
Today
Advances in AI are exacerbating privacy issues with technology. Most Canadians have become accustomed to accessing free online services – such as social media sites, generative AI platforms, or mobile apps. In many cases, they unknowingly give consent to companies to collect their data, sell it to third parties, and use AI to make sense of it and draw inferences about them. For example, Facebook uses AI to make inferences about users’ suicide risk based on their social media posts.Footnote 107 Improvements in AI are allowing firms to analyze a greater variety and amount of data and transform it into revenue streams in new ways.
Not only online environments but also physical spaces are becoming less private. As noted in Insight 7, everyday household objects are outfitted with sensors to collect data – from toilets to toothbrushes to toys. Virtual Reality (VR) and video games can collect data about users’ behaviour in the home and use AI to make inferences about emotions and personality traits.Footnote 108 Outside the home, devices such as smart glassesFootnote 109 and AI pinsFootnote 110 are raising new questions about privacy in public. Fragments of human DNA, known as environmental DNA, collected from public spaces for purposes such as disease monitoring, can potentially be used to track individuals, illegally harvest genomes, and engage in hidden forms of genetic surveillance and analysis.Footnote 111, Footnote 112
Smart cars raise particular privacy concerns. In 2023, the Mozilla Foundation investigated 25 car brands and found that every one collected personal data that is not necessary to operate the vehicle.Footnote 113 Usually taken from mobile devices connected to cars via apps, this data can include a person’s annual income, immigration status, race, genetic information, sexual activity, photos, calendar, and to-do list. Of the 25 brands, 22 use this data to make inferences – for example, from location and phone contacts – and 21 share or sell data. Thirteen collect information about the weather, road surface conditions, traffic signs, and “other surroundings”, which can include passersby.Footnote 114 An estimated 95% of new vehicles will be connected vehicles by 2030.Footnote 115
Futures
As the Internet of Things becomes the “AI of Things”, data may become even more valuable, further incentivizing ever more data extraction. It may become possible to draw more sophisticated inferences to predict human behaviour, movement, or identify individuals, as discussed in Insight 5.
However, international regulatory pushback could reshape the privacy landscape. More jurisdictions are passing and enforcing new data privacy lawsFootnote 116, such as the American Privacy Rights ActFootnote 117 in the US. This could change some, if not many, aspects of “surveillance capitalism”Footnote 118 by giving users more control over their data. Future legal reforms could reframe inferences as personal informationFootnote 119, making it more difficult to sell them to third parties.
Emerging technology could also shift the privacy balance. Edge computing, which refers to networks or devices that are physically near to the user, could enhance data privacy and security.Footnote 120 When user data is stored and processed on a user-owned device, it may be more difficult for companies to collect and sell it.Footnote 121 However, edge computing can also introduce new risks, such as enabling face recognition on local devices and potentially easier access for malicious actors.Footnote 122
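A minimal sketch of this edge-computing idea, with invented data and field names: raw readings are processed on the device and only a coarse summary is shared upstream:

```python
# Device, readings, and the summary format are all invented for illustration.

raw_heart_rate_samples = [62, 64, 61, 90, 88, 63, 65]  # stays on the device

def local_summary(samples):
    """Compute a coarse summary locally instead of uploading raw readings."""
    return {
        "average": round(sum(samples) / len(samples), 1),
        "elevated_readings": sum(1 for s in samples if s > 85),
    }

payload_sent_to_cloud = local_summary(raw_heart_rate_samples)
print(payload_sent_to_cloud)  # only this summary leaves the device
```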
Implications
- Distinctions such as public versus private and online versus offline could become increasingly blurred. Homes and other spaces could be experienced as more or less private depending on the use of devices. Visitors to homes and passengers in cars may demand new consent protocols to protect their privacy
- Data shared with third parties could lead to sensitive information being shared with insurance companiesFootnote 123
- Schools and childcare providers could use privacy protections as a competitive advantage to attract families
- Surveillance could change the practices of police and criminals
- Some forms of crime could move further underground and become more organized to evade detection
- New technological capabilities could create new opportunities for hacking, fraud, and stalking
- Traffic police may be less needed as monitoring of drivers by governments and insurance companies enables tickets to be issued automatically
- ActivistsFootnote 124 and journalistsFootnote 125 could increasingly use ubiquitous computing to “return the gaze” by collecting information about powerful organizations or individuals, a practice known as “sousveillance” or “equiveillance.” This could include hacking sensitive information about the personal lives of political representatives or other public figuresFootnote 126
- Data protection regimes could become more complex and less aligned globally
- Jurisdictions could struggle to balance privacy with researchers’ need for representative datasets in areas such as medicineFootnote 127
- Jurisdictions with weaker privacy laws could become increasingly “risky” destinations for work or travel
- Privacy-protecting devices could tilt the balance of power toward users
Insight 9: Data collected about children could reshape their lives in the present and future
Jurisdictions are expressing concerns about children’s privacy as AI technologies become more ubiquitous. Pervasive data collection in childhood could offer new opportunities for accessibility and education, but also worsen existing vulnerabilities, erode privacy, and reshape adult lives in the future.
Today
Young people are a particularly vulnerable group with regard to data privacy.Footnote 128 Their sense of self and ability to make decisions are still developing. Healthy child development involves the ability to experiment and make mistakes without severe and lasting consequences. More jurisdictions are exploring how to protect children’s data and privacy rights and address challenges around meaningful consent.Footnote 129
Cases are growing of malicious actors using children’s data in ways that impact their mental health and wellbeing. Critics point to how major tech companies capture attention and revenue through addictive user experiencesFootnote 130 and dark patterns.Footnote 131 Dark patterns are web or app design choices that can be used to influence users’ decision making – for example, by intentionally making it difficult to cancel a service.Footnote 132 Social media algorithms exacerbate issues related to negative body image.Footnote 133 Generative AI has been implicated in the circulationFootnote 134 and developmentFootnote 135 of child sexual abuse materials, both real and AI-generated, by adults and children.
Parents have extraordinary scope to both gain and offer visibility into their children’s private lives. For example, keylogger apps can let parents see not only messages a child sent, but also messages they typed but decided not to send.Footnote 136 Parents can also share their children’s private information with others. Some manage revenue-generating child influencer accounts that routinely share personal information and images of their children. These accounts are sometimes openly followed by pedophiles, who benefit from platform policies that reward engagement.Footnote 137
Unwanted “data” shadows could follow children into adulthood. Whether data is shared by parents or collected by platforms or devices,Footnote 138 it may create a “data shadow” that follows children throughout their lives.Footnote 139 This data shadow can begin before birth – for example, when parents use DNA testing services to learn about their children’s genetic susceptibility to diseases.Footnote 140 As these vast troves of data can be stored indefinitely, future AI systems could draw on them to make new inferences about individuals as they grow into adults.
Schools are collecting ever more information about students, using third-party software and AI analysis tools. Since the pandemic, use of student management apps has grown exponentially in daycares, elementary schools, and high schools in Canada. For example, an estimated 70% of elementary schools use Class Dojo. Its privacy policy states it may share data with third-party service providers including Facebook and Google.Footnote 141
Data breaches have affected children and youth in Canada and beyond. In 2024, school photos of 160 students in Alberta were stolen when hackers accessed the cloud storage provider of a school yearbook company.Footnote 142 Ransomware gangs have targeted US public schools, releasing sensitive student data on mental health, sexual assaults, and discrimination complaints.Footnote 143 In 2023, ransomware attacks affected institutions such as Toronto’s Sick Kids Hospital;Footnote 144 Family and Children’s Services of Lanark, Leeds and Grenville;Footnote 145 and Ontario’s Better Outcomes Registry & Network, in which 3.4 million health records were breached.Footnote 146
Corporations and NGOs hold a large amount of sensitive data about youth, which can be vulnerable to breach. Private parental control apps have been breached, exposing monitored children’s data.Footnote 147 In 2023, TikTokFootnote 148, MicrosoftFootnote 149, and AmazonFootnote 150 were fined for children’s privacy violations in various jurisdictions. As a non-governmental organization, Kids Help Phone – which holds the largest repository of youth mental health data in CanadaFootnote 151 – reports conducting a privacy impact assessmentFootnote 152 and aggregating and anonymizing its data.Footnote 153
Futures
The opaque sale, circulation, and analysis of children’s data will become more common, begin much earlier in life, and be put to unforeseen uses in the future. The number of data-collecting devices children interact with – at home, school, and beyond – will increase (see Insight 7 and Insight 8). Some of these devices could be more vulnerable to breaches of sensitive information.Footnote 154
AI-powered monitoring technologies could become more important, but could also be subverted. Parents could look to AI-powered monitoring technologies to help control their children’s online activities and gatekeep increasingly complex informational and media environments.Footnote 155 However, young people could also develop increasingly sophisticated means of evading parental control.
Children and youth could inhabit more highly personalized media environments. Entertainment content and advertising could increasingly be generated or curated by personalized AI companions. Sub- and fan cultures could become increasingly personalized and politicized. Feelings of social isolation could become more prevalent, as well as reduced social cohesion. Some young people may become disillusioned with invasive AI-powered technologies and opt to spend more time offline. However, given the pervasiveness of AI this might not be an option in the future.
The market for youth data may become more competitive as concerns around youth data privacy increase. This could lead tech companies to develop more insidious ways to extract and trade youth data. The age cut-off for being considered a “child” could differ across contexts. Data might have to be released when a child reaches the age of majority.Footnote 156 Age verification technologies,Footnote 157 like those currently used in some US states for pornography websites,Footnote 158 could be more widely used to protect youth from predatory adults and adult-only spaces.
Despite the many concerns they raise, new AI-enabled technologies could also collect data in ways that support accessibility.Footnote 159 They could be used to develop individualized learning tools that help students progress at their own pace. They could also enhance the quality of pediatric health care by assisting in diagnosis, patient monitoring, and precision medicine.Footnote 160
Implications
- Today’s children could face more frequent and devastating data breaches throughout their lives
- These breaches could result in forms of identity theft that lead to financial loss or the release of sensitive personal information
- Re-identification of anonymized personal data could become easier as data breaches become more routine and technologies advance – data that seems private today may not be tomorrow
- Lax restrictions could lead to data being used to make AI-mediated inferences about youth that affect their relationships and access to jobs, credit, or insurance in both childhood and adulthood
- Increased use of parental control technologies could lead to undue surveillance and loss of privacy and autonomy for children
- AI could make it more challenging for parents to identify problematic or harmful content, or easier for youth to conceal their engagement with it
- If awareness of issues related to children’s data privacy increases, more developers could be required to launch child-specific apps and platforms that are held to higher privacy standardsFootnote 161 or consider issues such as mental health and addiction
Insight 10: AI could reshape our ways of relating to others
AI tools could mediate more social interactions—in public or professional settings, or in private with friends, family or romantic partners. These tools could be used to flag suspicious or harmful behaviour, and help avoid social blunders—but they could also assist in manipulating and preying on others.
Today
AI already plays a large role in mediating our relationships with strangers, friends, and family in online spaces. Recommender algorithms act as a social filter, determining which content a user sees, from which people, and in which order.Footnote 162 These algorithms can encourage users to engage with influencers and content creators who provide a high level of apparent access to their lives.Footnote 163 For some users, such “intimacies” can develop into parasocial relationships, where individuals feel emotionally connected or attached to total strangers.Footnote 164
AI devices mediate an increasing number of professional and personal interactions. For example, doctors are already using AI to help diagnose or monitor patients.Footnote 165 People are using AI to help write profilesFootnote 166 or messagesFootnote 167 on dating apps. AI can even analyze and flag the tone that individuals use in messages to one another, for example in apps used to mediate communication in difficult coparenting arrangements.Footnote 168
Wearable devices which introduce AI into new aspects of our lives can blur the lines between real and digital spaces. These devices can use virtual reality (VR), augmented reality (AR), and a combination of AR and VR known as mixed reality (MR).Footnote 169 Research suggests that immersive environments can be more emotionally impactful than traditional online spaces.Footnote 170 Collective experiences in VR can provide a new type of enriching social gathering for geographically distant groups. Harms in VR, such as assault, can have psychologically similar effects as the offline equivalent.Footnote 171
Individuals can develop emotional connections with AI companions. Millions are turning to AI companions to alleviate loneliness, access therapy, get advice, and for romantic connection.Footnote 172, Footnote 173, Footnote 174 When an AI model produces text, speech and images that are indistinguishable from those made by humans, it is easy to anthropomorphise the model by attributing motive and intent to its responses.Footnote 175
Users of popular platforms live in increasingly personalized and private worlds as AI curates the content they see. Social media algorithms often offer users content that suggests they “know” them better than even close friends might. Over time, however, consuming AI-curated content – as opposed to content shared by friends – may warp representations of the self.Footnote 176 As they scroll through content alone, users can enter what researchers call a trance-like state.Footnote 177
AI is changing how parents relate to and engage with their children. AI tools can allow parents an unprecedented level of visibility and control over the apps their children use, the content they consume, and the messages they write, as discussed in Insight 7. Smartphones or trackers can give parents real-time, 24/7 information about their children’s whereabouts.Footnote 178 These tools can erode children’s autonomy, privacy, and independence as they grow and mature. Similar tools used in romantic relationships can facilitate abusive behaviour and stalking.Footnote 179
Futures
In the future, AI could play a larger role in mediating professional interactions, limiting scope for forming new friendships. AI could improve the efficiency of communication between a company’s customers and employees and change workflows between individuals and teams. Workplace culture could become more impersonal, with fewer opportunities for socialising.
AI tools could also mediate more personal social interactions, even in the home among family members. Such tools could include AI agents, platform algorithms, or wearable devices, such as AR glasses. More information about and visibility into the inner lives of people, whether physiological or psychological, could become normalized. This could improve communication in relationships. It could also shift relationship dynamics in new ways, leading to lower trust and autonomy and more mental health issues.Footnote 180
Individuals could increasingly turn to AI for companionship or answers to personal problems. AI could help socially isolated individuals to connect with others.Footnote 181 AI therapists could provide tailored mental health care for populations that lack access: apps such as Black Female Therapist, for example, use AI trained to highlight the importance of systemic racism.Footnote 182 On the other hand, AI companions could further isolate individuals if they replace relationships with humans. Individuals who come to prefer synthetic relationships to real ones could end up disconnected from community, though not necessarily lonely.
Some individuals could seek human connection by sharing and comparing their media feeds. As media experiences become increasingly personalized, there could be increased interest in understanding the distinct worlds that people inhabit. This could include “feed analysis” in therapeutic settings, sharing feeds in the presence of friends, or even public feed-sharing events.Footnote 183
In the future, it may become impossible to distinguish between humans and hyper-realistic AI agents when interacting in online spaces. AI technology could be used to create digital replicas of deceased or estranged loved ones, or celebrities and influencers. AI agents could be perceived as exhibiting human emotions such as empathy and love. Individuals could have what feels like an intimate relationship with a person but is in fact a parasocial interaction with a chatbot. This could entirely replace human social connections for some vulnerable or lonely individuals.
Implications
- AI could help reduce inequalities for those who face language barriers or difficulties navigating complex social interactions
- Relationships with AI companions could feel indistinguishable from human connections, or even easier or better, for some people
- AI companions or therapists could have more influence on an individual’s behaviours than their family or close friends
- Social skills could atrophy. Skills such as listening and empathy could be eroded if users lean too heavily on AI assistance for social interactions or customize AI agents to reflect their needs and preferences
- Marriage rates could decline and loneliness could increase
- The experience of selfhood could change. Earlier and more frequent self-monitoring, and the application of predictive analytics to biological and mental processes, could lead to new ways of understanding and optimizing the self
- New forms of abuse and virtual crime could emerge, potentially challenging definitions of assault and harassment
- Predators could more easily gain the trust of children and adults, leading to greater risk of fraud, harassment, or other abuse
- Using AI tools to communicate with people could shift language over time, potentially towards greater homogeneity and a more sterile style
- AI tools could flag suspicious behaviour, report abuse as it is happening, and help individuals navigate toxic or dangerous relationships
- Bullying and harassment could become more omnipresent and damaging to mental health if it occurs in realistic immersive environments or with the use of generative AI
Insight 11: AI could delay the green transition
AI uptake is driving up demand for energy and water globally. This could potentially delay the green transition, though Canada could benefit from an increased demand for greener data centres.
Today
AI has climate impacts, though accurately measuring its carbon footprint is a challenge. Generative AI is a particularly energy- and water-intensive technology.Footnote 184 Training a new model consumes energy, as does the use of a model once trained. Google’s greenhouse gas emissions were 48% higher in 2023 than in 2019, due largely to the energy required by AI.Footnote 185 However, as AI companies are not all fully transparent about their energy use or the environmental impacts of developing and disposing of hardware, it is hard to be sure about the carbon footprint of AI.Footnote 186 Smaller models that run on devices, rather than in the cloud, could have fewer climate impacts (see Insight 6).
Data centres are already straining energy and water supplies. Machine learning models like ChatGPT process user queries in data centres.Footnote 187Footnote 188 Even in 2020 – before the take-off in generative AI – data centres and transmission networks produced 0.6% of total greenhouse gas emissions.Footnote 189 Data centres consume 10 to 50 times more energy per floor space than a typical commercial office building.Footnote 190 The largest in the world can use as much energy as 80,000 US households.Footnote 191 Polluting diesel generators provide backup power to most data centres during power outages.Footnote 192 Data centres also use water for evaporative cooling, and in warmer climates can use millions of gallons per day. With the computational power used by AI doubling roughly every 100 days,Footnote 193 demands on water and energy by data centres are increasing. Data centres in water-stressed regions of the U.S. have come under fire from local residents.Footnote 194 In some areas, plans to close coal-fired power plants have been delayed due to growing electricity demand from data centres.Footnote 195
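To illustrate what the doubling rate cited above would imply if it held, the following back-of-the-envelope calculation (an illustrative sketch only, not a projection from the report’s sources) compounds the growth over one and two years:

```python
# Illustrative arithmetic only: compounding of AI compute demand
# if it doubled every 100 days (the growth rate cited above).
DOUBLING_PERIOD_DAYS = 100

def growth_factor(days: float) -> float:
    """Return the multiple of today's compute demand after `days` days."""
    return 2 ** (days / DOUBLING_PERIOD_DAYS)

if __name__ == "__main__":
    print(f"After 1 year:  ~{growth_factor(365):.1f}x today's demand")   # roughly 12.6x
    print(f"After 2 years: ~{growth_factor(730):.1f}x today's demand")   # roughly 158x
```

Under that assumption, compute demand would grow more than tenfold per year, which is why even large efficiency gains per query may not offset total resource use.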
Canada is an attractive destination for data centres. As AI use has grown, tech companies have sought to locate data centres in countries with cooler climates and clean and cheap power.Footnote 196 With its renewable hydroelectric power, Canada has become an attractive destination for tech companies looking to advertise a reduced carbon footprint.
Futures
Increased AI uptake could hinder progress toward global climate commitments. As AI is integrated into more devices and processes, its energy and water use could rise steeply. For example, if every online search used ChatGPT, electricity demand would increase by an amount equivalent to adding 1.5 million residents to the European Union.Footnote 197 By 2026, AI could be using more power than the country of Iceland did in 2021.Footnote 198 The International Energy Agency has estimated that data centres’ electricity consumption could double between 2024 and 2026.Footnote 199 The market for GPUs (graphics processing units) used in data centres is projected to grow tenfold from 2022 to 2032.Footnote 200 The IT sector seems poised to increase its carbon footprint in the coming decade, just as other industries are moving in the opposite direction.Footnote 201
Innovations in AI hardware and software could reduce energy use. Nvidia’s upcoming Blackwell GPUs for data centres, for example, promise to be much more efficient – offering up to 30 times the performance while consuming 1/25th of the energy of current chips.Footnote 202 There could also be a shift towards smaller, “lighter”, less energy-intensive computational modelsFootnote 203 (See Insight 6).
Canada could face challenges meeting AI’s demand for cheap, clean hydropower. Canadian utilities might find it challenging to meet the rapid growth in energy demand due to AI. Hydro-Québec anticipates that, by 2032, data centres will account for an increase in demand equal to about 2% of the total electricity produced in Quebec in 2022.Footnote 204 Energy shortfalls are projected in Quebec as early as 2027 and could be made worse by drought and other climate events.Footnote 205 Data centre providers could increasingly be asked to generate their own power and build their own energy infrastructure.Footnote 206
New ways to mitigate the energy demands and impacts of AI could scale up. Some AI companies, including Amazon, Microsoft, and Google, have announced plans to use nuclear energy to reduce their emissions.Footnote 207 In September 2024, Microsoft signed a 20-year agreement to purchase the entire electric generating capacity of Pennsylvania’s Three Mile Island nuclear plant, which has been closed since 2019 and which its owner plans to reopen.Footnote 208 Google is expected to have small modular nuclear reactors operational by 2030.Footnote 209 Waste heat from data centres could increasingly be captured and put to other uses, such as warming adjacent greenhouses.Footnote 210 In the consumer domain, the AI Energy Star project, inspired by similar ratings for home appliances, aims to monitor AI carbon emissions and give the public the information needed to choose the least energy-intensive AI model for a given task.Footnote 211 Even so, if AI infrastructure grows exponentially, it is unclear whether these efforts would be sufficient to mitigate the environmental costs.
Implications
- Use of energy and water by the information technology sector could increase more than is currently being forecast
- Even if AI becomes more energy efficient, its total resource consumption could increase if this lowers costs and leads to AI being embedded in many more devices
- Economic pressure to expand data centres may compete with efforts to transition to green energy
- Public utilities could face increased challenges meeting the growing demand for clean energy. Considerations about which types of projects are offered access to clean energy may shift
- If AI companies increasingly turn to nuclear or other forms of energy to privately fuel the AI boom, this could create new pressures for organizations responsible for regulatory oversightFootnote 212
- Data centres could become increasingly controversial as they put pressure on land, water, and power supplies
- Inequities could emerge as those most impacted by the physical infrastructure of AI may not be the ones who most benefit
- Calls for stricter environmental regulation could grow if an increase in data centres causes more emissions and e-wasteFootnote 213
- Demand for strategic metals and minerals could grow, as data centres compete with green tech such as solar panels and electric batteries
- Platforms that automatically select AI models for a given task based on their performance and energy intensity could become common
- Tech companies may move towards deploying AI locally in their products to reduce use of data centres
Insight 12: AI could become more reliable and transparent
In the future, AI could have improved reasoning skills, allowing it to produce better analyses, make fewer factual errors, and be more transparent. However, these improvements may not be enough to overcome problems related to bad data.
Today
AI can generate high-quality text and images but can make logical or factual errors. Neural networksFootnote 214 are good at recognising patterns without necessarily understanding content or context. For example, a large language model (LLM) uses probability to generate output word by word, based on how often words appear next to other words.Footnote 215 It does not understand what the words mean, so its output can contain hallucinations.Footnote 216 Some developers are seeking to improve factuality and accuracy by giving LLMs access to external knowledge bases through a process called retrieval augmented generation (RAG).Footnote 217 However, RAG relies on the quality of the source data. For instance, Google Search’s AI Overview uses RAG to generate summaries of search queries, but its inability to distinguish authoritative sources from jokes on social media has led it to make recommendations such as putting glue on pizza and looking directly at the sun.Footnote 218
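To make the RAG pattern described above concrete, the sketch below shows the basic flow: retrieve the most relevant documents from a knowledge base, then hand them to a language model along with the question. The keyword-overlap retriever, the sample documents, and the generate_answer stub are simplified placeholders invented for illustration; production systems typically use vector embeddings and a hosted LLM.

```python
# Minimal sketch of retrieval augmented generation (RAG).
# The retriever and generator below are toy placeholders for illustration.

KNOWLEDGE_BASE = [
    "Policy Horizons Canada publishes foresight reports on emerging issues.",
    "Retrieval augmented generation grounds model output in reference documents.",
    "Data centres consume large amounts of electricity and water.",
]

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def generate_answer(question: str, context: list[str]) -> str:
    """Placeholder for an LLM call: a real system would send this prompt to a model."""
    prompt = "Answer using only this context:\n" + "\n".join(context)
    prompt += f"\nQuestion: {question}"
    return prompt  # a real implementation would return the model's completion

question = "What is retrieval augmented generation?"
print(generate_answer(question, retrieve(question, KNOWLEDGE_BASE)))
```

The value of the pattern depends entirely on the quality of the knowledge base being retrieved from, which is the limitation the paragraph above describes.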
Neural networks lack transparency and can be difficult to understand and control. The calculations they routinely perform are so intricate that even human experts struggle to comprehend how they transform inputs into outputs.Footnote 219 In other words, neural networks lack interpretability. This makes it difficult to ensure that a generative AI system cannot create harmful content, or that a decision-making AI system is not considering prohibited factors such as race or gender (see Insight 5). Some developers try to overcome this by making their system provide an explanation for how it made a decision. However, it cannot currently be known whether an AI model’s explanation accurately reflects its actual weighing of variables in a decision.Footnote 220
Box #3:
Neural network
A type of AI modeled on the human brain, in which interconnected nodes (neurons) process information in layers to recognize patterns, learn from data, and make decisions. LLMs and image generators are types of neural networks.
Hallucination
When a generative AI system presents false or misleading information as true. This can include false claims, made-up sources, responses to content that was not in the prompt, or images depicting things that are impossible in reality.
Retrieval augmented generation
An LLM referencing an authoritative knowledge base, outside of its training data, before generating a response.
Explainable vs interpretable AI
An “explainable AI” can provide a reasonable-sounding explanation for how it generates outputs. An AI is “interpretable” if a human can understand and explain how it works internally.
Futures
Future AI systems could combine different approaches to become more functional and versatile. Hybrid AI systems use more than one type of AI: for example, neuro-symbolic AI combines the pattern recognition of neural networks with the human-interpretable rules of symbolic AI.Footnote 221 In the future, more AI systems could be composed of multiple systems with strengths that make up for each other’s weaknesses. These hybrid systems could be more capable, accurate and high performing. They could lead to the development of entirely new types of AI.
Box #4:
Symbolic AI
An approach that attempts to mimic human reasoning, in which knowledge is represented as symbols and manipulated in accordance with the rules of a formal logic system, such as deduction and induction.
AI systems could become more transparent and interpretable, making bias easier to detect and reduce. Better interpretability could make it easier for developers to prevent a decision-making AI system from considering factors it should not, such as race. For example, a bank could show a customer why its AI system denied them credit, enabling the customer to seek recourse if – for example – they think the AI put too much weight on their postal code, which reflects that they live in a diverse part of the city. However, this is unlikely to fully eliminate bias (see Insight 4 and Insight 5), especially as biases may be inherent in the data.
AI could be less likely to hallucinate, although it will only ever be as good as the data, logic, and training it has access to. Hybrid models, with the ability to reason in different ways, could hallucinate less and provide higher quality analysis. For example, they might use RAG to gather information online, then use symbolic reasoning to evaluate whether the information is credible or a joke. Improved reasoning skills will not solve the underlying problem of a polluted information ecosystem, however (see Insight 1). Future AI systems may approach this problem by collecting more data directly via sensors. They might also be smart enough to recognise when they do not have adequate information to provide an accurate answer, as illustrated in the sketch below.
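A minimal sketch of this kind of hybrid behaviour might look like the following: a (stubbed) statistical confidence score is combined with simple symbolic rules about source credibility, and the system abstains when either check fails. The thresholds, the allow-list, and the function names are invented for illustration and do not reflect any specific system.

```python
# Hypothetical sketch: combining a statistical confidence score with
# symbolic credibility rules, and abstaining when evidence is weak.

TRUSTED_DOMAINS = {"gc.ca", "who.int", "nature.com"}  # illustrative allow-list only

def source_is_credible(source_domain: str) -> bool:
    """Symbolic rule: accept only sources on the allow-list."""
    return source_domain in TRUSTED_DOMAINS

def answer(claim: str, model_confidence: float, source_domain: str) -> str:
    """Return the claim only when both the statistical and symbolic checks pass."""
    if model_confidence >= 0.8 and source_is_credible(source_domain):
        return claim
    return "Insufficient reliable information to answer."

print(answer("Glue improves pizza.", model_confidence=0.95, source_domain="socialmedia.example"))
print(answer("Data centres use water for cooling.", model_confidence=0.9, source_domain="gc.ca"))
```

The point of the sketch is the abstention path: a system that can decline to answer avoids some hallucinations, but only if its rules and sources are themselves sound.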
Implications
- Improved transparency and a reduction in bias could allow AI to be used in areas where it would be deemed inappropriate today, such as law enforcement or legal proceedings
- AI tools could make it faster and easier to gather high-quality information and perform analysis, improving decision-making processes and research
- AI research aides could disrupt entry-level jobs like research assistants, junior analysts, or junior lawyers
- AI analysis would likely be best when limited to trusted, high-quality sources, like an organization’s internal documentation or a database of academic journals
- Higher quality analysis and reasoning could speed up adoption of AI among currently risk-averse organizations. It could reduce the need for oversight and quality control, but also make it less likely that mistakes will be noticed
- Hallucinations may become less obvious, making them harder to detect
- Even with improved reasoning, false or biased inputs could undermine AI performance and reliability
- AI could still spread misinformation, where the argumentation and reasoning are valid, but the premises are false
Insight 13: AI agents could act as a personal assistant with minimal guidance
In the future, people could have a general-purpose AI agent acting as a personal assistant, capable of performing multi-step tasks for its user, 24/7. Impacts could include improved access to task automation, greater productivity, disruption of advertising-based business models, and unforeseen harms.
Today
Virtual assistants currently have limited functionality. Products like Siri or Alexa can only perform tasks that they are specifically programmed to do in response to carefully worded prompts. Although they are promoted as having a wide array of capabilities, they are mostly used to ask for weather forecasts or information on local businesses.Footnote 222 Their ability to integrate with third-party apps depends on the developers of those apps voluntarily incorporating the assistants’ API (application programming interface), a way for different pieces of software to exchange data.Footnote 223 Even when third-party apps and virtual assistants are designed to work together, the assistants often cannot act fully autonomously. For example, when ChatGPT enabled Instacart to create a shopping list after planning a meal, the ChatGPT user still had to step in to actually buy the items.
AI agents are emerging as the next generation of virtual assistants, capable of independent action. These AI agentsFootnote 224 can learn from and interact with their environment in many different ways. For example, companies like AppleFootnote 225 and OpenAIFootnote 226 have developed AI models that can parse and interact with graphical user interfaces. This bypasses the need for app developers to integrate with APIs, and instead allows the AI model to interact with an app in the same way a human would. For example, the user could ask their device to take them home and order pizza, and an AI agent could open a ride-hailing app to summon a ride home, then open a food delivery app and place an order for pizza, all in a single command without the app developer having explicitly enabled such integrations.
Box #5:
An AI agent is an AI program that can be given a goal by a human in natural language, and then, if needed, give itself subtasks to achieve that goal. The subtasks could involve interacting with the internet or other agents to collect and use data.
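The sketch below illustrates the loop implied by this definition: a goal stated in natural language is decomposed into subtasks, each carried out by calling a tool. The plan function and the tools are hypothetical stand-ins; real agents typically delegate planning to an LLM and call live services such as ride-hailing or delivery APIs.

```python
# Illustrative agent loop: goal -> subtasks -> tool calls.
# plan() and the tools are hypothetical placeholders, not a real assistant API.

def plan(goal: str) -> list[tuple[str, str]]:
    """Stand-in for an LLM planner that decomposes a goal into (tool, argument) steps."""
    if "home and order pizza" in goal:
        return [("ride_hailing", "home"), ("food_delivery", "pizza")]
    return []

def ride_hailing(destination: str) -> str:
    return f"Ride booked to {destination}"

def food_delivery(item: str) -> str:
    return f"Order placed for {item}"

TOOLS = {"ride_hailing": ride_hailing, "food_delivery": food_delivery}

def run_agent(goal: str) -> None:
    for tool_name, argument in plan(goal):
        print(TOOLS[tool_name](argument))   # execute each subtask in order

run_agent("take me home and order pizza")
```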
Futures
AI agents could become more commonplace and capable of acting as a personal assistant. For instance, a user could ask their AI agent to organize a dinner with a friend at a certain restaurant. The user’s AI agent could reach out to the friend’s AI agent, compare calendars to find available times, create a calendar entry, contact the restaurant to make a reservation, then schedule a rideshare to pick up the friends before dinner. Agents could also be given standing orders, such as filling out and returning an attendance form whenever it is sent by their child’s daycare.
AI agents could make automating tasks more accessible. Rather than requiring the use of complex or intimidating specialised software or the ability to write in programming language, a human could simply describe what they want in natural language and let the agent work out the technicalities of implementing the request.
AI agents could make chatbots feel more like people. Instead of being a passive participant in a conversation – only responding to the user, but never initiating or leading the conversation – chatbots powered by AI agents could feel more like a person with desires, preferences and the ability to take autonomous action. For instance, the AI agent could ask the user, unprompted, if they would like to play a video game while chatting. Interacting with a chatbot may not feel different from how one interacts with their friends online. This could blur the lines between AI assistant and companion or even friend (see Insight 10).
AI agents could transform business and the workplace. Agents could be used to automate elements of an employee’s job, like email and calendar management or taking meeting notes. For instance, Google Workspace’s AI Teammate lets businesses assign an AI agent to a team that can monitor projects, provide status updates, draft documents, and answer questions.Footnote 227 A more sophisticated agent could generate reports and liaise with clients, potentially automating entire roles. In the future, it may not be uncommon to work with AI tools and collaborate with AI coworkers. Agents could handle things like marketing, accounting and finances, liaising with suppliers, filing taxes, and ensuring regulatory compliance. Advanced AI agents could potentially manage a business entirely on their own, allowing the owner to simply enjoy the profits.
Implications
- AI agents could improve accessibility by helping people navigate complex systems
- AI agents could improve government consultations and benefits access by automating participation
- AI agents could significantly improve worker productivity by automating low-level tasks like filing paperwork, managing email inboxes and calendars, and liaising with clients
- AI agents could displace workers, especially those in support, intermediary or middleperson roles, like salespeople, brokers, caseworkers, or assistants. As a result, people could work alongside AI agents, who act as coworkers
- Powerful AI agents might perform more complex tasks, like managing a business
- Certain forms of advertising may become less effective on humans if AI assistants increasingly replace human shoppers online
- Advertisers may shift to target agents instead of people directly
- It could be increasingly difficult for websites to ensure a real human is accessing their services
- As AI agents take more actions and make more decisions, it may become challenging to determine where, when, and why errors made by AI agents happen, and who is responsible
- AI assistants could undertake unauthorized actions without their user’s knowledge – for example, ordering an unwanted item. They could fail to take action when expected. Or they could take the wrong action – for example, placing an order for bandages instead of calling an ambulance
- AI agents could be used to automate crime, fraud or harassment
- People could use “shell agents”, like shell companies, to obfuscate where the ultimate ownership and responsibility for an AI agent lies in an attempt to avoid taxation, sanctions, or otherwise mask harmful or illegal activity. For example, a person selling drugs on the dark web could use a series of intermediary agents to hide their connection with the AI agent that runs the operation
- AI agents could force integration between pieces of software that do not natively play together – for example, an agent could force a reminder app on a Mac to sync with a different reminder app on an Android phone
- This could potentially increase competition in software as developers would no longer be able to artificially limit what their software can integrate with
- AI agents could make it more difficult for developers to control the user experience of their software and prevent malicious use
- New social norms may emerge about when it is considered acceptable to delegate communications to an AI agent, and when people still expect to be interacting directly with another person
Vignette
Anju sits down, takes her first sip of coffee, and opens her work laptop. She’s had this laptop for a month, and it still has that new computer shine. Her previous laptop worked just fine, but her employer decided to upgrade her to a top-of-the-line model with a fancy new processor that can run an AI assistant. The assistant’s avatar appears on the screen and waves at Anju.
“Morning, Artemis,” says Anju. “What’s new?”
“Good morning, Anju. Since you logged off yesterday you received 7 emails. Five were routine, and I responded to them for you.”
A summary of the emails appears on the screen. Where is this file stored? Status update on report. Reschedule client call. Anju smiles to herself. She remembers how much she used to hate spending time on routine emails like this. She feels like an executive, having someone else to handle them for her.
“One email was a newsletter,” continues Artemis. “I will summarize it as part of our afternoon news update. The last was from Jiafei asking for feedback on the product launch plan. I prepared a draft response for you to work from.”
Anju skims the draft and nods. “Great, I’ll get to that in a bit. Can you schedule a meeting with Magnus, Chris, and Anastasia? I had an idea last night for the Xerxes Expo. Also see if Anastasia wants to go for lunch today.”
“Of course,” says Artemis. “If she’s free, should I book a reservation at the usual place?”
Anju looks out of the window and sighs. She loves how much easier and more productive her work life has become. But she also feels strangely guilty about it. She knows that Artemis will be able to help her process her feelings. She makes a mental note to bring up the subject later.
Insight 14: AI in Assessments and Evaluations
AI is disrupting established assessment and evaluation processes, such as job screenings, grant application evaluations, peer review, and grading. Screening processes could see a significant increase in applicants, and new forms of evaluation could emerge that focus less on written work as a measure of competence.
Today
AI-generated applications are flooding institutions, which in turn are using AI to conduct assessments and evaluations. More than half of UK undergraduates reported using AI to help with essays.Footnote 228 Likewise, nearly half of all job seekers today are using AI tools to improve their resumes.Footnote 229 A survey of scientists found that nearly 30% had used generative AI to write their scientific papers.Footnote 230 One in six researchers reported using generative AI to help write their grant applications.Footnote 231 As employers and granting agencies are inundated with high volumes of AI-generated applications, it has become increasingly common for them to use AI tools to screen, recruit and manage employees and potential grantees, despite concerns raised by workers, unions, and employee rights groups.Footnote 232Footnote 233Footnote 234Footnote 235Footnote 236Footnote 237
The widespread use of generative AI to write applications is disrupting assessment and evaluation processes across various domains. The use of AI by applicants is straining existing evaluation processes rooted in the assumption that written communications are an accurate representation of an individual’s competence. AI-generated writing in student work is difficult to detect, leading to questions about how to grade and assess student learning. Academic publishers have expressed concerns about how AI-written articles submitted for peer review are undermining scientific integrity.Footnote 238 Employers are questioning the extent to which a job application can continue to be used as an indicator of an applicant’s competence or fit for a given role. Across a variety of domains, many of the modern methods and tools used to evaluate people and their abilities, such as take-home writing tests, may no longer be useful.
Institutions are divided on how to handle this disruption. Some organizations, such as the journal Science, have banned the submission of AI-generated content, as well as the use of AI to evaluate submissions.Footnote 239 Footnote 240 Other organizations have dismissed bans as impractical and ultimately unenforceable, in large part because current AI-detection technology is not effective enough to be useful.Footnote 241Footnote 242Footnote 243 While granting bodies have issued warnings about the use of AI in grant applications, researchers are seeing the benefits in using AI to assist in writing proposals.Footnote 244 Footnote 245 Some funders are asking if AI could help address inequities related to the noted “snowball effect,” where grant winners tend to have an advantage in winning future funding.Footnote 246 Universities have largely left the problem of what student assessment should look like in the AI era to individual professors. The lack of a coordinated response has prompted some professors to call for a 1-year pause in student assessments to determine a viable path forward.Footnote 247 Some universities have moved back to old-fashioned testing methods – where supervised written exams constitute a majority of the grade. This approach is being questioned by professors who note that those methods were abandoned because they failed to evaluate skills important to modern social and work contexts, such as collaboration, teamwork and communication.Footnote 248
Futures
Evaluations may evolve to address the problems posed by people passing off AI-generated content as their own. The importance of written work as a valid object of assessment could diminish, relative to other factors such as in-person character assessments, group work, supervised technical exams, or professional references. Future application processes may become more rigorous, holistic, in-person, or in real-time. Assessments could also begin to test a person’s ability to use AI effectively and appropriately to support the role in question.
The use of AI for screening, sorting and decision-making may also increase in response to high volumes of job and grant applications. Human decision-makers could play less of a role in assessment, grading, and funding decisions, or play a more specific role at certain points in the evaluation process.Footnote 249 AI could be used strategically at certain points in assessment or evaluation processes to mitigate human bias, for example by quickly surfacing job applicants who possess relevant skills, but may lack formal certifications.Footnote 250Footnote 251 These kinds of uses could also perpetuate or exacerbate existing human biases if they are programmed into AI systems, either inadvertently or by design.
Questions may continue to be raised about AI’s purported objectivity and neutrality in screening processes. Increased calls for monitoring and transparency coupled with new forms of oversight for AI-mediated decisions could become widespread, if AI begins to serve a more central role as a gatekeeper in screening processes.Footnote 252
If AI truly breaks assessment, it could prompt a search for a new path forward. While ranking and ordering processes are central to the functioning of modern institutions, they also tend to perpetuate harmful forms of exclusion.Footnote 253 While the past decades have seen efforts to adjust assessments to account for human bias – blind assessments, affirmative action, or the turn toward diversity and inclusion, for example – discrimination remains a real problem in hiring. In Canada, resumes with English-sounding names are still 35% more likely to receive callbacks than those with Indian or Chinese names.Footnote 254 If widespread use of AI by candidates makes established forms of assessment untenable, new ways of evaluating an individual’s present competence or potential for future success may begin to emerge.
Implications
- Applicants could use AI to create manufactured evidence of competence or expertise such as false websites, academic articles, or testimonials
- Large volumes of applications completed with the assistance of generative AI could undermine employment screening processes that rely heavily upon standardized questions that generative models are good at answering
- Overwhelmed with large volumes of applications that are hard to differentiate, assessors may turn more to transferable skills, tests, in-depth character assessments, in-person interviews, strong personal references and personal networks to identify top applicants
- This could exacerbate already existing forms of nepotism and homogeneity across workplaces and funding streams. This could continue to disadvantage some individuals, such as those that are neurodiverse or who don’t have strong social or professional networks
- Those who are skilled at using AI to generate work while having a knowledge base strong enough to identify errors will be most competitive
- Those who rely heavily on AI could be disadvantaged by changes to evaluation or assessment processes, such as effective AI detection tools or requirements that written content is produced by individualsFootnote 255
- If more applicants use AI and produce more generic-sounding applications that resemble one another, it could become challenging to assess them
- AI screening tools could be specifically programmed to surface more diverse applicants or characteristics of previously overlooked talent, contributing to more inclusive and diverse academic institutions and workplaces
- This may present new opportunities for people with disabilities, neurodivergent people, and English-as-a-second-language speakersFootnote 256
- Increased adoption of AI systems to evaluate students or assess applicants could create legal and ethical challenges that put increased pressure on institutions to demonstrate how AI systems are fair and transparent
- Without the application of a consistent approach to the use of generative AI by students, it may become difficult or impossible to compare assessments or grades between institutions or across jurisdictions, challenging university admission processes, amongst other things
- If university-level grades cease to be an accurate representation of student ability, the true value of a degree may be challenging to determine
Insight 15: AI and neurotechnology
AI-powered neurotechnologies are allowing people to monitor and manipulate the activities of their brain and nervous system. Further developments could bring major advances in health and wellness, but also raise significant privacy, ethical, and social concerns.
Today
Neurotechnology has advanced quickly thanks to AI. Neurotechnology (NT) refers to any technology that provides insight into the activity of the brain or nervous system or affects their functioning. AI’s ability to process vast amounts of data, parse complex neural signals, and find patterns in brain activity has helped to develop both new external devices and internal devices.Footnote 257 For example, generative AI trained on brainwaves can roughly reconstruct text, images, or audio based on what a person is thinking.Footnote 258Footnote 259Footnote 260 AI can also be an intermediary between the human nervous system and connected devices such as neuroprosthetics. For example, AI is making neuroprosthetic legs more robust and useable, offering a more natural, faster, and less error-prone walking gait.Footnote 261
Box #6:
Neurotechnology involves connecting devices directly to the nervous system. These devices can either read electrical signals from the brain or manipulate brain activity through stimulation.
Box #7:
An internal device is surgically implanted directly onto the nervous system. An external device interacts with the nervous system from outside the body. External devices are less invasive but internal devices are more capable.
The most powerful neurotechnology devices are internal, and mostly limited to medical or research purposes. Many of the most impressive and cutting-edge applications of neurotechnology require an internal device. For example, people with severe motor disabilities are using brain-computer interfaces (BCIs) – which can infer from brain activity the desire to change, move, or interact with somethingFootnote 262 – to communicate and control devices.Footnote 263 BCIs are also being used to restore motor function to people with paralysis.Footnote 264 Experimental brain implants that provide stimulation when they detect harmful thought patterns may be helpful in treating depression.Footnote 265 These internal devices are expensive and require surgery, which has so far limited their commercial development and adoption beyond the fields of medicine and research.
Despite some concerns, the consumer market for external neurotechnology devices is growing. Consumer devices are typically headbands or headphones equipped with electroencephalographs (EEG) that can monitor and stimulate the brain by applying a low-level electrical current. As external devices, they are significantly less powerful than their medical counterparts. These products are commonly marketed as wellness, fitness, or educational devices that claim to improve focus, learning, sleep, meditation, or athletic performance.Footnote 266Footnote 267Footnote 268 Concerns include lack of regulation; limited evidence of efficacy, with research suggesting a placebo effect; and possible long-term health and cognitive effects. Discrimination is also an issue, given factors such as income and usability, since 15-30% of BCI users are non-responders who cannot control a BCI accurately.Footnote 269Footnote 270 Privacy can also be a worry: for example, in 2018, some Chinese military and factory workers were given government-sponsored hats and helmets to monitor their brain waves for fatigue and sudden changes in emotional state.Footnote 271Footnote 272 Despite these issues, the market is projected to nearly double from USD $13.47 billion globally in 2023 to USD $25.66 billion by 2028.Footnote 273
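As a rough illustration of what consumer EEG monitoring involves, the sketch below estimates the relative power of the alpha band (8–12 Hz) from a single synthetic signal, one common ingredient in relaxation or fatigue estimates. The synthetic signal, the frequency bands, and the threshold are illustrative assumptions; real devices use multiple channels, artifact removal, and proprietary models.

```python
# Illustrative only: relative alpha-band power from a single synthetic EEG channel.
import numpy as np

FS = 256  # sampling rate in Hz, typical of consumer EEG headbands
t = np.arange(0, 10, 1 / FS)
# Synthetic signal: a 10 Hz (alpha) rhythm plus noise, standing in for real EEG.
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(signal.size, d=1 / FS)

alpha_power = spectrum[(freqs >= 8) & (freqs <= 12)].sum()
relative_alpha = alpha_power / spectrum[(freqs >= 1) & (freqs <= 40)].sum()

# A hypothetical wellness app might map high relative alpha to a "relaxed" label.
print(f"Relative alpha power: {relative_alpha:.2f}")
print("relaxed" if relative_alpha > 0.4 else "alert")
```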
The latest consumer devices have more capabilities, creating new use cases. For example, in 2024 Meta unveiled prototype AR glasses with a “neural wristband” equipped with an electromyograph (EMG) that can interpret motor nerve signals associated with hand gestures, allowing discreet control of AR interfaces through subtle finger movements.Footnote 274 Meta intends to sell the wristband as a standalone product as early as 2025.Footnote 275 The forthcoming version of Empatica’s wearable – which currently notifies caregivers when an individual has had a seizure – aims to use AI to predict a seizure before it happens.Footnote 276
Futures
Neurotechnology could become much more widespread and accessible. As AI and scientific understandings of the brain improve, external consumer devices could become more capable of doing what medical implants can do today. For example, commercially viable, non-invasive thought-to-text products could appear in the next 10 years.Footnote 277 EEGs and EMGs could be integrated into wearables such as smart watches, fitness trackers, and headphones, making NT capabilities related to cognitive or physical performance as common as heart-rate sensors. As implants such as BCIs also become more capable, healthy individuals could decide that the enhancement or entertainment benefits they offer are worth the risks of surgery.Footnote 278
Neurotechnology has wide-ranging potential to revolutionize healthcare. NTs are well-positioned to target many of the leading causes of disability.Footnote 279 Neuroprosthetics and BCIs could enhance quality of life for individuals with physical disabilities. Chronic pain could be treated not by opioids but by neuromodulators, which alter nerve activity through targeted stimulus of specific neurological sites.Footnote 280 Brain monitoring might enable early detection and treatment of neurodegenerative diseases such as dementia. NT devices could treat common and debilitating conditions such as depression and anxiety.Footnote 281 AI-powered mood monitoring could become commonplace, providing continuous mental health support.
AI-powered neurotechnologies could increasingly raise concerns about cognitive rights and privacy. More capable NTs that allow access to – and influence over – people’s thoughts and memories give rise to immense potential risks of harm from misuse. Consumer neurotech devices that use Bluetooth or connect to the cloud could create opportunities for sensitive brain activity data to be collected, analyzed, or sold to third parties, with or without the user’s knowledge. “Brainjacking”, or maliciously taking control of brain implants, could emerge as a risk.Footnote 282 As NT becomes more capable, widespread, and accessible, it is likely to come under greater scrutiny. People may increasingly demand rights to “cognitive liberty” and “mental privacy.”Footnote 283Footnote 284 More jurisdictions may follow Chile in enshrining “neurorights” in their constitution.Footnote 285
Implications
- NT could increasingly be used to read people’s thoughts or anticipate their movements in real-time
- This could lead to increased uptake for uses such as predicting driver or operator fatigue to prevent accidents
- However, it could also be used in more ethically fraught contexts, such as policing, military, or worker surveillance
- NT interfaces could lead to people’s privacy of thought being violated, whether by families and friends, private companies, or governments
- Hacks of neurological data could cause new psychological and physiological harms
- Cognitive augmentation could improve people’s ability to learn at school or perform at work
- If access to augmentation technology is determined by wealth, this could widen social and economic inequities
- Widespread use of cognitive augmentation could lead to more stress and burnout at school and work
- If augmentation devices remain unregulated, it may be challenging to assess their efficacy, potential for discrimination, or safety
- NT could enhance accessibility for people with physical disabilities or cognitive impairments
- New NT systems could emerge for treating conditions such as depression
- NTs embedded in fitness trackers and earbuds could make it easier to predict and prevent brain conditions such as aneurysms, Alzheimer’s, and dementia
- As devices become better able to flag emerging health conditions, health systems that struggle with preventative care could come under increased strain
- Jurisdictions that support research and development in NT could reap economic and scientific benefits from its rapid growth
Insight 16: AI could accelerate the development and deployment of robots
Improving AI and falling costs are allowing “service robots” to proliferate outside of industrial contexts. AI companies are developing humanoid robots with a wide range of cognitive and physical capabilities, bringing change to white- and blue-collar jobs.
Today
Robots have been commonplace in industry for decades. Industrial robots have been used in Canada since the early 1960s, primarily in the auto manufacturing sector.Footnote 286 They excel at repetitive tasks that require precision in highly controlled environments. Because of this, technologists struggled for decades to develop robots fit for the dynamic and chaotic human world outside of industrial contexts, where adaptability is more important than precision.
In the last 15 years, AI-powered robots have proliferated outside industrial settings. Advances in fields like computer vision and machine learning have improved robots’ spatial awareness and ability to identify and respond to changes in their environment. Sales of service robotsFootnote 287 have overtaken those of industrial robots. In 2024, the market for service robots in Canada was valued at US$1.12 billion,Footnote 288 more than eight times the size of the US$137.7 million market for industrial robots.Footnote 289 The logistics sector is driving demand for service robots. For example, Amazon began deploying robots in its warehouses in 2012;Footnote 290 by 2019 it had over 200,000 robots;Footnote 291 and by 2024, it had over 750,000 – one third of Amazon’s workforce.Footnote 292
Box #8:
Industrial robots are used in the production of industrial or agricultural goods. Service robots are used to perform tasks or services for humans. An articulating arm robot could be considered an industrial robot if it is bolting screws to a car, or a service robot if it is making coffee in a café.
Robots are becoming cheaper as a source of labour. Service robots are increasingly competitive with human labour for some businesses. Robot waiters, for example, can be purchased for as low as US$10,000 or rented for as little as US$750 per month.Footnote 293 Somatic’s autonomous bathroom-cleaning robot will work 40 hours per week for US$1000 per month – equivalent to an hourly wage of $5.68, undercutting the U.S. federal minimum wage of US$7.25 per hour.Footnote 294
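The hourly figure cited above can be reproduced with simple arithmetic, assuming roughly 4.4 working weeks per month (the assumption implied by the US$5.68 figure; averaging 52 weeks over 12 months would give about US$5.77 instead):

```python
# Reproducing the cited hourly cost of the bathroom-cleaning robot.
monthly_cost_usd = 1000
hours_per_week = 40
weeks_per_month = 4.4                                # assumption implied by the cited figure
hours_per_month = hours_per_week * weeks_per_month   # 176 hours

hourly_cost = monthly_cost_usd / hours_per_month
print(f"Effective hourly cost: US${hourly_cost:.2f}")   # ~US$5.68
print("US federal minimum wage: US$7.25")
```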
AI companies are partnering with robotics companies to develop general-purpose, humanoid robots. Companies like Tesla,Footnote 295 Nvidia,Footnote 296 and OpenAIFootnote 297 are trying to create a robot that is as flexible as a human worker – able to use tools, learn quickly, and pivot into new tasks and roles. As well as adding large language models (LLMs) to robots to give them more natural conversation skills, companies are creating new types of AI models to help humanoid robots learn from text, videos, and demonstrations. These AI models can learn without hardware in virtual environments with simulated physics.Footnote 298 While most humanoid robots are still in development, Chinese company Unitree is preparing to mass produce their G1 humanoid robot for US$16,000.Footnote 299
Futures
Robots could be more commonplace in everyday life, including public spaces. With improved sensors, AI, and training, robots could become more adaptable and better able to navigate dynamic spaces. With improved reasoning and language skills, they could interact in more natural and human-like ways. For example, teaching a general-purpose robot a new task could be as simple as explaining and demonstrating, as if to another person.
Robots could be more economically competitive with human labour. As robots become more capable and cheaper to manufacture, more businesses may find that deploying a robot is cheaper than employing a human. Jobs that involve repetition and routine but minimal human interaction, such as janitorial work, could be most vulnerable. Sectors that were previously resistant to automation, like services, may see more robots. For example, customer-facing roles where a positive social experience is important to the business, like a hotel receptionist, are more likely to be complemented by robots than replaced by them.Footnote 300
Embodying AI in robots could lead to improvements in other fields of AI. By exploring and interacting with the world through a physical body and learning from the data collected, future AI models may develop a more human-like, intuitive understanding of the world. This could benefit non-embodied types of AI. For example, knowledge gained by an embodied AI through its experience of the world could help video-generation models produce videos with more natural movements, better lighting and reflections, and fewer visual glitches.
Implications
- AI-driven robots could make more forms of service labour vulnerable to automation. Automation may spread beyond cognitive labour once robots that perform tasks combining physical and analytical work become economically competitive
- Large firms or individuals could accumulate robots for rent, similar to how temporary labour agencies employ people. While this could make robots more easily accessible to people and businesses, the majority of the value created would likely be captured by the robots’ owners
- Robots could work in conditions that are unsafe for human labourers, such as extreme heat or emergency situations. They could help to address persistent labour shortages in certain sectors and regions
- Robots developed and deployed to perform essential labour during emergencies could generate business and operational innovation, potentially shrinking human workforces well beyond the crisis period
- Qualities that cannot be replicated by a robot could be highly coveted by employers, like strong social skills and human connection
- People may grow attached to robots, developing friendships or potentially intimate relationships with them
Published Policy Horizons Canada work related to AI:
Acknowledgements
This report synthesizes the thinking, ideas, and analysis of many contributors through research, workshops, and conversations.
Policy Horizons Canada would like to thank its Deputy Minister Steering Committee members and Senior Assistant Deputy Minister, Elisha Ram, for their guidance, support, and insight, as well as all colleagues who contributed to the development of this work.
Policy Horizons Canada would also like to thank the experts who generously shared their time and expertise in support of the research, including those who chose to remain anonymous:
Blair Attard-Frost
Course Instructor, University of Toronto
Stephanie Baker
Researcher, Electronic Systems and IoT Engineering, James Cook University
Michael Beauvais
SJD Candidate, University of Toronto Faculty of Law
Olivier Blais
Co-founder and VP of Decision Science, Moov AI
Ana Brandusescu
PhD Candidate, McGill University
Francesca Campolongo
Director for Digital Transformation and Data, European Commission
Ashley Chisholm
Strategic Policy Advisor, Physician Wellness Medical Culture, Canadian Medical Association
Sherif Elsayed-Ali
Co-Founder, Nexus Climate
Kay Firth-Butterfield
Chief Executive Officer, Good Tech Advisory LLC
Michael Geist
Full Professor, Common Law Section, Faculty of Law, and Canada Research Chair in Internet and e-Commerce Law, University of Ottawa
N. Katherine Hayles
Distinguished Research Professor at the University of California, Los Angeles, and the James B. Duke Professor Emerita from Duke University
Matissa Hollister
Assistant Professor (Teaching), Organizational Behaviour, Desautels Faculty of Management, McGill University
Sun-Ha Hong
Assistant Professor, School of Communication, Simon Fraser University
Kai-Hsin Hung
PhD candidate, HEC Montréal
Ian Scott Kalman
Associate Professor, Fulbright University Vietnam
Andrew J. Kao
Research Fellow, Harvard University
Sayash Kapoor
PhD candidate, Princeton University
Kristin Kozar
Executive Director, Indian Residential School History and Dialogue Centre, University of British Columbia
Nicholas Lane
Professor, Computer Science and Technology, University of Cambridge
Sasha Luccioni
Climate Lead, Hugging Face
Arvind Narayanan
Director/Professor, Centre for Information Technology Policy, Princeton University
David Nielson
Director, Mixed Reality Lab, USC Institute for Creative Technologies
Deval Pandya
Vice President of AI Engineering, Vector Institute
Manish Raghavan
Drew Houston (2005) Career Development Professor and Assistant Professor of Information Technology at the MIT Sloan School of Management
Mark Riedl
Professor/Associate Director, Georgia Tech, School of Interactive Computing / Machine Learning Center
Julie Robillard
Associate Professor of Neurology, University of British Columbia
Stephen Sanford
Managing Director, U.S. Government Accountability Office
Teresa Scassa
Canada Research Chair in Information Law and Policy and Full Professor, Common Law Section, Faculty of Law, University of Ottawa
Mona Sloane
Assistant professor of data science and media studies, University of Virginia
Nick Srnicek
Lecturer in Digital Economy in the Department of Digital Humanities, King’s College London
Luke Stark
Assistant Professor, University of Western Ontario
Yuan Stevens
Academic Associate – Health Research & AI Governance, Centre for Genomics and Policy
Catherine Stinson
Queen’s National Scholar in Philosophical Implications of Artificial Intelligence and Assistant Professor in the Philosophy Department and School of Computing at Queen’s University
Mark Surman
President and Executive Director, Mozilla Foundation
Liana Tang
Second Director, Smart Nation Strategy Office, Ministry of Communications and Information, Singapore
Agnes Venema
Researcher at the ‘Mihai Viteazul’ National Intelligence Academy, Ministry of Defence, Romania
Wendy Wong
Professor and Principal’s Research Chair, University of British Columbia
Agnieszka Wykowska
Senior Researcher Tenured and Principal Investigator, Social cognition in human-robot interaction, Italian Institute of Technology
A special thank you goes to the project team:
John Beasy, Analyst
Martin Berry, Senior Analyst
Leah Desjardins, Analyst
Miriam Havelin, Analyst
Nicole Rigillo, Senior Analyst
Kristel Van der Elst, Director General
Claire Woodside, Manager
And to the following current and former Policy Horizons Canada colleagues: Katherine Antal, Imran Arshad, Marcus Ballinger, Fannie Bigras-Lafrance, Mélissa Chiasson, Steffen Christensen, Suesan Danesh, Pierre-Olivier Desmarchais, Nicole Fournier-Sylvester, Chris Hagerman, Laura Gauvreau, Pascale Louis-Miron, Leona Nikolic, Megan Pickup, Simon Robertson, Julie-Anne Turner, Alexa Van Every, and Andrew Wright (external) for their support on this project.
© His Majesty the King in Right of Canada, 2025
For information regarding reproduction rights: https://horizons.service.canada.ca/en/contact-us/index.shtml
PDF: PH4-210/2025E-PDF
ISBN: 978-0-660-74945-7
Disclaimer
Policy Horizons Canada (Policy Horizons) is the Government of Canada’s centre of excellence in foresight. Our mandate is to empower the Government of Canada with a future-oriented mindset and outlook to strengthen decision making. The content of this document does not necessarily represent the views of the Government of Canada, or participating departments and agencies.