As the Standing Committee on Canadian Heritage continued its recently launched study on AI’s effects on the Canadian cultural sector last week, the discussion centred on the technology’s rapid development and on ensuring that a Canadian perspective is included in global AI models.
Early in the Committee’s Oct. 27 meeting, Wyatt Tessari L’Allié, founder and executive director of Artificial Intelligence Governance & Safety Canada, argued that the current state of “slop” AI content is a passing phase, and that the technology will soon be able to surpass the capabilities of human creators across fields such as culture, business and politics.
“This 2025 study on the impacts of AI on the creative industries is like a December 2019 study on the first coronavirus outbreak in Wuhan, China,” said L’Allié. “You stumbled across an early warning sign, an industry that has happened to be hit hard by an early wave of AI. The big story is what’s coming next, and the biggest impacts will be elsewhere.”
L’Allié said that Canada’s response for the creative industries could be an opportunity to pilot initiatives that could later be adapted to other industries, citing examples such as content labelling and deepfake protections. He also noted that Canada’s response must be global in scope: even if the country created robust legislation, AI development in other nations would continue and still affect Canadians.
Joining the proceedings last week to represent the Canadian Media Producers Association (CMPA) were Alain Strati, SVP, industry, policy and general counsel, and Lisa Broadfoot, VP, industry and business affairs.
In his testimony, Strati (pictured) emphasized that creators’ works must not be used by models without permission, that they must be compensated if their work is used, and that AI models must be transparent about the data they are using for inputs. He also said that the current Copyright Act does not need a new text and data mining exception, which “would simply authorize what is now unlawful: the mass ingestion of copyrighted material without consent, credit or compensation.”
Broadfoot said that the CMPA supports the development of a licensing market where producers and rights holders can negotiate the use of their properties for purposes such as AI model training. She noted that rights holders are already experienced in negotiating sales through distribution agreements, expertise they will be able to apply to the rapidly developing technology.
The CMPA representatives also discussed how producers are already using AI to support their businesses. While these practices are still nascent, Broadfoot said producers have used AI for purposes such as optimizing scenes through pre-visualization, scheduling, and creating more options in areas such as visual effects.
L’Allié later introduced the subject of how AI is becoming less reliant on human data, with some models using their own outputs as further training data.
Given this development, L’Allié argued that licensing content to train AI models is not a viable business model for creators, and that there may be a need to consider changing the artistic business model to something such as a basic income.
In a meeting on Oct. 29, University of Ottawa law professor Michael Geist said that the creative industries must work with AI services to develop transparency measures to appropriately distinguish between AI- and human-created works.
Geist added that copyright infringement in the outputs of AI is not always clear-cut, and that some uses may fall under fair dealing.
“The outputs of AI systems rarely rise to the level of actual infringement, given that the expression may be similar or inspired by a source, but is not a direct copy of the original,” said Geist. “The inputs, such as inclusion in large-language models (LLMs), is currently the subject of numerous lawsuits, but few have, to date, resulted in liability, since those cases suggest that LLM inclusion and the resulting data analysis often qualifies as fair use or fair dealing.”
Geist argued that an overly aggressive copyright response in Canada could create barriers that would make the country uncompetitive in the global market, pushing AI development outside its borders. This could lead to the exclusion of Canadian content from LLMs, resulting in a reduced Canadian presence in AI outputs.
“The answer to Canadian AI cultural relevance is more Canada in the training data. That doesn’t come from more regulation, legal barriers or higher costs. Rather, it requires transparency on data sets, reducing costly barriers to access and the development of public AI systems that encourage the use and availability of Canadian content,” he said.
Nikita Roy, CEO of Newsroom Robots Lab, said the most significant risk to creators in the AI era will be invisibility, as foreign AI companies rewrite the internet’s algorithms.
“If our data, our languages and our voices aren’t part of global models, we lose presence, we fade from the world’s informational maps,” said Roy. “And these aren’t just traditional copyright questions, they are context rights questions: the right to be represented, visible and understood.”
To combat this, Roy proposed three steps that could support Canadian creators: protecting how Canadian stories, languages and knowledge are used and understood within AI systems; investing in creators so that they can shape AI instead of being shaped by it; and creating an ethically governed Canadian data commons to act as a public library for the AI age.
The Committee continues its study this week with meetings today (Nov. 3) and on Wednesday (Nov. 5).
Image courtesy of ParlVU