BANFF ’25: Protecting creatives in the AI industrial revolution

The festival's final two sessions focused on regulating AI and providing an in-depth look at the AI tools currently available to creatives.

Artificial intelligence (AI) can be a cost-effective tool, but safeguards are needed to protect human creativity, according to speakers on the final day of the Banff World Media Festival.

Raja Khanna, executive chair of Toronto-based prodco and studio operator Dark Slope (pictured centre), said AI is a new industrial revolution, creating an opportunity to lower costs and make the manufacturing of projects more efficient.

“It’s coming at a time when the industry needs that the most, when commissioning dollars [and] budgets are going down,” he said during the Regulations, Ethics, and AI – Oh My! panel, moderated by The Ankler‘s Elaine Low (pictured far right). “If you’re a producer and you’re making shows that have budgets of [$300,000] an hour, and you’re not thinking about how you can deliver that same quality at $200 or $250 [thousand] right now, you’re in trouble,” he said.

Some of the AI tech currently available to producers was outlined in a later presentation by Gavin Purcell and Kevin Pereira, co-founders of L.A.-based podcast AI for Humans. One example was Act One from Runway ML, a tool that lets users upload webcam footage to puppeteer an avatar, which can be used in animation storyboarding. Another was Higgsfield, which can create VFX shots such as explosions in minutes and, they said, can be used in the previsualization stage.

The pair were optimistic about the technology and the rapid pace of development of these AI tools. Purcell said Veo3, for example, was the first time he saw people remarking that they could not tell the AI output apart from the real thing.

With rapid growth comes the need for regulation. Khanna said regulation should come not just at the industry or government level, but also through self-regulation by producers when they decide how and when to use AI. For what he called “human in the loop” jobs, Khanna recommended companies build out their own policies around AI use.

Canada has yet to pass any regulations around the use of AI. The House of Commons was in the midst of reviewing Bill C-27, which would have enshrined rules around consumer privacy and data, but it died on the order paper when former Prime Minister Justin Trudeau prorogued Parliament in January.

Writers Guild of Canada executive director Victoria Shen (pictured centre right) claimed the WGC was the first guild in Canada to negotiate AI protections into its collective agreement. She said the WGC will be keeping a close eye on Canada’s regulatory developments and the work of newly appointed Minister of Artificial Intelligence and Digital Innovation Evan Solomon.

Shen highlighted the launch this week of a manifesto by the six main unions in Quebec’s cultural industry, including AQTIS 514 IATSE and DGC Quebec. The manifesto calls on the Quebec and Canadian governments to defend arts and culture.

“We want to see safeguards specifically around the area of individual rights [and] labour rights, as well as copyright protection for creators,” said Shen of the manifesto’s recommendations, which also cover terms of use and compensation.

But even within the writers’ realm, Shen pointed out nuances in the AI conversation, particularly regarding note-taking. “Is that either going to remove somebody’s job, or is that going to make somebody’s job a bit easier so they can now focus more of their time and energy on the creative piece?” she asked.

The challenge for any regulation is trying to predict where the fast-moving industry will go next, argued Devrin Carlson-Smith, founder and CEO of Park City, Utah-based Extraordinary AI (pictured far left). While there are aspects to cover from a macro perspective, such as issues around deepfakes and IP protection, there are preliminary actions companies and organizations can take right now to be better prepared for AI.

“We can work in areas that are behind the scenes, that are safe, that are not necessarily high risk categories,” said Carlson-Smith. “Production is a great one, operations, finance [and] legal. So many of those areas are very efficiently set up to be able to use AI effectively.”

While panelists agreed that any legislation would set a precedent for the future, Benjamin Field, founder of Cardiff, U.K.-based, AI-focused Deep Fusion Films (pictured centre left), argued the damage has already been done when it comes to IP theft.

“We are absolutely working on situations where we can find restoration for material that is ingested in the future, but anything that [AI] has already been trained on is gone. That money is lost,” said Field. “It’s slightly different for visual media, but any scripts, unfortunately that train has left the station.”

The issue remains an evolving one. While the panel was taking place, news broke that Disney and Universal had filed a lawsuit against AI company Midjourney, claiming copyright infringement. The companies allege the tech lets users create AI-generated images using IP from their works, such as characters from Star Wars or Shrek.

Shen acknowledged that current legal tools lag behind what the technology is capable of, and that any compensation would not amount to much, but said writers simply “want the illegal use of their work to stop.”

Photo by Kristian Bogner