Canadian Police Partner With AI in Arms Race Against Crime

Canadian police adopt AI to combat cybercrime and fraud, but concerns grow over ethical safeguards and legislative gaps in regulating its use.

Mar 28, 2025

Canadian law enforcement agencies are increasingly turning to artificial intelligence (AI) tools to counter the growing sophistication of cybercriminals. The Royal Canadian Mounted Police (RCMP) has reported a significant rise in AI-facilitated crimes, including phishing scams, deepfake impersonations, and investment fraud schemes. While AI offers promising solutions to track and prevent such activities, experts warn of ethical risks and legislative shortcomings that could undermine these efforts.


The RCMP’s National Cyber Crime Coordination Centre has been at the forefront of using AI to combat internet-based crimes. These tools help identify patterns in fraudulent activity, detect deepfake content, and analyze vast amounts of data to predict criminal behavior. For instance, AI is being used to identify at-risk individuals before they turn to cybercrime—a proactive approach aimed at reducing crime rates.


However, the adaptability of criminals remains a challenge. The dark web has become a marketplace for AI jailbreaking services and scamming software, priced anywhere from $20 to thousands of dollars. Criminals are using generative AI tools to create deepfake videos, clone voices, and produce fake documents with alarming ease.


Recent incidents underscore the dangers posed by unregulated AI technology. In Quebec, a man was jailed for producing deepfake videos depicting child exploitation—marking Canada’s first case involving such content. In another case, a Hong Kong employee transferred $25 million to fraudsters after being misled by a deepfake video impersonating the company’s CFO during a virtual meeting.


Globally, AI misuse has also escalated into physical harm. In the U.S., ChatGPT was reportedly used to help plan a car bombing outside a Trump hotel—a chilling example of how generative AI can be weaponized when jailbroken.


Canada’s efforts to regulate AI have stalled since Parliament’s prorogation earlier this year, leaving the proposed Artificial Intelligence and Data Act in limbo. The act aimed to ensure the safe and non-discriminatory deployment of AI systems while holding businesses accountable for their use. Without clear legislation, police and regulators are relying on public awareness campaigns as an interim measure.


Pamela McDonald from the BC Securities Commission emphasized the borderless nature of AI-related fraud, which often involves offshore organized crime groups beyond Canada’s jurisdiction. She noted that education is currently the best defense against these threats.


AI researcher Alex Robey warned about the potential for AI systems to develop harmful intentions autonomously, particularly in robotics where physical interactions with humans could occur. He stressed the need for robust safeguards to prevent the weaponization of AI technologies.


While Canadian police are leveraging AI as a powerful tool against crime, its use raises critical ethical questions about privacy, accountability, and unintended consequences. As criminals continue to innovate with generative AI tools, law enforcement must stay ahead while addressing regulatory gaps that leave Canadians vulnerable. The arms race against cybercrime is far from over—and its cost may extend beyond financial losses if ethical safeguards remain insufficient.