By Chloe Blain and Anna Hartley
The recent AI and Crime Symposium, hosted by Griffith Criminology Institute, brought together leading voices from academia, government and industry to explore the rapidly evolving dynamics of how artificial intelligence (AI) is reshaping crime, harm, and prevention.
The event highlighted the ever-changing influence of generative and predictive AI applications within the crime, justice and governance landscape.

Panel 1: AI and New Crimes
This first panel explored how AI is enabling novel and traditional forms of offending, particularly in the forms of child sexual abuse material (CSAM), financial crime, and cyberattacks.
To begin, Prof Christopher Leckie, University of Melbourne, gave a live demonstration of a generative AI attack-bot, showcasing how voice-driven chatbots can impersonate individuals and flood emergency services with realistic audio and text.
The demonstration emphasised the need for safe-testing environments to evaluate threats and develop countermeasures, and showed how AI could amplify the real threat of swatting attacks on emergency service providers.
Abuse survivor and child abuse prevention advocate Kelly Humphries, from the AiLECS lab at Monash University, discussed the use of AI in relation to both new CSAM risks and the detection of CSAM.
Humphries stressed the need for a focus centred on victim-survivor consent, trauma-informed engagement, and ethical data practices. Her call to action emphasised transparency, accountability and victim-survivor agency: remembering the why and the who that drive research and prevention in this area.
Simon Goodall, from cybersecurity company CFC Response, highlighted the dual role of AI in both enabling and mitigating cybercrime.
Drawing on his experience, he discussed the reactive and proactive services that address AI-generated phishing and Business Email Compromise (BEC), ransomware, and deepfake scams.
Dr Milind Tiwari, researcher and lecturer in financial crime studies at the Australian Graduate School of Policing and Security at Charles Sturt University, presented a systematic review of existing research on generative AI’s role in the financial crime realm.
He outlined how it can facilitate document fraud, social engineering, and money laundering, whilst also offering tools for anomaly detection and behavioural analysis.
Tiwari described GenAI as a double-edged sword which demands interdisciplinary collaboration and adaptive regulation.

Panel 2: AI Governance
The second session of the day tackled systemic risks and AI governance: embedding ethical practices when building AI tools, crafting law and regulation that prevent pitfalls, and stress-testing the transparency and accuracy of AI products.
Professor Didar Zowghi, from CSIRO, argued that algorithmic bias is not just a glitch within the AI system, but really a reflection of structural injustice.
Professor Zowghi’s work on diversity in AI reframes inclusion as a form of harm prevention and power redistribution as an ethical safeguard.
Joanne Kummrow, of the Queensland Office of the Information Commissioner, emphasised the importance of privacy-by-design and security-by-design in government use of AI.
She advocated proactive risk assessments, transparency statements, and strong oversight, especially as agencies increasingly outsource AI functions.

Dr Lina Przhedetsky from the University of Melbourne examined how AI in consumer markets (e.g., RentTech) can create inequality.
Dr Przhedetsky’s research demonstrated how AI systems can score rental applicants, reinforce social disadvantage and create new forms of vulnerability. She called for regulatory reform to address information asymmetries and the algorithmic harms of opaque systems.
Panel 3: AI and Crime Prevention
The final panel, which included experts from industry, law enforcement and academic discussants, considered the prevention of AI-enabled threats and what AI’s role could be in crime prevention strategies.
CyberCX Managing Security Consultant Joel Panther warned of autonomous, persistent malicious agents capable of large-scale disinformation.
He emphasised the need for prevention strategies that counter synthetic identities and trust manipulation through strengthened authentication.

Craig Doran, from investigation management system Comtrac, showcased how AI programs can streamline domestic violence applications for police officers using body-worn camera footage, emphasising human oversight and ethical prompt-setting to reduce paperwork and return officers to the frontline.
Commander Helen Schneider from the Australian Federal Police (AFP) described how AI is transforming child exploitation investigations.
While offenders use AI to generate photorealistic abuse material and to carry out financial ‘sextortion’, the AFP is developing human-led AI tools that assist investigators while maintaining trauma-informed practices.
Criminologist Dr Andrew Childs from Griffith University discussed how AI is reshaping online illicit markets, from algorithmic drug advertising to fake identity services and DarkAI platforms.

Dr Childs asserted the need to understand the infrastructures that enable these markets in order to reduce opportunities for offending.
The AI and Crime Symposium was a call to action, combining the knowledge and experience of industry, government and academic professionals.
In a world where the line between human and machine is increasingly blurred, the symposium reminded us that intention, ethics and empathy must remain at the heart of our response.
Whether through victim-survivor informed frameworks, interdisciplinary research, or inclusive governance, the path forward demands collaboration and clarity of purpose.