The ninth edition of the Singapore FinTech Festival (SFF) wrapped up on 8 November, attracting an impressive 65,000 participants from 134 countries. The global gathering drew over 3,400 attendees from central banks, regulators, and other government agencies to the Regulation Zone, where discussions focused on the policy and regulatory implications of emerging tech in finance, including Artificial Intelligence (AI) solutions, quantum research, cross-border data flows, and digital assets.
Griffith Asia Institute (GAI) members Shawn Hunter and Professor Andreas Chai joined Griffith Business School colleague Professor Ernest Foo, and Elena Maran, Global Head of Financial Services and Responsible AI at Modulos AG, to contribute to these critical dialogues, forming a panel on “Dark Tech: How AAI and Malicious AI Solutions for Fraudsters are Evolving and How We Can Stop Them.” Their insights highlighted the challenges posed by adversarial AI in financial crime and explored strategies to counter such threats.
In the rapidly evolving world of financial technology (fintech), a dark side has emerged, as bad actors increasingly leverage AI to facilitate scams, evade detection, and exploit vulnerabilities. Known as “dark tech,” such subversive uses of technology have been a concern for as long as the internet has existed, with milestones like Napster in the ’90s and the advent of the dark web in the 2000s. Today, AI tools like FraudGPT and WormGPT are being used by cybercriminals to conduct sophisticated fraud, often exploiting weaknesses in banking systems. With these malicious AI solutions, fraudsters can predict the behaviours of institutions, generate convincing fake content, and carry out blackmail schemes. By feeding models false data or modifying malware, they evade traditional security measures, posing a severe reputational threat to the fintech industry.
Adversarial AI (AAI) techniques, such as evasion and poisoning attacks, amplify these threats. Evasion attacks subtly alter malware to bypass detection, while poisoning attacks inject false data to corrupt AI models. Other techniques, like model inversion, expose sensitive data, raising privacy risks. As AI-powered fraud becomes more prevalent, experts emphasise the need for robust defences, like regular algorithm audits and adversarial training to help models resist malicious inputs. This is particularly important for AI in finance, where the consequences of compromise can lead to financial loss, privacy breaches, and even push customers away from formal financial services in favour of cash and shadow banking.
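To make the evasion idea concrete, the toy sketch below shows how a small, deliberate perturbation can push a flagged record past a simple detector. The “fraud detector” here is a hypothetical logistic-regression model with made-up weights and features, not any real system discussed on the panel; the attack simply steps each feature against the gradient of the fraud score (the sign of the corresponding weight), in the spirit of a fast gradient-sign evasion.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical "fraud detector": logistic regression with illustrative
# weights over three made-up features (e.g. amount, velocity, geo-risk).
W = [2.0, 1.5, 1.0]
B = -3.0

def fraud_score(x):
    """Return the model's probability that transaction x is fraudulent."""
    return sigmoid(sum(wi * xi for wi, xi in zip(W, x)) + B)

# A transaction the model correctly flags as fraudulent (score > 0.5).
x = [1.2, 1.0, 0.8]

# Evasion attack: nudge each feature in the direction that lowers the
# fraud score (opposite the sign of its weight) by a small epsilon, so
# the record still looks plausible but slips under the 0.5 threshold.
eps = 0.6
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, W)]

print(f"original score: {fraud_score(x):.3f}")    # flagged
print(f"evasive score:  {fraud_score(x_adv):.3f}")  # evades detection
```

Adversarial training, mentioned above as a defence, works against exactly this pattern: perturbed examples like `x_adv` are generated during training and added back to the data with their true labels, so the model learns a decision boundary that is harder to cross with small perturbations.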
Increased reliance on AI can unintentionally disadvantage marginalised groups, who are more susceptible to fraud due to lower digital literacy. Collaborative efforts between financial institutions, regulators, and developers are critical. Comprehensive frameworks, like the EU AI Act, require transparency and accountability for high-risk AI systems, while cybersecurity regulations and international partnerships are essential to mitigate risks. Only through proactive measures and continuous research can the fintech industry protect itself against these evolving dark tech threats while ensuring financial inclusion remains accessible to all.
Organised by the Monetary Authority of Singapore (MAS), Global Finance and Technology Network (GFTN), and Constellar, in collaboration with The Association of Banks in Singapore (ABS), the SFF has solidified its reputation as a premier global platform since its inception in 2016. This year’s festival, held from 6 to 8 November, proved to be a significant milestone for shaping the future of technology, policy, and finance, underscoring Singapore’s role as a hub for fintech innovation.