AUSTRAC Artificial Intelligence Transparency Statement

Introduction

AUSTRAC performs a dual role as Australia’s anti-money laundering and counter-terrorism financing (AML/CTF) regulator and financial intelligence unit. This dual role helps to build resilience in the financial system, and enables AUSTRAC to use financial intelligence and regulation to disrupt money laundering, terrorism financing and other serious crime.

As Australia’s AML/CTF regulator, we regulate more than 17,000 businesses that provide financial, gambling, bullion, remittance and digital currency exchange services. We ensure regulated businesses comply with their obligations to have systems and controls in place to manage their risks and protect them and the community from criminal abuse.

As a financial intelligence unit, we collect and analyse financial reports and information to generate financial intelligence. This financial intelligence contributes to law enforcement and national security investigations. Our specialist analysts generate targeted, actionable intelligence and work closely with industry, government and law enforcement partners to deliver tangible investigative and operational outcomes.

New and emerging technologies are changing the way services are delivered. Criminals are becoming increasingly sophisticated and continue to develop new ways to exploit vulnerabilities in the Australian financial system. To meet this challenge, we will continue to evolve how we work with industry and our partners, and adopt technologies such as Artificial Intelligence (AI) to support our specialist regulatory and intelligence capabilities.

AUSTRAC aims to be transparent about the way we use AI in our agency, and how we intend to approach adoption in the future. Where we have deployed AI, we comply with whole-of-government guidelines to ensure we meet the highest standards of security, privacy and regulatory compliance. Our approach, now and in the future, is to leverage new techniques that improve outcomes while ensuring humans remain a key part of the decision-making process. This statement will be reviewed annually and updated when:

  • we deploy or engage with new AI technologies across our agency
  • we make a significant change to our approach to AI, or
  • new factors impact this statement.

AUSTRAC’s intelligence functions are part of the national intelligence community, as defined under section 4 of the Office of National Intelligence Act 2018. The Policy for Responsible Use of AI in Government specifically exempts AUSTRAC’s intelligence functions from its requirements, including this transparency statement. We may voluntarily adopt elements of the policy with respect to our intelligence functions, where we can do so without compromising national security capabilities or interests.

How we use AI

AUSTRAC adopts the Digital Transformation Agency’s (DTA) definition of an AI system:

A machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

AUSTRAC uses generative AI tools to undertake research and discovery, and for workplace productivity purposes. This aligns with the DTA’s Interim guidance on government use of public generative AI tools. The privacy controls that apply to this use are described in the data privacy and security section of this statement.

AUSTRAC has not yet deployed any AI that directly interacts with the public, or that makes decisions or takes administrative action without human intervention. This includes automated decision-making and automated communication with our stakeholders.

AUSTRAC is considering the future adoption of AI for Workplace Productivity and Analytics for Insights. These usage patterns are currently being piloted and will be evaluated against the Policy for Responsible Use of AI in Government and AUSTRAC’s internal AI Policy.

If adopted, these AI methods will only be used with human intervention, monitoring and decision making. AUSTRAC also uses statistical rules-based modelling and machine learning to:

  • identify indicators of money laundering, terrorism financing, or financially enabled crime
  • support our financial intelligence analysts to generate actionable financial intelligence for our partners. 

These methods do not meet the definition of an AI system because of their rules-based nature: the rules and inputs to these techniques are determined by humans.
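As a purely illustrative sketch of that distinction (the threshold, rule and function below are hypothetical and do not describe AUSTRAC’s actual indicators or systems), a rules-based check encodes logic chosen by human analysts, whereas an AI system infers how to generate its outputs from the input it receives:

```python
# Illustrative only: a hypothetical rules-based indicator check.
# The threshold and the rule are fixed by human analysts, so the system
# applies pre-defined logic rather than inferring outputs from data.

STRUCTURING_THRESHOLD = 10_000  # hypothetical human-set threshold (AUD)


def flag_possible_structuring(deposits: list[float]) -> bool:
    """Flag repeated cash deposits just under the human-set threshold."""
    near_threshold = [
        amount for amount in deposits
        if 0.9 * STRUCTURING_THRESHOLD <= amount < STRUCTURING_THRESHOLD
    ]
    return len(near_threshold) >= 3  # rule chosen by a human, not learned


if __name__ == "__main__":
    print(flag_possible_structuring([9500.0, 9800.0, 9900.0, 120.0]))  # True
```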

Over the next 2 years, AUSTRAC will implement the AML/CTF Reform priorities and deliver on our data and digital transformation priorities. To support this work, we are actively looking at opportunities to leverage generative AI tools to uplift our regulatory, intelligence and corporate operations.

Usage patterns and domains

The Policy for Responsible Use of AI in Government requires AUSTRAC to state the usage pattern(s) and domain(s) associated with our use of AI. For more information, refer to the DTA’s classification system for AI use.

AUSTRAC’s current AI usage patterns are:  

  • Workplace Productivity: Used to support non-sensitive research through the collation of publicly available information.

AUSTRAC’s current AI domains are: 

  • Corporate and Enabling: Supports corporate functions to improve operational efficiency and productivity.

AUSTRAC is currently developing and piloting uses of AI which will in the future:

  • Expand usage patterns to include: 
    • Analytics for Insights to identify, produce and understand insights within structured or unstructured materials
    • Workplace Productivity to include virtual assistants.
  • Expand domains to include:
    • Service Delivery to provide tailored and responsive services to external and internal stakeholders, by supporting the staff who deliver these services.
    • Law Enforcement, Intelligence and Security to support law enforcement and intelligence agencies through AI-enabled analysis of data from various sources to aid intelligence gathering.

Data privacy and security

Protecting the privacy and security of sensitive and classified information, and the data of individuals, is of paramount importance to us. We ensure that data is handled in compliance with applicable legislation and regulations, including the Privacy Act 1988 (Cth), the Protective Security Policy Framework and other relevant data protection laws. Personal information is only collected, used and shared where necessary, in line with our privacy policies.

In accordance with internal guidelines and policies, staff who use publicly available generative AI tools for research purposes will not include or reveal any classified, personal or otherwise sensitive information. All activities will align with legislation and policies relating to information and data.

AI safety and governance

AUSTRAC’s AI Accountable Official is the General Manager, Data (Chief Data and Analytics Officer). AUSTRAC is committed to implementing AI systems that align with evolving legislation, ethical standards and public expectations. As we deploy AI into our regulatory, intelligence and corporate operational capabilities, we will follow whole-of-government guidelines to ensure any use of AI is guided by the following key principles:

  • Enable: AUSTRAC will engage with AI in safe, ethical and responsible ways, including establishing clear accountabilities for AI adoption, use and explainability.
  • Engage responsibly: AUSTRAC will ensure AI use is ethical, responsible, transparent and explainable, and that proportionate and targeted risk mitigations are applied for AI use cases.
  • Evolve: AUSTRAC will apply flexibility and adaptability to AI uses to accommodate technological advances, including ongoing review and evaluation of existing AI use cases and exploration of new approaches. 

To further support these efforts, AUSTRAC has established governance practices, policies and guidance to ensure the ethical, transparent, and secure implementation and use of AI. This includes:

  • AUSTRAC’s AI Policy and guidance to staff on the use of information technology systems and AI, including public generative AI tools.
  • Availability of AI training for all staff to enable appropriate use of AI systems.

As the use of AI within AUSTRAC expands, we will consider additional governance processes and practices to ensure the appropriate, ethical and safe use of AI.

Contact information

We will regularly review and update our AI policies and practices as part of our ongoing commitment to the responsible use of AI. This includes staying informed about new developments in AI technology, ethics, and regulatory requirements. We will strive to improve the transparency, fairness, and effectiveness of our use of AI systems through continuous learning and adaptation.

This statement was last updated on 28 February 2025 and will be reviewed annually, when we make a significant change to our approach to AI, or when new factors impact this statement.

If you have questions or would like more information about how AUSTRAC uses AI, contact us.

The content on this website is general and is not legal advice. Before you make a decision or take a particular action based on the content on this website, you should check its accuracy, completeness, currency and relevance for your purposes. You may wish to seek independent professional advice.
