Do You Need to Tell Customers You're Using AI? Australian Transparency Rules
Your business is using AI. Do your customers need to know? The answer is more nuanced than a simple yes or no. Here's what Australian law and best practice say.
Key Takeaways
- Australia doesn't yet have a specific law requiring AI disclosure in all circumstances, but existing laws create disclosure obligations in many situations.
- The Australian Consumer Law prohibits misleading or deceptive conduct, which can include failing to disclose AI involvement.
- The Privacy Act requires transparency about how personal information is handled, including by AI systems.
- The ACCC has signalled that non-disclosure of AI may constitute misleading conduct in certain contexts.
- Regardless of legal requirements, proactive disclosure is a best practice that builds customer trust.
You've deployed an AI chatbot on your website. It's handling customer enquiries brilliantly, saving your team hours every week, and customers seem happy with the fast responses. But then a customer asks: "Am I talking to a real person?" What are you obliged to tell them?
Or perhaps you're using AI to generate product descriptions, personalise marketing emails, or assess loan applications. Do your customers need to know that AI is involved in these processes?
The short answer is: it depends on the context. The longer answer involves navigating several overlapping legal frameworks and a healthy dose of common sense. Let's break it down.
The Current Legal Landscape
As of 2026, Australia does not have a standalone law that specifically requires businesses to disclose their use of AI in all situations. Unlike the EU's AI Act, which mandates disclosure for certain AI applications, Australia has taken a more principles-based approach.
However, the absence of a specific AI disclosure law does not mean there are no disclosure obligations. Several existing laws create requirements that effectively mandate AI disclosure in many business contexts.
Australian Consumer Law: The Misleading Conduct Test
The Australian Consumer Law (ACL), set out in Schedule 2 of the Competition and Consumer Act 2010, prohibits businesses from engaging in conduct that is misleading or deceptive, or is likely to mislead or deceive. This broad prohibition is the primary legal basis for AI disclosure obligations in Australia.
The key question is: would a reasonable consumer be misled by the absence of an AI disclosure? The answer depends heavily on the context:
High Disclosure Risk: AI Chatbots and Virtual Agents
When customers are interacting in real-time with an AI that could reasonably be mistaken for a human, failing to disclose is likely misleading. If your chatbot uses a human name, a conversational tone, and doesn't identify itself as AI, a customer could reasonably believe they're talking to a person. The ACCC has indicated that this type of non-disclosure is problematic.
Moderate Disclosure Risk: AI-Generated Content
Using AI to generate product descriptions, marketing copy, or content falls into a grey area. If the content is accurate and not misleading in itself, the mere fact that AI generated it may not require disclosure. However, if the content implies human expertise or personal experience (e.g., a product review that reads as if written by a person who used the product), non-disclosure could be problematic.
Lower Disclosure Risk: Backend AI Processing
Using AI for internal processes like inventory management, demand forecasting, or workflow automation typically doesn't require customer-facing disclosure under consumer law. The customer isn't interacting with the AI and isn't being misled about anything. However, privacy obligations may still apply if personal information is involved.
The ACCC's Position on AI Transparency
The Australian Competition and Consumer Commission (ACCC) has become increasingly vocal about AI transparency. While they haven't issued binding AI disclosure rules, their public statements and investigations provide clear signals about where the regulatory wind is blowing.
The ACCC has expressed concern about several AI-related practices:
- AI-generated reviews and testimonials: The ACCC treats fake or AI-generated reviews that appear to come from real customers as misleading conduct, and is already actively enforcing against them.
- AI-generated endorsements: Using AI to create content that appears to be a genuine human endorsement without disclosure is considered deceptive.
- AI chatbots impersonating humans: The ACCC has indicated that AI chatbots should identify themselves as AI, particularly in sales and customer service contexts.
- AI-driven pricing: While dynamic pricing itself isn't illegal, using AI to personalise prices in ways that consumers wouldn't expect or understand raises transparency concerns.
The direction is clear: the ACCC expects businesses to be transparent about AI use, especially when it directly affects customer interactions or purchasing decisions.
Privacy Act Disclosure Requirements
Separate from consumer law, the Privacy Act 1988 (Cth) creates its own disclosure requirements when AI processes personal information. Under APP 1 (open and transparent management) and APP 5 (notification of collection), businesses must inform individuals about how their personal information is collected, used, and disclosed.
If your AI system processes personal information — customer names, contact details, purchase history, browsing behaviour, or any other personal data — your privacy notices need to describe this processing. This means updating your privacy policy and, in many cases, providing specific notices at the point of interaction.
Industry-Specific Rules
Some industries have additional disclosure requirements that go beyond the general consumer law and privacy obligations:
Financial Services
ASIC's regulatory guidance requires financial services providers to be transparent about the use of automated decision-making, including AI, in credit assessments, insurance underwriting, and financial advice. Disclosure obligations are more stringent in this sector.
Healthcare
Healthcare providers using AI for diagnosis, treatment recommendations, or patient triage face heightened disclosure obligations under health records legislation and professional codes of conduct. Patients have a right to know when AI is involved in their care.
Legal Services
Legal practitioners using AI for document review, legal research, or drafting must consider their professional obligations around competence, supervision, and client communication. Most state law societies recommend disclosure of AI use to clients.
Telecommunications
The ACMA has issued guidance on the use of AI in customer service for telcos, including expectations around identifying AI-powered interactions and providing pathways to human agents.
Best Practices for AI Disclosure
Regardless of what the law strictly requires, best practice points overwhelmingly towards transparency. Here's our recommended approach, based on our experience helping Australian businesses deploy AI responsibly:
1. Default to Disclosure
When in doubt, disclose. The downside risk of unnecessary disclosure (customers learn you're using AI — which most will find unremarkable) is far less than the downside risk of non-disclosure (customers feel deceived, regulatory action, reputational damage). Transparency is almost always the lower-risk path.
2. Make Disclosures Clear and Accessible
A disclosure buried in paragraph 47 of your terms of service is not meaningful disclosure. Put AI notices where customers will actually see them:
- At the start of chatbot conversations: a simple message like "You're chatting with our AI assistant. A human agent is available if you'd prefer."
- On content pages: a small note indicating that AI assisted in creating the content.
- In decision notifications: when AI has contributed to a decision affecting the customer (e.g., credit assessment), disclose the AI involvement and provide information about how to request a human review.
- In your privacy policy: a dedicated section on AI processing of personal information.
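As a sketch of the chatbot disclosure point above, a session can simply open with a standing AI notice and route to a human on request. Everything here — the function names, the wording, the keyword check — is illustrative, not a prescribed legal form:

```python
# Illustrative sketch only: the disclosure wording and handoff logic are our
# own example, not a form prescribed by the ACL, the ACCC, or any regulator.

AI_DISCLOSURE = (
    "You're chatting with our AI assistant. "
    "A human agent is available if you'd prefer."
)

def start_chat_session(reply_fn):
    """Start a session whose transcript always opens with the AI disclosure.

    `reply_fn` stands in for whatever generates the AI's answers.
    Returns the transcript plus a respond() function for user messages.
    """
    transcript = [AI_DISCLOSURE]  # disclosure is the first thing shown

    def respond(user_message):
        # Crude human-handoff trigger; a real deployment would use something
        # more robust (button, intent detection) rather than keyword matching.
        if "human" in user_message.lower():
            transcript.append("Connecting you with a team member...")
        else:
            transcript.append(reply_fn(user_message))
        return transcript[-1]

    return transcript, respond

transcript, respond = start_chat_session(lambda msg: f"AI answer to: {msg}")
respond("Where is my order?")
```

The design choice worth noting is that the disclosure is baked into session creation, not left for the AI to volunteer — so it appears before the first exchange, as the best practice above recommends.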
3. Offer Human Alternatives
Wherever possible, give customers the option to interact with a human instead of (or in addition to) AI. This is both good practice and increasingly expected by regulators. A customer who prefers to speak with a person shouldn't have to fight through an AI gatekeeper to do so.
4. Explain What AI Does, Not Just That It Exists
Simply saying "we use AI" isn't particularly helpful. Good disclosure explains what the AI does and what role it plays. For example: "Our AI assistant can answer questions about our products, check order status, and help with returns. For complex issues, it will connect you with a team member." This sets expectations and builds confidence.
5. Be Honest About AI Limitations
Don't oversell your AI's capabilities. If your chatbot can handle common questions but struggles with complex issues, say so. If your AI-generated content is reviewed by humans before publication, mention that. Honesty about limitations actually increases trust rather than diminishing it.
What Customers Actually Think About AI Disclosure
You might be worried that disclosing AI use will put customers off. The research suggests otherwise. Australian consumer surveys consistently show that customers care more about the quality of service they receive than whether it comes from a human or an AI. What they do care about is being deceived.
Customers who discover they were unknowingly interacting with AI feel betrayed. Customers who are told upfront that they're interacting with AI and then receive good service feel positive about the experience. Transparency doesn't hurt your customer relationships — deception does.
There's even evidence that AI disclosure can improve customer perception of your business. Customers interpret proactive disclosure as a sign of a forward-thinking, innovative company that respects their right to know. It becomes a positive brand signal rather than a liability.
A Practical Disclosure Framework
Here's a simple framework you can use to determine your disclosure approach for any AI deployment:
Does the AI interact directly with customers?
If yes, disclosure is strongly recommended and likely legally required under the ACL. Identify the AI as AI at the start of every interaction.
Does the AI make or influence decisions affecting customers?
If yes, disclose the AI involvement and provide information about how to request a human review of the decision. This is critical for financial services, insurance, and employment decisions.
Does the AI process personal information?
If yes, update your privacy policy and collection notices to describe the AI processing. This is a Privacy Act requirement regardless of whether customers interact with the AI directly.
Does the AI generate content that could be attributed to humans?
If yes, consider whether non-disclosure could mislead customers. AI-generated reviews, testimonials, or expert content should be labelled as AI-generated.
Is the AI purely backend with no customer-facing impact?
If the AI handles internal processes without touching customer data or interactions, disclosure is less critical. But even here, mentioning AI capabilities on your website can be a positive differentiator.
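The framework above is essentially a checklist, and for teams auditing several AI deployments it can help to encode it. The sketch below is our own illustration of those five questions — the field names and recommendation wording are assumptions, not regulatory language:

```python
# Illustrative encoding of the disclosure framework above. The dictionary keys
# and recommendation strings are our own shorthand, not statutory terms.

def disclosure_recommendations(deployment):
    """Return recommended disclosure actions for one AI deployment.

    `deployment` maps framework questions to True/False answers; missing
    keys are treated as "no".
    """
    recs = []
    if deployment.get("interacts_with_customers"):
        recs.append("Identify the AI as AI at the start of every interaction")
    if deployment.get("influences_decisions"):
        recs.append("Disclose AI involvement and offer a human review pathway")
    if deployment.get("processes_personal_info"):
        recs.append("Describe the AI processing in your privacy policy "
                    "and collection notices")
    if deployment.get("generates_human_attributable_content"):
        recs.append("Label AI-generated reviews, testimonials and expert content")
    if not recs:
        # Purely backend AI: disclosure is less critical but can still help.
        recs.append("Backend-only: disclosure optional, "
                    "consider it as a differentiator")
    return recs

# Example: a customer-facing chatbot that also handles personal information.
chatbot = {"interacts_with_customers": True, "processes_personal_info": True}
for rec in disclosure_recommendations(chatbot):
    print("-", rec)
```

Running the checklist per deployment, rather than once for the whole business, mirrors how the framework is meant to be applied: each AI system gets its own disclosure decision.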
What's Coming Next: Regulatory Direction
The Australian Government has been developing its AI governance framework, and more prescriptive disclosure requirements are expected. The voluntary AI Ethics Framework that Australia initially adopted is likely to evolve into more binding regulations, particularly for high-risk AI applications.
The likely direction includes mandatory disclosure for AI-driven decisions that significantly affect individuals, clearer labelling requirements for AI-generated content, and potentially an AI register for high-risk applications. Businesses that adopt transparent practices now will have a head start when these requirements become mandatory.
Transparency as a Competitive Advantage
We understand that disclosure can feel risky. You might worry that customers will trust your AI-powered services less if they know AI is involved. But our experience working with Australian businesses tells a different story.
Businesses that are upfront about their AI use consistently report positive customer responses. Customers appreciate the honesty. They value the innovation. And they trust the business more because of the transparency, not less.
The businesses that will face problems are the ones that hide their AI use and get caught. A customer who discovers they were unknowingly assessed by an AI, or that the "expert advice" they received was AI-generated, will feel deceived. That's the outcome you want to avoid.
In the end, AI disclosure isn't just about legal compliance. It's about respect for your customers, trust in your brand, and positioning your business on the right side of a rapidly evolving regulatory landscape. The businesses that embrace transparency now will be the ones that customers and regulators trust tomorrow. Our responsible AI principles guide how we approach this for every client engagement.
Need Help Getting AI Transparency Right?
We help Australian businesses deploy AI with proper disclosure and transparency built in. From chatbot design to privacy policy updates, we make sure your AI meets both legal requirements and customer expectations.