Somewhere in your business right now, someone has an AI tool open. They're drafting an email, or summarising a document, or trying to figure out why a report doesn't balance. They probably didn't ask anyone before they started using it. And honestly, they're probably getting some pretty good results too.

That's the hard part. AI works.

It works, it's fast, your team are experimenting, and the business needs them to learn. And often you're handling information that isn't yours. Client financials, employee records, contracts, correspondence. And that AI tool your team just pasted commercial data into? You might not know what happens to it next. Whether it gets used to train the model. Whether it sits on a server somewhere you can't see. Whether the company behind the tool even has a clear answer on that.

If that makes you uncomfortable, good. It should. Not because you've done something wrong, but because you're feeling the tension that every business owner is sitting in right now, whether they've named it or not.

The bind

On one side, you need to move. AI is genuinely changing how work gets done. If your people aren't learning to use these tools, they're going to fall behind, and so is your business. You can feel the pace of it. The pressure to adopt, to experiment, to keep up.

On the other side, you need to be careful. You've got professional obligations. You've got client trust that took years to build. You've got a reputation that's worth more than any efficiency gain.

Speed versus care. Growth versus governance. Letting your people experiment versus protecting what isn't yours to risk.

The truth is that most of us are trying to do both at once, and finding that the balance point keeps shifting.

You build a policy. A few months later the technology has evolved, and the policy has gaps. You put controls in place. Your team finds use cases you hadn't imagined. You choose a tool. The company behind it changes its terms, or its values, or its leadership. The governance you just finalised is already out of date.

That's the reality we're in. It's not a single decision you make and move on from. It's a continuous conversation - with your team, with clients and with yourself - about what's ok and what's not ok. And the ground under that conversation won't stop moving, so we need to keep up.

If you're in Australia, the rules are still being written

Unfortunately, Australia doesn't have dedicated AI legislation. The government's National AI Plan, released in December 2025, confirmed it won't introduce a standalone AI Act for now. Instead, it's relying on existing laws (primarily the Privacy Act) and voluntary guidance. The National AI Centre published six essential practices for responsible AI adoption in October 2025, but they're not mandatory. An AI Safety Institute is being spun up through 2026.

What the OAIC has made clear, though, is that the Privacy Act already applies. If your team puts personal information into an AI tool, the Australian Privacy Principles apply to that data. And here's the part that can catch people off guard - if the AI generates personal information, even if it's wrong and hallucinated, that's still considered personal information under the law. You're responsible for how it's handled.

New transparency obligations around automated decision-making take effect in December 2026. The ACCC found that 83% of Australians believe companies should seek consent before using their data to train AI. The OAIC's own research found 84% of Australians want more control over their information. And KPMG's 2025 trust study found that 70% of Australians think AI regulation is necessary, while only 30% believe current laws are enough.

So the expectations are high, regulation is tightening, and most small businesses are somewhere between experimenting and hoping for the best. Deloitte found that while two-thirds of Australian SMBs are using AI, only about 5% have the strategy, training, and systems to use it well. The government's own AI Adoption Tracker shows a persistent gap between what SMEs say they'll do on responsible AI and what they've actually done.

None of that is to make anyone feel bad. It's hard. Technology moves fast, guidance is still forming, and most of us are running our businesses at the same time as trying to figure this out. But the gap between intent and action is where real risk lives. Reputational risk, legal risk, and the quiet kind of risk where you lose a client's trust and don't find out until it's too late.

What we've landed on so far

If you don't know us, we're a bookkeeping and payroll firm. We handle the financial data of hundreds of Australian businesses, and like others in our profession we're bound by more than good intentions. As registered BAS agents, we operate under the Tax Agent Services Act, the TPB Code of Professional Conduct, and the Australian Privacy Principles. Those obligations exist to protect the people who trust us with their information. So when AI showed up in our workflows, the tension wasn't abstract. It was immediate. I want our team to learn and grow. We need to protect information that isn't ours. And the professional framework we operate under means we can't afford to figure it out as we go.

I think that's true for most businesses right now, even if you're not in a regulated profession. If you're handling client data, employee information, or anything that belongs to someone else, you're carrying the same weight, whether there's a code of conduct attached to it or not.

We started with the basics. (If you want the broader context on what AI means for accounting and bookkeeping firms, our AI in accounting guide covers the landscape, and our piece on whether AI will replace bookkeepers addresses the workforce question directly.) Early conversations with the team about what's ok and what isn't - "Don't paste client data into public AI tools. Think before you input. Consider who owns the information. Redact personal details." The obvious stuff.
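
To make "redact personal details" concrete, here's a minimal sketch of what a pre-paste redaction pass can look like. Everything in it is illustrative - the names, the placeholder tags, and especially the patterns, since a handful of regexes is nowhere near a real PII filter - but it shows the habit: scrub the obvious identifiers before anything leaves your systems.

```python
import re

# Illustrative patterns only - simplified shapes for a few common
# Australian identifiers. A real redaction step needs much more than regex.
PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "[PHONE]": re.compile(r"(?:\+61|0)\d(?:[ -]?\d){8}"),    # AU phone shape
    "[TFN]": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),   # tax file number shape
}

def redact(text: str) -> str:
    """Swap anything matching a known pattern for a placeholder."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    note = "Call Priya on 0412 345 678 or email priya@example.com about TFN 123 456 789."
    print(redact(note))
    # -> Call Priya on [PHONE] or email [EMAIL] about TFN [TFN].
```

Even a crude pass like this changes behaviour, because it turns "think before you input" into a step in the workflow rather than a poster on the wall.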

But that was just the beginning. Over time we've developed governance policies, put technical controls in place, trained the team on responsible usage, and we keep coming back to it as things change. And they keep changing.

Through the process, we've ended up with four questions we ask of any AI tool before it comes into our workflows. They're simple, but they force conversations that are easy to skip and important to have.

Where does the data go?

Not what the marketing says - what the privacy policy says. What happens to the data you give it? Is it used to train the model? OpenAI uses your ChatGPT conversations to train its models by default; you have to opt out. Anthropic, which built Claude, doesn't use your data for training unless you explicitly consent. That's a meaningful difference when you're handling payroll records and bank transactions.

What does this tool do when it doesn't know the answer?

This distinction matters more than it sounds. Some models will tell you they're unsure. Others will give you a confident, polished answer that's completely wrong. When you're in a profession where incorrect advice or a misapplied tax ruling has real consequences, and your name is on the lodgement, you want the tool that says "I'm not sure" rather than one that makes something up. A joint safety evaluation between Anthropic and OpenAI in 2025 demonstrated the importance of stated uncertainty.

What does the company behind this tool do when it gets hard?

This is where you learn something real. In February 2026, Anthropic told the US Pentagon that Claude couldn't be used for autonomous weapons or mass surveillance of American citizens. The Pentagon wanted unrestricted access. Anthropic refused and lost a $200 million contract. OpenAI took the deal. Google took the deal. And while that's indicative of values, Anthropic isn't perfect. They softened their own safety policy in the same month. They're a commercial company with commercial pressures. But when a specific request came to use their technology for something they weren't comfortable with, and there was money on the table, they said no. That means something. You can't evaluate every technical decision an AI company makes. But you can watch what these companies do when it gets hard, and treat it as a signpost to what they value.

Does this choice align with how we want to run our business?

We're a B Corp. It's a commitment we've made, and a framework for thinking about how our decisions affect everyone around us. Workers, clients, community, environment, governance. When we look at AI through that lens, the questions come naturally. Are we using this to replace people, or to give them space for better work? Are we transparent about where AI touches what we deliver? Have we actually built good governance, or are we just talking about it?

Why values matter more than you think right now

Here's what I've come to believe. When technology changes faster than any individual can track, and regulation is still catching up, and best practices are being rewritten every few months - the thing that actually guides your decisions isn't a policy document. It's your values.

I don't mean values in the corporate statement sense. Values as in - what do you actually care about? What won't you compromise on? What would you choose if nobody was watching?

If you've spent years building trust with your clients, that trust should inform which AI tools you put in your team's hands. If you believe in transparency, that belief should show up in how you talk to clients about where AI touches their work. If you care about your people, your AI adoption should make their work better, not just cheaper.

The questions to ask yourself before implementing any AI tool in your business are - Is it ethical? Is it fair? Does it deliver value? What's the cost of delivering that value? They're a great place to start.

Cisco's 2025 benchmark found that 64% of workers worry about sharing sensitive data with AI tools. Nearly half admit they're doing it anyway. That gap doesn't close with a policy alone. It closes when people understand why the policy exists. When they feel the values behind it. When the conversations are ongoing, not box-ticking.

We don't have this figured out

I want to be open about that. We've built policies, chosen our tools, trained the team, put controls in place. And I still wonder whether we're keeping up. Whether the governance we wrote a few months ago is still relevant today. Whether we're asking the right questions or just the ones we know to ask.

While it feels terrifying at times, I think that's how it's going to be for the foreseeable future, for all of us.

The businesses that come through this well won't be the ones that moved fastest or adopted the most tools. They'll be the ones that kept asking the hard questions and didn't stop when the answers got uncomfortable. The ones that treated AI tool selection with the same care they'd give to choosing a banking partner or an auditor - someone you're trusting with things that aren't yours to lose. The ones that act strategically and with care.

Something will go wrong eventually. For everyone. It's the natural and almost inevitable consequence of adopting powerful technology while the guardrails are still being built. The question is whether you did the thinking beforehand. Whether you can look your clients in the eye and explain how you made the decision. Whether, when they ask, you've got an answer you can stand by, hand on heart.