The adoption curve for AI tools in New Zealand businesses has moved faster than most people’s understanding of what using those tools actually involves. Paste a customer complaint into ChatGPT to draft a response. Upload a spreadsheet of transactions to summarise. Ask a tool to analyse staff performance notes. Each of those actions moves data outside your business and into a system you do not control.
What the Privacy Act 2020 actually covers
The Privacy Act 2020 governs how personal information is collected, stored, used, and disclosed. It applies to any New Zealand business or individual that holds personal information about people, regardless of size. There is no small-business exemption.
Personal information is broader than most business owners assume. It includes names, contact details, and financial information. It also includes any information from which a person can be identified — which in practice means client correspondence, staff records, performance notes, meeting summaries, support tickets, and anything else that references an individual. If you are feeding that kind of content into an AI tool, the Privacy Act is relevant to how you do it.
The Act requires that personal information is collected for a specific purpose, used only for that purpose, kept secure, and disclosed only where appropriate. When you pass personal information to a third-party AI system, you are disclosing it. Whether that disclosure is appropriate depends on your use case, your terms with that vendor, and whether the people the information is about would reasonably expect it to be shared in that way.
Four questions to ask before using any AI tool with business data
Not all AI tools carry the same risk. The questions below help identify where the risk actually sits.
Who owns the data after you submit it? Many consumer-grade AI tools have terms that allow them to use submitted content to train or improve their models. That means data you submit does not necessarily stay with you. Business-grade tools from the same providers often have different terms — but the default tier may not.
Where is the data stored, and under which jurisdiction’s laws? Data stored in the United States is subject to US law, including provisions that allow government access under circumstances that would not apply in New Zealand. For most small businesses this will not be a material risk, but for businesses handling sensitive client information it is worth knowing.
Is the tool designed for business use? There is a meaningful difference between consumer AI products and enterprise or business editions. The business editions typically include data processing agreements, clearer terms around data retention and model training, and contractual commitments around security. Those agreements matter if you are ever asked to demonstrate compliance.
What happens if there is a breach? Under the Privacy Act, you are required to notify the Privacy Commissioner and affected individuals if a privacy breach is likely to cause serious harm. If personal information you submitted to an AI tool is exposed through a vendor breach, your notification obligations apply. Knowing in advance which tools hold which data makes that process manageable. Not knowing makes it chaotic.
The difference between consumer-grade and business-grade tools
Several of the most widely used AI tools have distinct consumer and business tiers. The consumer tier is often free or low-cost. The business tier costs more and comes with different commitments around data handling.
For a business processing customer data, staff records, or commercially sensitive material, the business tier is usually the appropriate choice. It typically means your data is not used for model training, it is stored under a data processing agreement that creates accountability, and you have contractual recourse if something goes wrong.
Using the consumer tier for business data is not necessarily a breach of the Privacy Act, but it is a risk most businesses are taking without having made a deliberate decision.
What to tell staff before they start
The most practical governance step a small business can take is to set a clear, simple policy before widespread AI tool use begins — rather than trying to retrofit one after the fact.
A basic policy does not need to be long. It should cover what kinds of data can be submitted to AI tools and under what conditions. It should name which tools are approved for business use. It should be clear that personal information about clients, customers, or staff should not go into consumer-grade tools without specific authorisation. And it should include a contact point for questions.
Staff who have a clear rule to follow will generally follow it. Staff operating in ambiguity will make their own judgements, and those judgements will be inconsistent.
A simple starting framework
If you want to create a basic data-use policy for AI tools, a working starting point covers four things: what the approved tools are and why they were chosen, what categories of data can and cannot be submitted, who is responsible for reviewing and updating the policy as tools change, and how staff should raise concerns or seek guidance when they are unsure.
This is not a substitute for legal advice in complex situations. But for most small businesses, having any policy at all puts you ahead of where most businesses currently sit — and it creates the foundation for a more mature governance approach as AI use grows.
The businesses that handle this well are not the ones with the most sophisticated tools. They are the ones that made deliberate decisions about what went into those tools and why.
Want to know more? Contact Us.