Safer Internet Day 2026 takes place on Tuesday, 10 February, and this year’s theme hits close to home for every UK business: “Smart tech, safe choices – Exploring the safe and responsible use of AI.”
If you’re running a business right now, AI isn’t some futuristic concept. It’s in your email systems filtering spam, in chatbots on your website, and embedded in your Microsoft 365 subscription whether you’ve activated it or not. Your team is probably using ChatGPT to draft proposals or Copilot to analyse data.
The question isn’t whether AI affects your business. It’s whether you’re using it in a way that protects your business and your partners.
AI Is Already in Your Business
You don’t need to be using cutting-edge AI tools to have AI in your business. It’s already there.
Your email security system uses AI to detect phishing. Your accounting software uses machine learning to flag unusual transactions. Your website analytics platform uses AI to identify patterns. Microsoft 365 has AI-powered features in Word, Excel, and Outlook, whether you’ve realised it or not.
The newer development is that your team has almost certainly started using generative AI tools directly – ChatGPT, Claude, Copilot, Gemini. These aren’t niche tools anymore. They’re becoming as common as Google search was fifteen years ago.
That’s not necessarily a problem, but it does require some thought about how it’s being used.
Where the Real Risks Are
Most businesses worry about sci-fi scenarios. The actual risks are more mundane and much more likely.
Data leakage through AI tools. Your team member copies a client proposal into ChatGPT to improve the writing. That proposal might now be in OpenAI’s training data. Someone pastes customer email addresses into an AI tool. Those addresses have just left your control.
Overreliance on AI accuracy. AI tools are impressive, but they’re not always right. They generate confident-sounding answers that can be completely wrong. If your team is using AI to draft contracts or create financial reports without proper verification, you’re creating liability.
Unvetted AI decisions affecting people. Using AI to screen job applications or assess employee performance sounds efficient. But if the AI has bias in its training data, you’re potentially breaking discrimination laws without knowing it.
Shadow IT through AI subscriptions. Your team might be paying for AI tools on personal credit cards, using them for work, and creating data governance nightmares you don’t know about. It’s the modern version of bring-your-own-device.
These aren’t theoretical. We’re seeing businesses deal with these issues right now.
What Safe AI Use Looks Like
Safe AI use in business isn’t about avoiding AI completely – AI tools can make your business more efficient when used properly. It’s about having clear guardrails.
Know what’s being used. Have a conversation with your team about which AI tools they’re using and for what purposes. You can’t manage what you don’t know exists.
Establish clear data boundaries. Some information should never go into an AI tool: customer data, financial records, confidential partner information, anything covered by GDPR or client contracts. Make this explicit – a short sketch after these guardrails shows what a basic check can look like.
Verify AI outputs. Anything generated by AI needs human review before it goes out the door. AI is a drafting tool, not a final answer. This should be policy, not just good practice.
Use business-grade AI tools. Consumer versions often use your inputs for training. Business versions (Microsoft 365 Copilot, Claude for Work) typically don’t. The cost difference matters less than the data protection difference.
Check your contracts and insurance. Does your professional indemnity insurance cover AI-assisted work? Do your client contracts address AI use? Your solicitor should be answering these questions, because the legal landscape is still developing.
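None of this needs enterprise software to get started. As a purely illustrative sketch – the patterns and function below are our own example, not any particular product – here’s the kind of lightweight check a team could run over a draft before pasting it anywhere:

```python
import re

# Illustrative "never share" patterns. Extend with whatever your own
# data boundary covers (account numbers, NI numbers, client names...).
BLOCKED_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK phone number": re.compile(r"(?:\+44\s?|\b0)\d{4}\s?\d{6}\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_before_sharing(text: str) -> list[str]:
    """Return warnings if the text looks like it contains personal or
    financial data; an empty list means no obvious red flags."""
    return [
        f"Possible {label} found - remove it before pasting into an AI tool"
        for label, pattern in BLOCKED_PATTERNS.items()
        if pattern.search(text)
    ]

# Example: a draft that should be caught before it leaves your control
for warning in check_before_sharing(
    "The client's contact is jane@example.co.uk, card 4111 1111 1111 1111"
):
    print(warning)
```

A pattern check like this won’t catch everything, and it’s no substitute for the conversation, but it turns an abstract policy into something your team can see working.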
The GDPR and Email Security Angle
UK businesses are still bound by GDPR (retained in UK law as the UK GDPR), and AI creates specific considerations.
If you’re using AI to process personal data (customer emails, names, purchase history all count), you need a lawful basis. You also need to be able to explain what you’re doing with that data. “We fed it into ChatGPT” isn’t going to satisfy the ICO if there’s a complaint.
You’re also responsible for AI decisions that affect people. If an AI tool rejects a job applicant or flags a customer as high-risk, you need to be able to explain why and demonstrate it wasn’t discriminatory. “The AI decided” isn’t a defence.
The good news is that addressing GDPR compliance for AI isn’t drastically different from general GDPR compliance. It’s about data minimization, purpose limitation, and accountability. Most businesses we work with already have decent GDPR foundations. It’s just about extending those same principles to AI tools.
On email security specifically: AI has changed the threat landscape significantly. Phishing emails used to be easy to spot with poor grammar and generic greetings. AI has made them dramatically more sophisticated. Attackers can now generate perfectly written, personalized emails that reference real people in your organization and real projects you’re working on.
For your business, this means traditional security awareness training needs updating. “Look for spelling mistakes” isn’t useful advice anymore. “Verify unusual requests through a second channel” still is.
We’re also seeing more sophisticated business email compromise attempts where attackers use AI to impersonate executives convincingly. If you’re not already using email authentication (SPF, DKIM, DMARC), that’s becoming more critical, not less.
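If you want a quick sense of where your own domain stands, the sketch below uses the open-source dnspython library to look up the SPF and DMARC records that email authentication depends on (example.com is a placeholder for your own domain):

```python
import dns.resolver  # pip install dnspython

def get_txt(name: str) -> list[str]:
    """Fetch the TXT records for a DNS name; empty list if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(record.strings).decode() for record in answers]

def check_email_auth(domain: str) -> None:
    # SPF lives in a TXT record on the domain itself...
    spf = [r for r in get_txt(domain) if r.startswith("v=spf1")]
    # ...and DMARC in a TXT record on the _dmarc subdomain.
    dmarc = [r for r in get_txt(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    print(f"SPF:   {'found' if spf else 'missing'}")
    print(f"DMARC: {'found' if dmarc else 'missing'}")
    # DKIM sits at <selector>._domainkey.<domain>; the selector varies
    # by mail provider, so check your provider's documentation for yours.

check_email_auth("example.com")  # swap in your own domain
```

Missing records don’t mean you’ve been breached – they mean receiving servers have no reliable way to tell your genuine email from forgeries claiming to be you. Your IT provider can set all three up; it’s usually an afternoon’s work, not a project.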
Getting Your Team on Board
Sending a long policy document about AI and expecting people to read it doesn’t work.
What does work: a 15-minute team conversation where you acknowledge that AI tools are useful, explain the few specific things that shouldn’t go into them, and ask if anyone has questions. Make it a discussion, not a directive.
Give your team clear examples. “Don’t put client proposals into ChatGPT, but using it to draft blog posts is fine.” “Don’t upload our customer database to any AI tool, but using Copilot to analyze anonymized sales trends is okay.”
The goal isn’t to shut down AI use. It’s to channel it in safe directions. Your team will follow guidelines that make sense and that they understand. They’ll work around guidelines that feel arbitrary.
Is This Urgent?
Probably not urgent, but definitely worth addressing soon.
If your team is using AI tools for work (and statistically, they are), having a conversation about safe use this month is smarter than having it next year after something’s gone wrong.
If you’re in a regulated industry or handle sensitive data, this moves higher up the priority list. If you’re a small business with basic IT needs, it’s still worth a conversation but it’s not a crisis.
The risk isn’t that AI is dangerous. The risk is that it’s useful enough that people will use it regardless of whether you’ve thought through the implications.