Ella Broadbent, Senior PR Consultant at Petal & Co
AI is quickly becoming one of the most talked-about topics, from news headlines to boardrooms. And with AI trends evolving daily, every business function must adapt to the new world we find ourselves in. But amidst this excitement, we should remember that safe and responsible AI use must be a priority.
With the staggering potential of AI, it is critical that businesses don’t go off the rails. Three quarters of UK mid-sized businesses are already ahead of their 2025 growth targets, with many attributing this to their use of AI. In the North West specifically, more than half of companies are investing in AI, automation or similar technologies to support their expansion. Using AI can be transformative, but we must get it right.
Combatting AI risks: creating guardrails
Using AI strategically means considering the risks it poses and taking measures to mitigate them. Creating guardrails that can be rolled out across your organisation is a brilliant first step that immediately adds an element of control to such a fast-moving technology. Within those guardrails, there are several things businesses might want to consider:
- Communication of AI adoption
While it may excite many, AI adoption can also be perceived as a threat. Employees may not trust AI outputs, be hesitant to change, or even worry that AI will replace their jobs. Meanwhile, clients and customers may be concerned about data privacy, or about the quality of outputs if AI has been used. Critically, organisations must communicate their planned integration of AI to their workforce and customer base clearly and consistently.
Creating an AI positioning statement that clearly outlines why and how the company is adopting AI, and the guardrails in place, is a good starting point. This will help ease people’s minds and also hold leadership accountable for responsible adoption.
- Data privacy
Inputting sensitive data or content into AI models without consideration can jeopardise data privacy. The implications are wide-ranging – from damaging customer or stakeholder trust to potential legal consequences. Data security is particularly important when working in sectors that hold lots of sensitive data from customers or clients, such as healthcare, financial services or even PR.
When inputting any information into AI, always anonymise company or personal names, statistics or financial information. Another less commonly used tactic is changing the privacy settings on the AI tool itself. For instance, turning off ‘improve the model for everyone’ in ChatGPT’s data controls stops it from using your data to train the system. Frequently deleting, not just archiving, all chats also prevents data from being stored on the platform, where it could be exposed to security breaches and cyber attacks.
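To make the anonymisation step concrete, here is a minimal sketch in Python of how text might be scrubbed of obvious identifiers before it is pasted into an AI tool. The names, patterns and placeholder labels are illustrative assumptions only – not a complete or production-ready anonymisation approach.

```python
import re

# Illustrative only: a minimal scrub of obvious identifiers before text is
# shared with an AI tool. The name list and patterns below are hypothetical
# examples, not an exhaustive anonymisation solution.

# Names to redact would come from your own client or company lists.
KNOWN_NAMES = ["Acme Holdings", "Jane Smith"]

PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),   # email addresses
    "[PHONE]": re.compile(r"\+?\d[\d\s-]{8,}\d"),             # long digit runs (phone numbers)
    "[AMOUNT]": re.compile(r"[£$€]\s?\d[\d,]*(?:\.\d+)?"),    # currency amounts
}

def anonymise(text: str) -> str:
    """Replace known names and common identifier patterns with placeholders."""
    for name in KNOWN_NAMES:
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    draft = ("Jane Smith (jane.smith@example.com, +44 7700 900123) confirmed "
             "the Acme Holdings budget of £250,000 for the campaign.")
    print(anonymise(draft))
    # -> [NAME] ([EMAIL], [PHONE]) confirmed the [NAME] budget of [AMOUNT] for the campaign.
```

Even a simple routine like this, run before anything is pasted into a chat window, helps make anonymisation a habit rather than an afterthought.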
- Data bias and quality of outputs
AI can reflect and reinforce existing biases if not carefully monitored. It can also produce ‘hallucinations’ – unreliable or inaccurate data and statements that it simply makes up. This makes it crucial to review the outputs of AI platforms for bias or mistakes before using them. AI should be used as a tool to support, not replace, human thought and creation.
- Foundations
Whatever the industry, and regardless of the tools being adopted, certain foundations are needed. AI is not plug and play. It needs accurate and organised data, strong security measures, and staff who are trained to use AI responsibly. Businesses should consider what data is being fed into AI programmes and, if using more complex programmes, how that data is set up and organised to ensure the most accurate outputs. With the rapid pace of change, it is also important to hold regular team training sessions on AI so that everyone is following the most up-to-date practices.
Microsoft’s Guiding Principles
Microsoft has created a set of Guiding Principles to help organisations adopt AI responsibly. When creating your own company AI guardrails, consider whether they address these key areas:
1. Accountability
AI must be developed and used with clear responsibility. Those creating and deploying systems should be answerable for their decisions and outcomes. Governance frameworks, audits, and mechanisms for redress are essential.
2. Transparency
AI processes should be understandable to users and stakeholders. Be clear about how data is used, how decisions are made, and where limitations lie. Openness builds trust and allows people to make informed choices.
3. Inclusivity
AI should reflect the diversity of the societies it serves. Involving a wide range of voices during design and deployment helps ensure equitable access and reduces bias.
4. Reliability
AI needs to be technically sound and deliver consistent results in real-world settings. Regular testing and monitoring help safeguard against errors and system drift.
5. Privacy
Protecting personal data is central to ethical AI. Build systems that prioritise privacy by default, using approaches such as data minimisation, encryption, and secure storage. People should have clear rights over how their data is collected and shared.
6. Well-being
AI should contribute positively to human welfare. It must avoid causing harm, whether through manipulation, exclusion, or negative effects on mental health.
Consistency is key
When AI is applied in different ways across a team, the result is often uneven work and unclear standards. It also makes it harder to ensure the technology is being used safely and responsibly. By putting guardrails in place and building a reliable AI toolkit, teams can create a foundation for quality and trust. The toolkit shouldn’t be static – reviewing and refining it as new tools and models emerge will keep the approach relevant and effective.