When models turn bad – AI hygiene keeps organisations on the right side of regulation

By Emily Steen, AI Solutions Developer, Thrive.

AI is a powerful tool that can help organisations achieve business goals faster, but it comes with significant risks around security and accuracy.

Whether organisations entirely rewire their operations or use AI to optimise existing processes, they need to address security and accuracy to ensure they remain compliant with regulation and obtain the ROI they seek. An AI product that returns unreliable results impairs decision-making, with consequences compounding over time.

Awareness of these pitfalls is growing. A survey by management consultants McKinsey, for example, found that more than 40% of responding companies are actively working to mitigate inaccuracy in generative AI, and nearly as many are addressing cybersecurity.

The main steps organisations should take can be summed up in the term “AI hygiene”: the discipline of putting the right processes in place so that AI products deliver accurate results without introducing organisational risk. It means designing systems that solve a problem or address a business aim while safeguarding security and privacy.

From a cybersecurity perspective, AI hygiene means all AI applications must align with an organisation’s compliance obligations and security standards. In a sector such as finance, these can be highly complex and overlapping.

As in many areas of financial regulation, audits are essential. Organisations need to keep auditable histories of AI interactions and decisions, and to impose access controls. Internally, those access controls must mirror what a user can already see in systems such as SharePoint or OneDrive.

If the permissions mean an employee cannot see a file when logging in directly, they should not see it via an AI assistant. Poor access controls are a known point of data-protection failure when using AI tools: it is too easy for misconfigured deployments to expose sensitive data such as payroll and HR information. Robust role-based access, logging, and careful configuration close those gaps.
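
As a minimal sketch of that mirroring principle, the Python below filters retrieved documents against a source-system permission store before anything reaches a response. The permission map, user IDs and file names are illustrative stand-ins, not any specific SharePoint or OneDrive API:

    # Hypothetical mirror of source-system permissions: the AI layer
    # re-checks access rights before a document can enter a response.
    SOURCE_PERMISSIONS = {
        "alice": {"q3-forecast.xlsx", "board-minutes.docx"},
        "bob": {"q3-forecast.xlsx"},
    }

    def visible_documents(user_id: str, retrieved: list[str]) -> list[str]:
        """Keep only documents the user could already open by logging in
        to the source system directly."""
        allowed = SOURCE_PERMISSIONS.get(user_id, set())
        return [doc for doc in retrieved if doc in allowed]

    # Bob's assistant never surfaces the board minutes, even if retrieval found them.
    print(visible_documents("bob", ["q3-forecast.xlsx", "board-minutes.docx"]))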

Data quality is the foundation of AI

High levels of accuracy are not a one-off achievement – they depend on expert implementation and ongoing feedback loops that monitor outputs and enforce quality thresholds as usage grows.
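
A feedback loop of this kind can start very simply: reviewers rate a sample of outputs, and an alert fires when accuracy dips below an agreed floor. A minimal sketch, where the ratings and the 90% threshold are placeholder values rather than a recommended standard:

    QUALITY_THRESHOLD = 0.90  # assumed floor; set per use case

    def check_quality(ratings: list[bool]) -> float:
        """Return sample accuracy and flag a breach of the threshold."""
        accuracy = sum(ratings) / len(ratings)
        if accuracy < QUALITY_THRESHOLD:
            print(f"ALERT: accuracy {accuracy:.0%} is below {QUALITY_THRESHOLD:.0%}")
        return accuracy

    # Eight of ten sampled outputs rated correct -> 80%, so the alert fires.
    check_quality([True, True, False, True, True, True, True, False, True, True])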

Yet no matter how good internal processes are, they will not deliver accuracy and performance if an organisation’s data quality is poor. An AI system is only as good as the data it uses.

For structured data (databases and warehouses), established practices already exist to assure timeliness, completeness, accuracy, uniqueness, and consistency. These controls should be applied rigorously.
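
For illustration, three of those dimensions can be checked with ordinary tooling; the toy table, column names and one-year freshness rule below are assumptions, not a prescribed schema:

    import pandas as pd

    df = pd.DataFrame({
        "customer_id": [1, 2, 2],                        # duplicate key on purpose
        "email": ["a@example.com", None, "b@example.com"],
        "updated_at": pd.to_datetime(["2025-06-01", "2025-06-02", "2023-01-01"]),
    })

    completeness = 1 - df["email"].isna().mean()            # share of populated emails
    uniqueness = not df["customer_id"].duplicated().any()   # fails: key 2 repeats
    stale_days = (pd.Timestamp("2025-06-10") - df["updated_at"]).dt.days.max()
    timeliness = stale_days <= 365                          # fails: one row is years old

    print(completeness, uniqueness, timeliness)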

The data-quality challenge more often lies with unstructured data such as documents, PDFs and slides. Many firms are now building on their existing data-quality practices to bring greater consistency to this material. To be effective, this requires clear rules ensuring that only up-to-date data is fed into the AI model.

Version control needs to be in place, along with limits on the scope of data accessed so that, for example, no preliminary drafts of documents are used. The model should work only with the data required for the specific use case, rather than running riot through systems.
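
A minimal sketch of such an ingestion filter, assuming the document store exposes simple version and status metadata (the field names and documents here are hypothetical):

    docs = [
        {"name": "policy.pdf", "version": 1, "status": "final"},
        {"name": "policy.pdf", "version": 2, "status": "final"},
        {"name": "pitch.pptx", "version": 1, "status": "draft"},
    ]

    latest: dict[str, dict] = {}
    for doc in docs:
        if doc["status"] != "final":
            continue  # preliminary drafts never reach the model
        current = latest.get(doc["name"])
        if current is None or doc["version"] > current["version"]:
            latest[doc["name"]] = doc

    print(list(latest.values()))  # only policy.pdf version 2 survives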

Treat AI like a team member

In many ways, AI should be treated like an employee. Models should access only the data and permissions that a comparable team member would have. For example, an analyst agent supporting an investment team in the private equity sector should be limited to the sources analysts use and are authorised to view. Assigning a clear human owner to an agent is also beneficial: the owner maintains a strong feedback loop and flags any adjustments that are needed.
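
One way to make that scoping and ownership concrete is a per-agent profile; the agent, owner and source names below are illustrative assumptions, not a specific product’s configuration:

    from dataclasses import dataclass, field

    @dataclass
    class AgentProfile:
        name: str
        owner: str                                  # accountable human reviewer
        allowed_sources: set[str] = field(default_factory=set)

        def can_read(self, source: str) -> bool:
            return source in self.allowed_sources

    analyst = AgentProfile(
        name="pe-analyst-agent",
        owner="head.of.research@example.com",
        allowed_sources={"deal-room", "market-data-feed"},
    )
    print(analyst.can_read("hr-payroll"))  # False: out of scope, as for a human analyst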

Custom-built AI agents

Rather than a single, catch-all chatbot, organisations are likely to achieve the highest levels of security, accuracy and internal adoption with purpose-built agents. These agents can share reusable components and approved API or MCP (Model Context Protocol) connections, for example to external subscribed services, to avoid duplicated effort while each remains tuned to its distinct task. Encryption and API keys provide security, and these controls can be tightened or relaxed per use case, balancing accessibility with protection.

To enforce data access control, organisations can use SSO (single sign-on) protocols, while access to the agents themselves can be managed through role-based approaches. All of these mechanisms should be fine-tuned to keep the system aligned with organisational policies and audit needs.
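
A minimal sketch of that role-based gate, assuming the SSO provider asserts each user’s roles at login; the agent names and role labels are hypothetical:

    AGENT_ROLES = {
        "pe-analyst-agent": {"analyst", "partner"},
        "hr-assistant": {"hr"},
    }

    def may_invoke(user_roles: set[str], agent: str) -> bool:
        """Allow a call only if the user holds a role approved for the agent."""
        return bool(user_roles & AGENT_ROLES.get(agent, set()))

    print(may_invoke({"analyst"}, "pe-analyst-agent"))  # True
    print(may_invoke({"analyst"}, "hr-assistant"))      # False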

Collaboration to achieve optimum AI performance

Collaboration is essential when building AI products that solve specific business needs, especially in the early development phases. Product teams need access to domain expertise to map end-to-end workflows, data dependencies, and decision logic in detail.

Organisations must accept that security, accuracy and performance require continuing commitment and the consistent application of AI hygiene principles. This is how AI becomes, and remains, productive, governed and aligned to deliver maximum ROI securely.
