AI Ethics Policy

We believe AI should be a force for good — transparent, fair, accountable, and firmly in service of the people who use it.

Last updated: 28 February 2026

At Origentai, building AI solutions isn't just a technical exercise — it carries genuine responsibilities. Every automation we design, every AI system we integrate, and every recommendation we make has the potential to affect real people's lives and livelihoods. This policy sets out the principles that guide how we develop and deploy AI responsibly.

This policy applies to all AI systems, automation workflows, and integrations that Origentai designs, builds, operates, or recommends on behalf of our clients.

How We Build Responsibly

1. Transparency

We are open and honest about when and how AI is being used. We clearly communicate to clients and their customers what is automated, what decisions AI influences, and what the limitations of those systems are. We do not create AI systems designed to deceive users into thinking they are interacting with a human when they are not.

2. Fairness & Non-Discrimination

We actively work to identify and mitigate bias in the AI systems we build. Our solutions must not discriminate against individuals on the basis of age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, or sexual orientation. We regularly review AI outputs for patterns of unfair treatment.
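To make the idea of reviewing outputs for unfair patterns concrete, here is a minimal sketch of one common auditing heuristic. Everything in it is illustrative: the record shape, the field names, and the 0.8 threshold (the "four-fifths" rule of thumb used in some fairness audits) are assumptions, not a description of any specific Origentai system.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute the approval rate per group from records like
    {"group": ..., "approved": bool} (hypothetical shape)."""
    totals, approved = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        if d["approved"]:
            approved[d["group"]] += 1
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=0.8):
    """Flag any group whose approval rate falls below `threshold` times the
    best-performing group's rate (the four-fifths heuristic)."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]
```

A flagged group is a prompt for human investigation, not proof of discrimination; real audits also need statistical significance checks and context.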

3. Human Oversight & Control

AI augments human decision-making; it does not replace human judgement on matters of significance. All AI systems we build include clear mechanisms for human review, override, and intervention. We never design systems that remove meaningful human control over consequential decisions — especially those affecting people's livelihoods, safety, or wellbeing.
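One simple way to implement this kind of review-and-override mechanism is a routing gate that never auto-executes consequential decisions. The sketch below is illustrative only; the field names and the 0.9 confidence floor are assumptions rather than a fixed Origentai design.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str        # who or what the decision affects
    action: str         # proposed automated action
    confidence: float   # model confidence, 0.0-1.0
    consequential: bool # affects livelihood, safety, or wellbeing?

def route(decision: Decision, confidence_floor: float = 0.9) -> str:
    """Queue anything consequential, or below the confidence floor, for
    human review instead of executing it automatically."""
    if decision.consequential or decision.confidence < confidence_floor:
        return "human_review"
    return "auto_execute"
```

The key design choice is that `consequential` overrides confidence entirely: a highly confident system still cannot bypass a human on significant matters.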

4. Privacy & Data Protection

We apply privacy-by-design principles to every system we build. AI solutions are designed to use the minimum personal data necessary to achieve their purpose. We ensure data used to train or operate AI systems is handled lawfully, fairly, and transparently, and that individuals' rights under UK GDPR are preserved. We do not use client or customer data to train third-party AI models without explicit, informed consent.
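Data minimisation can be enforced mechanically with an allow-list applied before any record reaches an AI workflow. This is a sketch under assumptions: the field names describe a hypothetical support-triage workflow, not a real schema.

```python
# Hypothetical minimum field set for a support-triage workflow.
ALLOWED_FIELDS = {"ticket_id", "message", "language"}

def minimise(record: dict, allowed=frozenset(ALLOWED_FIELDS)) -> dict:
    """Keep only the fields the workflow actually needs, so the AI system
    never sees personal data it has no purpose for (privacy by design)."""
    return {k: v for k, v in record.items() if k in allowed}
```

An allow-list fails safely: a new personal-data field added upstream is dropped by default until someone deliberately decides the system needs it.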

5. Accountability

We take responsibility for the AI systems we build. We document the purpose, limitations, and expected behaviours of every system we deploy. Where problems arise — whether technical failures, unexpected outputs, or harmful consequences — we acknowledge them promptly, take corrective action, and communicate transparently with affected parties.

6. Safety & Reliability

We design AI systems to be robust and to fail safely. We test systems thoroughly before deployment and establish monitoring to detect unexpected behaviours in production. We implement appropriate guardrails to prevent AI systems from taking actions that could cause harm, financial loss, or reputational damage to clients or their customers.
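A guardrail layer of this kind can be as simple as a set of named checks that every proposed action must pass before execution, blocking on the first failure. The checks below (a refund cap, a known-customer requirement) are hypothetical examples, not actual Origentai rules.

```python
def guarded_action(action: dict, guardrails: dict) -> dict:
    """Run every guardrail check before an action executes; fail safely by
    blocking on the first failed check and reporting which one failed."""
    for name, check in guardrails.items():
        if not check(action):
            return {"status": "blocked", "reason": name}
    return {"status": "allowed"}

# Illustrative guardrails with assumed limits.
GUARDRAILS = {
    "refund_limit": lambda a: a.get("amount", 0) <= 100,
    "known_customer": lambda a: a.get("customer_id") is not None,
}
```

Returning the name of the failed check supports the monitoring goal above: blocked actions can be logged and reviewed to spot unexpected behaviour in production.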

7. Beneficial Use Only

We will not build AI systems intended to deceive, manipulate, harass, discriminate against, or cause harm to individuals or groups. We will not assist in the creation of AI-generated disinformation, mass surveillance systems, or tools designed to circumvent legal protections. If a client request conflicts with these principles, we will respectfully decline.

8. Explainability

Where AI systems influence decisions that affect people, we aim to make those decisions understandable and explainable in plain language. Clients should always be able to understand, at a high level, why an AI system produced a particular output or recommendation — and what they can do if they disagree with it.
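One lightweight way to deliver plain-language explanations is to report the top contributing factors ("reason codes") alongside each output. The factor names and weights below are invented for illustration; real systems would derive them from the model in use.

```python
def explain(output: str, factors: dict) -> str:
    """Summarise an AI output with its two largest contributing factors,
    so a client can see, at a high level, why it was produced."""
    top = sorted(factors.items(), key=lambda kv: abs(kv[1]), reverse=True)[:2]
    reasons = ", ".join(f"{name} ({weight:+.2f})" for name, weight in top)
    return f"{output}: main factors: {reasons}"
```

Pairing every explanation with a route to challenge the outcome (see "Raising Concerns" below) closes the loop this principle describes.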

9. Environmental Responsibility

AI has a real environmental footprint. We design solutions to be computationally efficient — using appropriate model sizes for the task, avoiding unnecessary processing, and preferring providers committed to renewable energy where possible. We aim to achieve the desired outcome with the minimum environmental impact.

10. Continuous Improvement

The AI landscape evolves rapidly. We commit to reviewing and updating this policy regularly to reflect new understanding, emerging best practices, and changes in regulation. We engage with developments in AI ethics, policy, and technology — and we listen to feedback from clients, customers, and the wider community to improve our approach.

Regulatory Compliance

We operate within the framework of applicable UK and international law, including but not limited to:

  • The UK GDPR and Data Protection Act 2018
  • The EU AI Act (where applicable to our EU-facing clients)
  • The UK Government's AI Safety Framework
  • Relevant sector-specific regulations affecting our clients' industries

We monitor regulatory developments in AI governance and proactively adapt our practices to remain compliant.

Raising Concerns

If you have a concern about an AI system that Origentai has built or operates — whether about fairness, privacy, safety, or any other ethical dimension — we want to hear from you. Please contact us:

Email: [email protected]
Online: Use our contact form

We will acknowledge all concerns within 5 business days and investigate thoroughly and transparently.

Changes to This Policy

We review this AI Ethics Policy at least annually — or sooner if significant developments in AI technology, ethics, or regulation require it. When we make material changes, we will update the "last updated" date at the top of this page.