ETHICRITICAL®

Making AI risk visible in practice.


ETHICRITICAL® is our structured model for assessing AI systems across governance, fairness, security, transparency and other critical areas. It helps organisations identify failures early, before they develop into broader operational or reputational issues. By making risks visible early and explicitly, the assessment supports decisions that can be understood, challenged, and evidenced — and supports the trust that follows.

The ETHICRITICAL® sunburst showing the Security, Governance, Sustainability, Feedback, Fairness, Transparency and Requirements dimensions, with colours indicating the scale from 0 to 5

ETHICRITICAL® in practice

  • 1. How to read the model

    The inner ring represents the decision layer.


    It provides a consolidated view of the system’s position across key areas, typically used to support senior decision-making.


    The outer ring represents the diagnostic layer.


    Each segment highlights where issues can emerge in practice.

  • 2. What the model tests

    Each area is assessed using a set of operational questions:

    • Where could this fail in practice?
    • What would indicate a problem?
    • Would we recognise issues early enough to act?
  • 3. How results are expressed

    Each area is assessed on a 0–5 scale:


    0 — no evidence

    1 — awareness

    2 — locally managed

    3 — centrally managed

    4 — societal consideration

    5 — societal contribution


    The purpose is not to average results, but to identify the weakest areas.
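The weakest-link reading described above can be sketched as a small scoring helper. This is an illustrative sketch only: the dimension names follow the sunburst, but the scores and the helper itself are hypothetical, not part of the ETHICRITICAL® method.

```python
# Hypothetical helper illustrating the weakest-link reading of the 0-5 scale.
# Dimension names follow the sunburst; the scores below are made-up examples.
SCALE = {
    0: "no evidence",
    1: "awareness",
    2: "locally managed",
    3: "centrally managed",
    4: "societal consideration",
    5: "societal contribution",
}

def weakest_areas(scores: dict[str, int]) -> tuple[int, list[str]]:
    """Return the lowest score and every dimension sitting at that score.

    The point is to surface the floor, not the average: a single weak
    dimension drives the overall risk picture.
    """
    floor = min(scores.values())
    return floor, [dim for dim, s in scores.items() if s == floor]

# Example (illustrative scores only):
scores = {
    "Security": 3, "Governance": 2, "Sustainability": 4,
    "Feedback": 2, "Fairness": 3, "Transparency": 5, "Requirements": 4,
}
floor, dims = weakest_areas(scores)
print(f"Weakest level: {floor} ({SCALE[floor]}) in {', '.join(dims)}")
```

Note that the helper deliberately returns all dimensions tied at the floor, since an assessment may have more than one equally weak area to prioritise.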

  • 4. From diagnosis to action

    The assessment produces:

    1. a clear diagnostic baseline
    2. identification of priority risks
    3. a structured improvement plan

    This allows organisations to address isolated failures before they become systemic.

What ETHICRITICAL® is and why it is different


ETHICRITICAL® is a structured AI assurance service helping organisations examine how responsibly their AI and digital systems are conceived, governed and used in practice, in line with emerging regulatory, governance and assurance expectations.


It goes beyond abstract “AI ethics”. Instead, it focuses on practical operational risks such as data bias, governance weaknesses, accountability gaps and unintended consequences for staff, customers or citizens. ETHICRITICAL® translates responsible AI into a structured, measured assessment and improvement roadmap. It helps organisations surface risks, evaluate controls, and prioritise actions to strengthen alignment with internal and external requirements, including regulatory obligations, providing a consistent way to measure the strength and coverage of their AI risk controls.



Who ETHICRITICAL® is for:

ETHICRITICAL® is designed for organisations that are already using, or preparing to deploy, AI in operational or decision-making contexts.

  • It is especially relevant to risk, audit and governance leaders seeking structured AI assurance.
  • It supports digital, data and transformation leaders who want to drive AI adoption responsibly and with confidence.
  • It helps leaders gain clarity, structure and evidence around AI risk and responsibility.
  • It is suited to both public sector bodies and regulated private organisations, where AI use can directly affect stakeholder trust and organisational reputation.
  • It is aimed at organisations that need to demonstrate a robust ethical-by-design approach in a straightforward way.
Image showing a chain breaking at its weakest link

Benefits of ETHICRITICAL®:


ETHICRITICAL® provides a structured way to measure and understand AI risk, rather than relying on assumptions or informal judgement.

  • It helps organisations surface blind spots in governance, data bias, accountability and decision-making.
  • It supports stronger alignment with internal controls and external regulatory expectations without undermining innovation.
  • It gives leaders a clear, prioritised improvement roadmap grounded in evidence, not opinion.
  • It strengthens stakeholder trust and organisational reputation in how AI is deployed and governed.
  • It enables more confident, evidence-based decisions at board, risk and programme level.


If you would like to explore how ETHICRITICAL® could apply to your organisation or AI initiatives, please get in touch to arrange a no-obligation discussion.



It's all about trust!



Interested in adopting an "Ethical by design" approach? We are here to help!


Contact us to discuss how our detailed and comprehensive Ethical Impact Assessment will help reduce your reputational risk and improve trust in your brand.

Book an appointment