Assurance-first for AI systems

The risks of using AI in production systems are well known (even as we all try to understand them better): hallucinations, intellectual-property exposure, bias, and so on. But FOMO pressures enterprises to get on and start deploying such systems.

How do you square that circle? How can an organisation start building the capabilities it needs without letting caution drag every project to a halt?

Adopting an assurance-first approach to the system lifecycle gives a business the confidence to move fast when speed is needed and to slow down when care is appropriate. In other words, build a balanced set of controls into your lifecycle from day one: steps that let your people make good decisions at each stage, from proof-of-concept to procurement and development to implementation and ongoing maintenance.

The main risks in AI usage are ongoing; they don't disappear once project testing ends. AI systems are dynamic by nature, so ongoing activities will be required to ensure the system continues to produce accurate, unbiased outcomes. Baking these assurance activities in from the start means your organisation lifts its risk-management maturity and capability in lockstep with new functionality.

So, what does this look like? Here are some suggested steps:

1. Start building an inventory of AI usage – make it mandatory for new system/process proposals to be transparent about AI usage, including up and down the supply chain. This should include systems with no AI component, because the supply chain may change over time.

2. Provide an assessment framework that quickly qualifies the risk – i.e. how sensitive is the data being used, and what is the impact if it goes wrong? There will be plenty of AI usages where the risks are quite acceptable, but there will be some where the 'likelihood × impact' demands extra scrutiny.

3. Where needed, build AI testing and review into the project plan – how will you confirm that outcomes are accurate and unbiased?

4. Adapt that testing into a recurring assurance process that can be repeated at regular intervals.

5. Embed that retesting into your ongoing review processes.
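To make steps 1 and 2 concrete, the inventory and the 'likelihood × impact' triage can be sketched as a small data structure. This is a minimal illustration, not a standard: the field names, the 1–5 scales, and the scrutiny threshold are all assumptions an organisation would define for itself.

```python
from dataclasses import dataclass

# Illustrative 1-5 scales; a real framework would define these per organisation.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}
IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

@dataclass
class AIUsageRecord:
    """One entry in the AI-usage inventory (step 1)."""
    system: str
    uses_ai: bool         # record non-AI systems too: the supply chain may change
    data_sensitivity: str # e.g. "public", "internal", "personal"
    likelihood: str       # key into LIKELIHOOD
    impact: str           # key into IMPACT

    def risk_score(self) -> int:
        """Step 2: the 'likelihood x impact' qualification."""
        return LIKELIHOOD[self.likelihood] * IMPACT[self.impact]

    def needs_extra_scrutiny(self, threshold: int = 12) -> bool:
        """Flag the record for AI testing and review (step 3).

        The threshold of 12 is an arbitrary illustrative cut-off.
        """
        return self.uses_ai and self.risk_score() >= threshold

# Example: a customer-facing chatbot handling personal data.
chatbot = AIUsageRecord("customer chatbot", True, "personal", "likely", "major")
print(chatbot.risk_score(), chatbot.needs_extra_scrutiny())  # 16 True
```

A register of such records gives the recurring review in steps 4 and 5 something concrete to re-score: when a vendor adds an AI component or the data sensitivity changes, the record is updated and the same triage runs again.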

The ultimate question is: how can our Board have ongoing confidence that the benefits of each AI tool are being appropriately weighed against its risks? An assurance-first approach provides an answer.
