Every organisation has a culture with shared behaviours, values and beliefs. Whether the culture is deliberately and thoughtfully designed or not, it is directed by the people leading the business and influences how every individual within the organisation operates and behaves.

In addition, the organisation and people are subject to a hierarchy of other requirements and controls, such as the law, industry regulations, relevant standards and schemes, and company policies, rules and values. These layers of compliance combined with culture effectively govern human actions and processes.

If you are making judgments about the impact of using AI to complete human tasks and functions on your behalf, one way to grapple with the task is to think of the AI as an employee and ask the question:

“Is the AI within my organisation governed in the same (or similar) way as my employees to mitigate risk and avoid conflicts with culture and compliance?”

Compliance layers for AI

Any AI you adopt needs to be consistent with your culture and be able to demonstrate compliance just like a human can.

Have a strategy – know what ‘good’ looks like!

Good leadership will ask what AI technology can do for the business rather than passively bowing to it because it’s the new shiny thing. It is unlikely that you would design your organisation around your latest recruit, so ask whether the technology will bend to what your customers need and want. Determine whether AI can deliver excellence for your customers, make your organisation more efficient, and still be governable, ethical, and compliant.

Develop a Data and AI strategy and evaluate technology carefully, choosing components within your IT estate that are configurable, adaptable, scalable and not constrained. Be aware that if you operate in a highly regulated industry, standard OOTB (out of the box) tech might open up exposure to risk – you don’t want AI to break the law on your behalf!

Govern by design

All businesses have a set of rules for employees to follow and, similarly, it is good practice to design rules, also known as AI Guardrails, into your AI tech. These checks and balances set the parameters and boundaries for your Generative and other AI systems to work within. Not unlike stair safety rails, these guardrails stop the AI from going off track, preventing it, in certain circumstances, from making bad decisions, misleading the customer or acting in unexpected ways that exceed the designed scope.

For example, in large language models that work with different languages, even the most accurate translation may not necessarily be a legally compliant one. AI Guardrails are vital to ensure consistent compliance and reduce risk. Even better, well-designed AI Guardrails will allow the AI to challenge the rules and suggest new or optimal ways of doing things, but not to break them by default.
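To make this concrete, here is a minimal sketch of an output guardrail in Python. The rule list, function names and fallback message are hypothetical illustrations, not any particular framework; a real deployment would derive its rules from your compliance policies and legal advice.

```python
# Minimal guardrail sketch: validate a model's draft reply before it is sent.
# All rule names, phrases and messages below are hypothetical examples.
from dataclasses import dataclass


@dataclass
class GuardrailResult:
    allowed: bool
    reason: str = ""


# Hypothetical compliance rules, e.g. claims a regulated firm must never make.
BANNED_CLAIMS = ["guaranteed returns", "no risk"]


def check_output(draft: str) -> GuardrailResult:
    """Check a draft reply against the guardrail rules."""
    lowered = draft.lower()
    for phrase in BANNED_CLAIMS:
        if phrase in lowered:
            return GuardrailResult(False, f"contains banned claim: {phrase!r}")
    if not draft.strip():
        return GuardrailResult(False, "empty response")
    return GuardrailResult(True)


def respond(draft: str) -> str:
    """Only release compliant output; block and escalate everything else."""
    result = check_output(draft)
    if result.allowed:
        return draft
    # Out-of-scope or non-compliant output is stopped, not silently sent.
    return "I can't answer that directly; a colleague will follow up."
```

The design point is that the check sits between the model and the customer, so a non-compliant draft never leaves the system by default.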

Be very clear on what you want AI to do, including the scope and the extent of the roles it will perform on your behalf. Where does the technology start and stop and how, as a manager or senior leader, do you know if the AI is doing what was asked of it? If there is a possibility of a decision or judgment being made about a person or group of people, then has an impact assessment been carried out? Is there a robust process of human review and appeal? Can you also explain in plain English to someone how any decision was made and justify its fairness?
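One lightweight way to support that kind of plain-English explanation is to record every automated decision together with its inputs, a human-readable reason, and a human-review flag. The sketch below is illustrative only; the field names and the example decision are invented, not drawn from any specific system.

```python
# Sketch of a reviewable decision record (hypothetical fields and values).
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    subject_id: str           # the person or group affected
    decision: str             # what the AI decided
    reason: str               # plain-English explanation of why
    inputs: dict              # the data the decision was based on
    needs_human_review: bool = False
    made_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


record = DecisionRecord(
    subject_id="C002",
    decision="application referred",
    reason="Income could not be verified from the documents supplied.",
    inputs={"income_verified": False},
    needs_human_review=True,  # anything affecting a person routes to a human
)
print(record)
```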

Human control

Design in very clear feedback loops and forms of control to know for sure that the AI is being effective and working within the parameters you have set. For example, take compliance with automated data processing and GDPR. Can the bot respect that an individual might have chosen to opt out of automated data processing?

Imagine you were the manager of a large AI system deployed to do chasing tasks, or similar work where automated processing was required. What evidence would you have that the AI is respecting a customer opt out and working correctly? In a good scenario you would expect the system to surface a list of exceptions where it could not complete the task because individual customers had opted out and the AI was obeying the regulation. Building in accountability via audit trails and logs is a vital part of AI service design.
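As a sketch of what that might look like, the following Python illustrates an opt-out check that produces an exception list and writes an audit log. The customer records, field names and the chasing task itself are invented for illustration; a real system would read opt-out status from your consent management platform.

```python
# Sketch of an opt-out check with an audit trail (hypothetical data).
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_audit")

customers = [
    {"id": "C001", "opted_out": False},
    {"id": "C002", "opted_out": True},   # must be excluded and reported
    {"id": "C003", "opted_out": False},
]


def run_chasing_task(records):
    """Process each customer, skipping opt-outs and logging every decision."""
    exceptions = []
    for c in records:
        if c["opted_out"]:
            # This record must not be processed automatically.
            exceptions.append(c["id"])
            log.info("SKIPPED %s: customer opted out of automated processing",
                     c["id"])
            continue
        log.info("PROCESSED %s at %s", c["id"],
                 datetime.now(timezone.utc).isoformat())
        # ... the automated chasing action would happen here ...
    return exceptions  # the manager's evidence that opt-outs were respected


print("Exceptions for human review:", run_chasing_task(customers))
```

The exception list and log lines are exactly the evidence a manager would ask for: proof, per customer, that the AI obeyed the opt out rather than a blanket assurance that it did.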

Risk Management

Avoid the temptation of a knee-jerk reaction to the “We need AI now!” message from the board. It may be a better option to act strategically first and for your business to determine what the right blend of human and AI technology looks like.

Next, develop a robust Data Governance and Risk Management plan to determine how it will all be delivered, scaled, and governed together. That said, some of these technologies provide a perfect environment to test and learn quickly and efficiently, so you can be cautious without being complacent.

Ultimately, if you want an organisation that delivers a great product/service and excellence using AI and automation, you have to design it that way. Great customer experiences rarely curate themselves.

And if you find yourself giving your AI bots and widgets virtual praise for all the time and costs they are saving you, or the new revenues they are enabling, it won’t do them any harm.

Damon Harding