Thinking of implementing AI? You already have! In my previous article on AI scaremongering, one of my fundamental points was that the term ‘AI’ itself is unhelpful for understanding what this technology actually is. One quip goes that the difference between Machine Learning and AI is a 300% price markup; in other words, much ‘AI’ is merely marketing sophistry, in the same way that magically sprinkling ‘blockchain’ onto a company’s value proposition was commonplace a few years ago.

AI is creeping in

While I enjoy a healthy dose of such cynicism, AI use cases are now so commonplace that they are factored into the latest chip architectures, such as the new Apple M4, and there is definitely more going on here than Machine Learning or mere marketing spin.

Some people’s perception of AI is coloured by the sudden appearance of this ‘everyday AI’ in their lives, in the form of Large Language Models and other Generative AI, and they see it as a threat to their entire means of making a living. It is common to hear AI capabilities described as ‘frightening’, or to see invocations of ‘Skynet’ and other apocalyptic allusions. As someone with a musical side-hustle, I find that new tools like Udio and Suno can create music of surprising quality, even if it is sometimes highly amusing to see what music inspired purely by data reveals about the choices made in human music-making.

And as the debate over the ethics of AI rages on, and lawmakers scramble to devise a legal framework that can make sense of it all, AI is already here, creeping into the layers of technology we use every day – and night – in subtle and mostly extremely helpful ways.

Risk management

While it may now be commonplace for most professionals to use AI in one form or another to do their jobs, it is vital to understand how this impacts our organisations. Rather like the switch to widespread mobile working, this is a change that is happening faster than some organisations would like from a risk management perspective, and it is happening before organisations have taken the time to develop a proactive strategy towards it.

The first step is, as before, understanding what this term ‘AI’ really covers. When you next buy a laptop, or even a new keyboard or mouse, it will likely have new features optimised for AI applications. New Windows laptops feature keyboards, and now even mice, with a “Copilot” key offering swift one-touch access to Microsoft’s Copilot and the GPT-based AI services behind it. Having this much power within such easy reach requires a plan if the most is to be made of the opportunity and the inherent risks are to be understood and mitigated.

[Image: An AI robot sits at a reception desk, welcoming colleagues to work. Caption: “Greetings, colleague! Did you watch the football last night? I didn’t have to, as I had already predicted the score!”]

AI now creeps into our daily work processes and our leisure time – both of which now likely happen on the exact same device, a point of interest for those who track the accumulation of risk within a large organisation. And although this is beginning to change, most AI applications still use the power of the Cloud rather than on-device computing. Sending requests to an AI service in the Cloud brings with it security, privacy and general Information Governance risks. Just as an organisation that runs on SharePoint does not want an employee running their own Google Drive or Dropbox fiefdom, the use of AI in everyday workflows should be a matter of proactive, informed choice.

Shadow AI

The most likely ‘Shadow IT’ style of unofficial AI use will be Generative AI. Most casual use of ChatGPT will not cause issues immediately, but will your carefully crafted strategic messaging and communications plan eventually be subverted by generic, machine-written copy? Will you lose control over the images used in presentations and accidentally infringe someone else’s IP? Could someone decide to draft new contractual clauses based on what ChatGPT or another service has written?

Then there are the questions of intellectual property that linger over the training data used to create many GenAI applications, and the potential consequences of using GenAI in ways that inadvertently infringe copyright or other IP. One immediate question is whether there is a satisfactory baseline level of knowledge and awareness in organisations around these kinds of AI use cases.

Strategic response

For organisations that make decisions affecting people’s legal, financial, health or other status protected in law, considerable organisational maturity and preparedness are needed if those decisions are to be made with the assistance of AI decision-making tools. In fact, only this week I was with DWG colleagues presenting the findings of a business analysis to a client in the City of London, and it emerged clearly that the question of selecting appropriate AI tools for business transformation is a burning one for corporate Boards everywhere.

Unless there is an overarching strategy and employee consensus on how to manage change, organisations will likely accumulate unquantified risk while losing out on opportunity and value. Get in touch with me to discuss what all this might mean for you and your company’s leadership, employees and partner ecosystem.

Rafael Bloom