This AI is very close. But that AI is far away.
Fans of the classic sitcom Father Ted may recognise this as a riff on the scene in which Fr Dougal struggles to understand the relative sizes of cows. Something similar is happening to executives as they try to adopt AI in a hurry – tools like ChatGPT are very close and feel as though they could be introduced fast, whereas the finely-tuned LLMs working their way through governance and risk feel a long way off.
This reflects the paradoxical nature of generative AI and how it is perceived at two levels. Firstly, it is the darling of the hyperscale cloud service providers who have all of the compute power necessary to build and run LLMs and are now pushing them onto commercial organisations via licences and private cloud instances. No other type of business would now countenance building something similar. Alongside this, a tidal wave of very small start-ups is front-ending these solutions, leading to a major imbalance in terms of partner relationship management, due diligence and support – small providers at the front, global giants at the back with clients squeezed in between.
Secondly, the scaled-up solutions that are being introduced seem to be tackling business problems and processes that suddenly look minor. Launched a virtual assistant for customers or agents? Great. Is that going to transform the organisation and unleash much-heralded exponential growth? Unlikely.
The problem is that generative AI (genAI) is so close at hand that executives cannot understand the differences in scale between B2C and B2B solutions. Every day, we hear stories about senior business leaders announcing to their team that their children have created something amazing using a free genAI solution and demanding to know why they can’t come up with something like that.
ChatGPT, Bard, Gemini, and the rest have been launched into the world in fully consumerised ways, meaning few barriers to entry in terms of skills or cost. But translating that into an effective tool – even just for productivity in everyday tasks – without exposing sensitive and commercial data to the world is much more difficult.
During the panel I ran at The London AI Summit, this question of how to apply guardrails without stifling innovation was very much to the fore. Where existing innovation and governance processes were already in place, scaling up has been a quicker and less risky affair. Lloyds Banking Group was quick to recognise the potential of ChatGPT and also to set up its Control Tower approach which considers, prioritises, governs and controls all new AI projects. As a result, it is among the leaders in this sector for adoption.
What I was reminded of most strongly, and what perhaps offers the best way to understand how this will play out, is the explosion of personal devices, such as mobile phones and laptops, during the early 00s. Initially, organisations resisted their use at work, but over time developed their Bring Your Own Device policies so that colleagues could enjoy the familiarity of their preferred tech without over-exposing the business to risks.
Similar policies will need to emerge around genAI to channel the consumer-level enthusiasm which might otherwise lead to rogue and grey adoption around the enterprise. Better to tell the business that it is moving closer to AI than to try to keep it a long distance away.