
Plugging security gaps with AI without creating additional risk

AI has the potential to transform every aspect of business, from security to productivity. Yet companies’ headlong, unmanaged rush to exploit innovation is creating unknown and poorly understood risks that require urgent oversight, argues Mark Grindey, CEO, Zeus Cloud...

Business Potential

Generative AI (Gen AI) tools are fast becoming a core component of any business’s strategy – and one of the most powerful areas of deployment is IT security. Gen AI has a key role to play in addressing one of the biggest challenges within current IT security models: human error. From misconfiguration to misunderstanding, in a complex, multi-tiered infrastructure that includes a mix of on-premises, public and private cloud deployments and multi-layered networks, mistakes are easy to make.

With hackers constantly looking to exploit such faults, and common attacks targeting known weaknesses, AI is fast becoming a vital tool in the security armoury, providing companies with a second line of defence by seeking out vulnerabilities. The speed with which AI can identify known vulnerabilities and highlight configuration errors is transformational, allowing companies both to plug security gaps and to prioritise areas of investment. It is also being used to highlight sensitive data within documents – such as credit card or passport numbers – that requires protection, and to provide predictive data management, helping businesses to accurately plan for future data volumes.
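To illustrate the kind of sensitive-data flagging described above – a rough sketch rather than any particular vendor’s implementation – the short Python example below scans a piece of text for strings that look like payment card numbers and confirms candidates with the Luhn checksum. The function names and sample text are hypothetical.

import re

# Candidate card numbers: 13 to 16 digits, optionally separated by spaces or hyphens.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum used by payment cards."""
    digits = [int(d) for d in number][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Flag substrings that look like payment card numbers and pass the Luhn check."""
    hits = []
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(match.group())
    return hits

if __name__ == "__main__":
    sample = "Invoice paid with card 4111 1111 1111 1111, ref 12345678."
    print(find_card_numbers(sample))   # ['4111 1111 1111 1111']

In practice such pattern matching is only a first pass; production tools typically combine it with context, classification labels and validation against known data stores before flagging a document for protection.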

Unmanaged Risk

With ever expanding data sources to train the AI, the technology will only become more intuitive and more valuable. However, AI is far from perfect, and organisations’ inability to impose effective control on how and where AI is used is creating problem after problem. Running AI over internal data resources raises a raft of issues, from the quality and cleanliness of the data to the ownership of the resultant AI output. Once a commercially available AI tool, such as Copilot, has viewed a business’s data, it can never forget it. And because it can access sensitive corporate data from sources such as a company’s SharePoint sites, employee OneDrive storage, even Teams chats, commercially sensitive information can be inadvertently lost because those using AI do not understand the risk.

Indeed, research company Gartner has urged caution, stating that: “using Copilot for Microsoft 365 exposes the risks of sensitive data and content exposure internally and externally, because it supports easy, natural-language access to unprotected content. Internal exposure of insufficiently protected sensitive information is a serious and realistic threat.”

Changes are required – firstly to companies’ data management strategies and secondly to the regulatory framework surrounding AI. Any business using AI needs to gain far more clarity regarding data exposure: can data be segregated to protect business interests without undermining the value of using AI, or inadvertently undermining the quality of output by providing insufficiently broad information? Once the tool has been used, who has access to its findings? How can such insight be retained internally to ensure confidentiality?

Regulatory Future

Business leaders across the globe are calling for AI regulation, but as yet there is no consensus as to how that can be achieved or who should be in charge. Is this a government role? If each government takes a different approach, the legal implications and potential costs would become a deterrent to innovation.

Or should the approach used to safeguard the Internet be extended to AI, with key policy and technical models administered by the Internet Corporation for Assigned Names and Numbers (ICANN)? Do we need AI licences that require AI-certified individuals to be in place before a business can run any AI tool across its data? Or simply different licensing models for AI tools that clarify data ownership, for example by running the tool within its own tenant inside a client account to reduce the risk of data leakage? The latter would certainly be a good interim stop gap but, whatever regulatory approach is adopted, it must be led by security engineers: impartial individuals who understand the risks and who are not influenced by potential monetary gain – such as those who have committed to the Open Source model.

There are many options – and changes will likely result in a drop in income for AI providers. But given the explosion in AI usage, it is time to bite the bullet and accept that getting the right solution can be uncomfortable. It is imperative to quickly determine the approach that is best for both the industry and for businesses: one that accelerates innovation while also protecting commercially sensitive information.
