How can IT teams safely adopt AI?

By Mark Molyneux, EMEA CTO at Cohesity.

Chatbots like ChatGPT have inspired IT vendors to upgrade their own systems with similar capabilities. But how do you use these powerful AI modules without opening backdoors that allow sensitive information to end up in external learning machines?

Generative AI chatbots such as ChatGPT translate complex technical data into human-readable language in seconds, so it's no surprise that CEOs see great opportunity in the technology. According to a Fortune/Deloitte survey of 143 CEOs worldwide, 79% of respondents want to see investment in generative AI because they hope for efficiency gains. Sooner or later, every IT team will have to use AI tools like ChatGPT effectively, because company management will issue corresponding guidelines.

IT teams are faced with a complex task. The innovation cycles in AI are extremely short, which means the jungle of new approaches, concepts and solutions is getting denser every day. Almost every major IT hardware and software vendor has added AI elements to its products, and their number is growing.

However, if IT teams adopt AI too ambitiously and quickly, they could unknowingly open up new security gaps. Endor Labs examined the open-source packages in the AI technology stack and found that more than half of the referenced projects contain vulnerabilities in their manifest files. Just five months after its release, ChatGPT's API was already being used by more than 900 npm and PyPI packages across a wide range of problem domains.
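Teams can screen such manifests themselves before pulling AI-related packages into their stack. Below is a minimal sketch that checks pinned PyPI dependencies against the public OSV.dev vulnerability API; the endpoint and response fields follow the OSV documentation, while the file layout is an assumption.

```python
import requests  # pip install requests

OSV_URL = "https://api.osv.dev/v1/query"  # public OSV.dev vulnerability API

def audit_requirements(path: str = "requirements.txt") -> None:
    """Check each pinned dependency against the OSV vulnerability database."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue  # only audit exact pins; skip comments and ranges
            name, version = line.split("==", 1)
            resp = requests.post(
                OSV_URL,
                json={"package": {"name": name, "ecosystem": "PyPI"},
                      "version": version},
                timeout=10,
            )
            vulns = resp.json().get("vulns", [])
            if vulns:
                ids = ", ".join(v["id"] for v in vulns)
                print(f"{name}=={version}: {len(vulns)} known advisories: {ids}")

if __name__ == "__main__":
    audit_requirements()
```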

How can IT teams leverage the potential of this technology without creating new gaps? Best advice: Introduce AI slowly and in a controlled manner.

A good starting point for AI is business operations: all those processes and work steps where IT teams have to intervene manually and invest a lot of time and effort to complete the process successfully; an automated lean approach, if you like.

Once these potential areas of application have been identified, IT teams can clarify which generative AI options exist for them. These candidates should be screened in detail: what data do they access, which internal and external data sources do they contact, and with whom do they share what they learn? Certain AI modules are designed globally and automatically forward internal data to external servers. Some companies were caught out by this and learned painfully that their users were feeding internal data into these global learning machines. In April 2023, engineers at Samsung uploaded company secrets to ChatGPT and made them learning material for a global AI - the worst case from a compliance and intellectual property perspective.

Vendor-driven AI approaches are often confined to their own environment and disclose how they work. This allows IT teams to accurately assess the risk and rule out external data exchange. Such AI is self-contained and can be introduced in a controlled manner. IT teams can also be highly selective about which internal systems and data sources the AI modules actively examine, starting with a small cluster and rolling the AI out in a tightly controlled way.
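What that selectivity can look like in practice: a minimal, deny-by-default scope check in which an AI module may only read data sources it has been explicitly allowlisted for. All module and source names here are hypothetical; real products expose this through their own policy configuration.

```python
# Deny-by-default data-source policy for AI modules (illustrative names).
ALLOWED_SOURCES = {
    "ops-assistant": {"ticket-system", "runbook-wiki"},  # pilot cluster only
    "security-copilot": {"firewall-logs"},               # expand after review
}

def may_read(module: str, source: str) -> bool:
    """An AI module may only read sources it is explicitly allowlisted for."""
    return source in ALLOWED_SOURCES.get(module, set())

assert may_read("ops-assistant", "runbook-wiki")
assert not may_read("ops-assistant", "finance-db")  # not allowlisted -> denied
```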

Leave time to learn

When the modules are activated, their operation and suggestions should be monitored by a member of the IT team for an extended period. AI acts better and more accurately the longer it runs and the more data and patterns it has collected in the company's individual environment; in the early days it will therefore make more mistakes as it learns. Data is key here: the more, the better. Retrieval-augmented generation (RAG) is also worth considering, to supplement the model with the company's own data at query time.
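As a sketch of the RAG pattern under those constraints: internal documents stay in a local store, only the few retrieved snippets are placed into the prompt, and the generation step would be served by an internal model. The retriever below is a naive keyword ranker purely for illustration; production systems would use embeddings.

```python
# Minimal retrieval-augmented generation (RAG) skeleton. The retriever and
# the answer step are deliberately simple stand-ins: real deployments would
# score with embeddings and generate with an internal LLM.
DOCUMENTS = [
    "Patch window for finance servers is Sunday 02:00-04:00 UTC.",
    "Firewall change requests require CAB approval within 24 hours.",
    "Backup retention for production databases is 35 days.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank internal documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(DOCUMENTS, key=lambda d: -len(terms & set(d.lower().split())))
    return ranked[:k]

def answer(query: str) -> str:
    """Build a prompt from retrieved context; only these snippets leave the store."""
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return prompt  # in production: return internal_llm.generate(prompt)

print(answer("When is the patch window for finance servers?"))
```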

However, the modules can quickly suggest the right measures, especially for standard tasks. If, for example, IP addresses from suspicious geographic regions are to be blocked at the firewall as a matter of policy, the AI can suggest the right rules, and a member of the IT team only has to approve them. The AI only suggests; humans make the final decision.
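That approve-before-apply pattern can be as simple as a human gate in front of the rule queue. A minimal sketch follows; the rule structure and the final apply step are placeholders, since those details are vendor-specific.

```python
from dataclasses import dataclass

@dataclass
class SuggestedRule:
    action: str     # e.g. "block"
    cidr: str       # address range the AI wants to act on
    rationale: str  # why the AI proposed the rule

def review(rule: SuggestedRule) -> bool:
    """Human gate: nothing is applied without an explicit yes."""
    print(f"AI suggests: {rule.action} {rule.cidr} ({rule.rationale})")
    return input("Apply this rule? [y/N] ").strip().lower() == "y"

suggestion = SuggestedRule("block", "203.0.113.0/24",
                           "repeated scans from flagged region")
if review(suggestion):
    print("Rule queued for the firewall API.")  # apply step is vendor-specific
else:
    print("Rule discarded; decision logged for audit.")
```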

These clear-cut situations, in which users would otherwise create rules and intervene manually, can be found in many other areas of IT. Here the AI can reduce the workload very quickly, helping teams see the wood for the trees.

The smarter the AI gets, the more critical decisions it can prepare, and it can save teams massive amounts of time by pre-sorting the daily information overload in business operations. Today's environments generate petabytes of logging data every day, containing clues to all manner of security and performance questions. In most environments, 80 percent of this information is never evaluated at all because the volume is too great.

AI can process this data daily and make it accessible to natural-language queries. You can ask the data questions like "Show me all systems with this patch level and this vulnerability" without having to write a single script, as was previously necessary. Teams will be better informed and thus make better decisions.
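A sketch of what such a query layer reduces to underneath: translate the question into structured filters, then apply them to the inventory. Here the "translation" is a pair of regular expressions standing in for the language model, and the two-system inventory is made-up sample data.

```python
import re

# Toy inventory standing in for petabytes of telemetry (values illustrative).
SYSTEMS = [
    {"host": "web-01", "patch_level": "2024-03", "cves": ["CVE-2024-3094"]},
    {"host": "db-02",  "patch_level": "2023-11", "cves": []},
]

def query(question: str) -> list[dict]:
    """Rough stand-in for the LLM step: pull a patch level and a CVE id
    out of the question, then filter the inventory on both."""
    patch = re.search(r"patch level (\d{4}-\d{2})", question)
    cve = re.search(r"CVE-\d{4}-\d+", question)
    return [s for s in SYSTEMS
            if (not patch or s["patch_level"] == patch.group(1))
            and (not cve or cve.group() in s["cves"])]

print(query("Show me all systems with patch level 2024-03 and CVE-2024-3094"))
```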

Open the door to the outside

AI modules from the security segment, in particular, are often set up to update themselves externally - about new attack methods, vulnerabilities, patches and so on. Here you should take a close look at what information is going out and what data is coming back. No CISO will accept details about their own network structure, current vulnerabilities and patch levels being fed into an external AI engine.

It is worth analysing in a pilot test which data is actually sent out. Vendors with mature AI approaches can show their customers in detail what information is transmitted and whether there are ways for the customer to filter out certain elements from the outset. Examining every single transmission will most likely be impossible on a day-to-day basis because of the sheer volume of updates to the AI modules - unless someone develops an AI control authority, which in turn polices the input and output of the AIs from a security point of view.
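Until such a control authority exists, a simple outbound filter is one pragmatic safeguard. The sketch below masks IP addresses, CVE identifiers and internal hostnames before anything leaves the network; the patterns and the hostname scheme are illustrative assumptions, and a real deployment would need a reviewed, much longer list.

```python
import re

# Patterns a team might strip from outbound telemetry before it reaches an
# external AI service (illustrative; real deployments need a vetted list).
REDACTIONS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "[IP]"),   # internal addresses
    (re.compile(r"\bCVE-\d{4}-\d+\b"), "[CVE]"),            # open vulnerabilities
    (re.compile(r"\b[\w.-]+\.corp\.example\b"), "[HOST]"),  # internal hostnames
]

def redact(payload: str) -> str:
    """Mask sensitive details before any external transmission."""
    for pattern, label in REDACTIONS:
        payload = pattern.sub(label, payload)
    return payload

print(redact("host fw-01.corp.example at 10.20.30.40 still exposed to CVE-2023-4863"))
# -> "host [HOST] at [IP] still exposed to [CVE]"
```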

However, IT teams should embark on this journey, because the potential of self-contained AI in their own network is immense. Every task area will benefit, and synergies will arise automatically. Imagine the HR AI exchanging information with the security AI and the inventory AI: a remote-working user from the finance department is accessing research servers from their smartphone, and a correspondingly correlated warning message goes to the security team.
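As a sketch of that kind of cross-module correlation, assuming three hypothetical signal feeds (HR, inventory, security) that share a user identifier: an alert fires only when all three signals line up.

```python
# Toy correlation of signals from three otherwise separate AI modules
# (all module names, fields and values are hypothetical).
hr_signal = {"user": "jdoe", "department": "finance", "status": "remote"}
inventory_signal = {"user": "jdoe", "device": "smartphone", "managed": False}
security_signal = {"user": "jdoe", "target": "research-server-07"}

def correlate(hr: dict, inv: dict, sec: dict) -> str | None:
    """Raise a single alert only when all three signals line up."""
    if (hr["user"] == inv["user"] == sec["user"]
            and hr["department"] == "finance"
            and sec["target"].startswith("research")
            and not inv["managed"]):
        return (f"ALERT: {hr['user']} ({hr['department']}, {hr['status']}) "
                f"reached {sec['target']} from an unmanaged {inv['device']}")
    return None

print(correlate(hr_signal, inventory_signal, security_signal))
```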

In the world of Data Protection & Recovery, AI could take over the important but repetitive and boring work, escalating events to the IT and security teams when things become important, complicated and exciting. Additionally, language-model-based AI like ChatGPT lets users interact with the other underlying AI mechanisms in an easy way. AI can dramatically reduce the massive toll on IT and security teams by doing many of the important but tedious tasks itself, providing comprehensive reporting and clear, concise next steps, and giving a wood-from-the-trees perspective to operational groups that are undersized for the difficult jobs at hand. In this way, AI can make a massive contribution to increasing cyber resilience against attacks that, ironically, are increasingly being carried out by AI.
