Five tips for implementing AI without risking your data: Kyocera CISO

Andrew Smith, Kyocera’s CISO, has shared his top five tips to make sure any organisation can take advantage of AI while avoiding the security pitfalls.

The Gen AI bubble might not be growing as quickly as it was in 2023, but as adoption continues apace, organisations across the globe are still being caught out by outdated security protocols.

To combat the risks associated with AI and to help more organisations take advantage of it, Andrew Smith, CISO for Kyocera Document Solutions UK, has shared his top five tips for making sure any organisation can implement AI without putting its data security at risk:

Tip 1: Avoid using personal or proprietary information in Gen AI LLMs

How and where data is used by generative AI models is not common knowledge. End users often do not know how sensitive the data they are uploading is, and are more focused on the outcome the technology can generate. The important approach for business leaders is not to restrict AI use outright, which simply creates shadow use, but to educate users on how to use AI safely and to provide AI models that are safe to use in the business domain.
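
As a concrete illustration of that guardrail, the sketch below redacts obvious personal identifiers from a prompt before it leaves the business domain. It is a hypothetical example rather than Kyocera's own tooling: the patterns and the `redact` helper are assumptions for illustration, and a production deployment would rely on a proper DLP engine rather than a handful of regexes.

```python
import re

# Hypothetical redaction patterns; a real DLP engine goes far beyond these.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK_NI_NUMBER": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely personal data with placeholder tokens before the
    prompt is sent to any external generative AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarise this complaint from jane.doe@example.com, card 4111 1111 1111 1111."
    print(redact(raw))
    # -> Summarise this complaint from [EMAIL REDACTED], card [CARD_NUMBER REDACTED].
```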

Tip 2: Create a company policy on AI & Privacy

In my experience, the challenge colleagues face here is the lack of reference material and best practice to build from. The best source of reference is instead established best practice in data use, safety and privacy, adopted for the use of AI. That way the core concern, how data is used and generated, is protected and governed by the foundation of well-established data and privacy policies.
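
One pragmatic way to express that foundation is to extend an existing data classification scheme with AI-specific rules. The sketch below is a hypothetical illustration of this approach, not a reference policy: the classification labels, tool names and `may_use` helper are all assumptions.

```python
# Hypothetical mapping from an existing data classification scheme to AI rules.
# Labels and tool names are illustrative assumptions, not a reference policy.
AI_POLICY = {
    "public":       {"allowed_tools": ["public_llm", "private_llm"], "review": False},
    "internal":     {"allowed_tools": ["private_llm"],               "review": False},
    "confidential": {"allowed_tools": ["private_llm"],               "review": True},
    "restricted":   {"allowed_tools": [],                            "review": True},
}

def may_use(classification: str, tool: str) -> bool:
    """Return True if data of this classification may be sent to this tool."""
    rule = AI_POLICY.get(classification)
    return rule is not None and tool in rule["allowed_tools"]

assert may_use("internal", "private_llm")
assert not may_use("restricted", "public_llm")
```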

Tip 3: Manage data privacy settings

Data privacy settings are challenging in this space, with many different AI toolsets, most of them web based, being launched on a daily basis.

Our approach is to apply broader data privacy controls, and to define data boundaries and sources, so that the extraction of data is understood and controlled before it is uploaded to an insecure destination.
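
A minimal version of that boundary check might look like the sketch below, which gates uploads on an allowlist of approved AI endpoints. The endpoint names and the `check_destination` helper are hypothetical; in practice this control would normally live in a secure web gateway or CASB rather than in application code.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of approved AI endpoints.
APPROVED_AI_HOSTS = {"llm.internal.example.com", "private-ai.example.com"}

def check_destination(url: str) -> None:
    """Raise before any data leaves the boundary for an unapproved host."""
    host = urlparse(url).hostname
    if host not in APPROVED_AI_HOSTS:
        raise PermissionError(f"Upload blocked: {host} is not an approved AI endpoint")

check_destination("https://llm.internal.example.com/v1/chat")   # allowed
# check_destination("https://random-ai-tool.example.net/api")   # would raise
```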

As more private AI tools and models are released, IT gains the ability to control the use cases and capabilities of the toolsets, as well as to expand the outcomes and outputs of the technology. This is where we believe mainstream adoption may be achieved.

Tip 4: Regularly change passwords and use data access controls

It is important that companies have strong IT policies that guide and control how users use systems, and in particular the rules with which they must comply. Modern IT platforms and data loss prevention policies and controls allow IT to have greater influence on user behaviour, but end user education is always essential to ensure the best possible protection for corporate IT systems.
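
As a simple illustration of the kind of control such policies translate into, the sketch below checks password age against a rotation window and gates dataset access by role. The threshold, roles and helpers are hypothetical assumptions, not a description of any particular platform.

```python
from datetime import datetime, timedelta

MAX_PASSWORD_AGE = timedelta(days=90)   # hypothetical rotation window

# Hypothetical role-based mapping of who may read which dataset.
DATASET_ACL = {
    "customer_records": {"sales_manager", "dpo"},
    "hr_files": {"hr", "dpo"},
}

def password_expired(last_changed: datetime) -> bool:
    """True if the password is older than the rotation policy allows."""
    return datetime.utcnow() - last_changed > MAX_PASSWORD_AGE

def can_read(role: str, dataset: str) -> bool:
    """True if the role appears in the dataset's access control list."""
    return role in DATASET_ACL.get(dataset, set())

assert can_read("dpo", "hr_files")
assert not can_read("sales_manager", "hr_files")
```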

Tip 5: Audit AI interactions and monitor data breaches

The important element in auditing AI use, and any subsequent data breaches, is first to ensure there is strong guidance around permitted use cases, and to use working groups that understand how users want to develop business operations with AI.

Depending on the AI use case, and in particular with new private AI models, there are options for IT to have much greater control and insight.

For data breaches, it is essential to use IT controls alongside industry-leading cyber security toolsets to monitor for and spot potential data leaks or breaches.
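
The sketch below shows one hypothetical shape for such an audit trail: every AI interaction is recorded with who, when, and which model, plus a hash of the prompt, so that later investigations have something to work from without the log itself leaking data. The log format and `audit` wrapper are illustrative assumptions, not a specific toolset.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_audit")

def audit(user: str, model: str, prompt: str) -> None:
    """Record who sent what to which model; the prompt is stored only as a
    hash so the audit log cannot itself leak the underlying data."""
    log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_chars": len(prompt),
    }))

audit("a.smith", "private_llm", "Draft a reply to the attached customer complaint.")
```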
