Generative AI offers great potential, but we must approach it with caution

By Tony Stanford, Northern European R&D Technical Operations Leader at 3M.


Generative AI has caused contention in the world of big tech, largely dividing those who are in favour of the technology from those who are treating it with caution. This new type of artificial intelligence can produce many kinds of content, including text, imagery, audio, and synthetic data, by tapping into huge volumes of structured and unstructured data through large language models (LLMs). It represents a leap in AI capabilities, with the potential to transform how we use data. Whether you want to plan a trip, eat something healthy, write a story, or learn about coding, generative AI will change many areas of our lives, from how we search for information to how healthcare is delivered and how businesses operate.

However, with all the benefits of generative AI come some important concerns. For instance, there have been multiple reports of generative AI chatbots misrepresenting data and providing misleading information (so-called AI hallucinations). These issues could cause significant harm if the technology is not adequately regulated, as the consequences are not yet well understood.

To make the most of the exciting opportunities that AI can offer, we need to build ethical parameters that define the value and purpose of using data at a corporate or government level. Failing to consider these issues can result in the misuse of AI.

The dangers of not fully understanding generative AI

One of the biggest challenges with generative AI is that, as an emergent technology, its properties are not fully understood, and so the impact of wider AI use cannot be fully controlled. This led to an open letter from the Future of Life Institute earlier this year, which gathered over 26,000 signatures and called for a six-month pause on building more powerful models so that the ramifications of deploying such models could be examined and proper regulatory oversight put in place.

One of the fundamental issues with this technology, flagged by some of the world’s leading authorities on AI who signed the open letter, is the lack of ‘common sense’, or what AI researchers call ‘coherent mental models of everyday things’. This lack of real-life experience and understanding of the everyday world can cause significant flaws in the ‘reasoning’ of AI algorithms, resulting in AI malfunction or misuse for dangerous ends.

AI misuse has been well covered in recent months, and several different scenarios could result from it. For example, AI drug-discovery tools could be weaponised to design chemical weapons. AI-generated misinformation could destabilise society and “undermine collective decision-making”. There is also the risk that the power of AI becomes concentrated in fewer and fewer hands, enabling “regimes to enforce narrow values through pervasive surveillance and oppressive censorship”. And there is the additional risk of enfeeblement, where humans become dependent on AI. The Center for AI Safety has outlined these potential disaster scenarios in its guidance, and there is a growing realisation that generative AI is going to affect every industry and every job in some way. That is unsettling, mostly because little is understood about how or why these models perform the way they do. The European Parliament has formulated rules governing the safe and transparent use of generative AI as a first step towards regulating this application space, but more needs to be done by the tech industry to address the issue.

Emerging developments such as Constitutional AI have the potential to mitigate the harmful use of generative AI. This approach involves drawing up a ‘constitution’ of principles defining proper behaviour for a chatbot. If the chatbot generates a response that violates those constitutional principles, the response is revised until it is acceptable before being shared with the user. Start-ups like Anthropic are already pioneering this model and successfully using it to moderate AI behaviour.
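To make the mechanism concrete, here is a minimal sketch of that critique-and-revision loop in Python. The llm() helper and the two example principles are illustrative assumptions only; they do not represent Anthropic’s actual constitution or API.

```python
# A minimal sketch of the critique-and-revision loop described above.
# The llm() helper is a placeholder for any text-generation call, and the
# principles are illustrative; neither reflects Anthropic's real system.

PRINCIPLES = [
    "Do not provide instructions that could cause physical harm.",
    "Do not present unverified claims as established fact.",
]

def llm(prompt: str) -> str:
    """Placeholder for a call to a text-generation model."""
    raise NotImplementedError("Connect this to a real model endpoint.")

def constitutional_reply(user_prompt: str, max_revisions: int = 3) -> str:
    response = llm(user_prompt)
    for _ in range(max_revisions):
        # Ask the model to critique its own draft against the principles.
        critique = llm(
            "Does the response below violate any of these principles?\n"
            f"Principles: {PRINCIPLES}\nResponse: {response}\n"
            "Answer OK if none are violated; otherwise describe the violation."
        )
        if critique.strip().upper().startswith("OK"):
            return response  # acceptable: safe to share with the user
        # Revise the draft to remove the violation, then re-check it.
        response = llm(
            "Rewrite the response so it no longer violates the principles.\n"
            f"Violation: {critique}\nOriginal response: {response}"
        )
    # Fall back rather than ship a response that never passed the check.
    return "I'm unable to provide a response that meets my guidelines."
```

Notably, the critique and revision steps are themselves model calls, which is what allows behaviour to be moderated at scale without a human reviewing every response.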

Considering the wider applications of generative AI

This leads us to more complex questions that need to be tackled to address these challenges. For instance, how can those who develop and sell AI technology, including generative AI, ensure that both the creator and the customer can feel moral confidence in AI’s decisions? What are the best practices for embedding ethical decision-making within AI technology?

To answer these questions, we need to focus on levelling up responsible AI practices in every organisation so that a robust AI compliance framework is in place. This includes controls for assessing the potential risk of generative AI use cases at the design stage, and a means of embedding responsible AI approaches throughout the business. Organisations should recognise that their AI management principles must be led from the top and translated into an effective governance structure for risk management. This means being responsible by design, through a framework of principles and governance that addresses the ethical considerations that go hand in hand with the responsible use of AI.
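As a purely hypothetical illustration of what a design-stage control might look like, the sketch below scores a proposed generative AI use case against a few risk questions and returns the level of governance it should trigger. A real compliance framework would be far richer; this only shows the shape of a gate applied by design rather than after the fact.

```python
# An illustrative (hypothetical) design-stage control: each proposed
# generative AI use case is scored against a few risk questions before
# development proceeds. The questions and tiers are assumptions.

from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    handles_personal_data: bool
    output_reaches_customers_unreviewed: bool
    affects_safety_critical_decisions: bool

def risk_tier(uc: UseCase) -> str:
    score = sum([
        uc.handles_personal_data,
        uc.output_reaches_customers_unreviewed,
        2 * uc.affects_safety_critical_decisions,  # weight safety highest
    ])
    if score >= 2:
        return "high: requires ethics-board review before development"
    if score == 1:
        return "medium: requires documented mitigations"
    return "low: standard governance applies"

print(risk_tier(UseCase("clinical summarisation", True, False, True)))
# -> high: requires ethics-board review before development
```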

This means that businesses and governments will need to evolve the way they use the technology and consider more deeply the ethical implications of AI. Understanding what goes on behind the scenes of generative AI is particularly important for critical services and products such as medicines and healthcare treatments, policy enforcement, and security. Many of these organisations have ethics boards responsible for overseeing the ethical implications of new products, policies, and treatments. One may not really care whether the article one reads was created by generative AI, but one may care greatly why a certain medication was chosen for a treatment.

At 3M Health Information Systems, we have adopted an approach that puts guardrails in place to ensure the safe and effective use of generative AI. These guardrails include human review of content before it is presented to caregivers, and always having verifiable explanations for the content generated. Patient care, and providing correct information to support it, is of the utmost importance for us and our customers.
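The sketch below illustrates how those two guardrails might be expressed in code: content cannot reach a caregiver without a named human sign-off and cited sources. All names and structures here are assumptions for illustration, not 3M’s actual implementation.

```python
# A minimal sketch of the two guardrails described above: generated content
# is held until a named human reviews it, and it cannot be released without
# a verifiable explanation (here, cited sources). Purely illustrative.

from dataclasses import dataclass, field

@dataclass
class GeneratedContent:
    text: str
    sources: list[str] = field(default_factory=list)  # evidence behind the text
    reviewed_by: str | None = None                    # set only by a human reviewer

def approve(content: GeneratedContent, reviewer: str) -> None:
    """Record that a named human has reviewed and accepted the content."""
    content.reviewed_by = reviewer

def release_to_caregiver(content: GeneratedContent) -> str:
    # Guardrail 1: no release without human sign-off.
    if content.reviewed_by is None:
        raise PermissionError("Content must be reviewed by a human first.")
    # Guardrail 2: no release without a verifiable explanation.
    if not content.sources:
        raise ValueError("Content must cite verifiable sources.")
    return f"{content.text}\n\nSources: {', '.join(content.sources)}"

draft = GeneratedContent("Draft visit summary...", sources=["clinician note, 12 Jan"])
approve(draft, reviewer="clinical.documentation.specialist")
print(release_to_caregiver(draft))
```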

Putting ethical considerations at the heart of generative AI models

As generative AI becomes more ubiquitous, governance will play an increasing role in defining the ethical parameters of the technology and its broader implications. Governing bodies for the ethical use of data will have to understand the technology’s breadth, depth, and potential uses, not just now but also in the future.

This will enable organisations working with AI to protect individuals and businesses by creating an ethical model for AI use, access, and application that spans every sector and area of society. However, achieving this will require a balance between establishing strong AI policies and giving organisations enough flexibility to innovate and grow within the parameters of those policies.

Generative AI is developing incredibly quickly, making it even more important for companies using the technology to embed strong values, transparency, and integrity into its development and application. It’s also important for customers and other stakeholders to be aware of how generative AI and other types of AI use their data to drive decision-making, so educating the wider business community about the power of AI is an important step towards identifying and addressing ethical concerns. Every organisation or individual developing or using generative AI should therefore apply an integrity and values model to everything they do.
