What is Generative AI and is it the Way to AIOps?

By Lori MacVittie, F5 Distinguished Engineer.


Generative AI is an application of machine learning that can create a variety of content such as text, images, or audio from natural language prompts. It gained broad popularity with the introduction of ChatGPT—an OpenAI project—that resulted in an explosion of new uses across industries.

If you haven’t tried ChatGPT, I encourage you to take a moment and ask it a few questions. Ask it to tell you about, well, you or someone in history or explain how something works. While caution is advised—ChatGPT isn’t always right—it is an eye-opening experience because it is a new experience.

What ChatGPT has done is supply a proof-of-concept for Generative AI. It has given us a glimpse into the possibilities of how we might work differently and, for us in the F5 Office of the CTO, some interesting exploration into how it might be applied to app delivery and security.

From imperative to declarative to generative

One of the challenges in infrastructure is configuring the myriad devices, services, and systems needed to deliver and secure even a single application. Organizations rely on an average of 23 different app services—if you exclude ‘as a service’ offerings.

Now, I don’t have to tell you that configuring a web app and API protection service is different from configuring a plain old load balancing service. What that means is that the folks responsible for configuring and operating app services may need to be experts in a dozen different configuration languages.

The industry has been trying to address that for years. When APIs became the primary means of configuring everything, app delivery and security services were no exception. Everyone started with imperative APIs, which simply changed how you issued commands. Instead of typing commands on a CLI you sent API commands via HTTP. Fairly soon it became clear that the API tax incurred by relying on imperative APIs was too high, and the industry shifted to declarative APIs. But unfortunately, most of the industry decided declarative meant “configuration as JSON.” So instead of the intent (that word is important, so remember it) behind declarative, which is “tell me what you want to do, and I’ll do it for you,” we ended up with “here’s the configuration I want, go do the hard work of doing it.”
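To make that distinction concrete, here is a minimal sketch in Python against a purely hypothetical load balancer REST API; the endpoints, object names, addresses, and credentials are placeholders, not any particular product’s interface.

import requests

BASE = "https://lb.example.com/api"   # hypothetical management endpoint
AUTH = ("admin", "admin")             # placeholder credentials

# Imperative style: one call per command, issued in the right order by the operator.
requests.post(f"{BASE}/pools", json={"name": "app_a_pool"}, auth=AUTH, verify=False)
requests.post(f"{BASE}/pools/app_a_pool/members",
              json={"address": "10.1.1.10", "port": 80}, auth=AUTH, verify=False)
requests.post(f"{BASE}/virtuals",
              json={"name": "app_a_vs", "destination": "10.1.1.100:80",
                    "pool": "app_a_pool"},
              auth=AUTH, verify=False)

# "Declarative" as much of the industry shipped it: the same configuration knowledge,
# just expressed as one JSON document the system walks through on your behalf.
declaration = {
    "app_a": {
        "virtual": {"destination": "10.1.1.100:80"},
        "pool": {"members": [{"address": "10.1.1.10", "port": 80}]},
    }
}
requests.post(f"{BASE}/declare", json=declaration, auth=AUTH, verify=False)

Either way, you still have to know what a pool, a member, and a virtual server are and how this particular system spells them; the intent itself, “scale App A,” never appears anywhere.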

It's not quite the same, and it still requires the same level of expertise with the operating model specific to a given solution. I’m not sure the industry ever reached agreement on whether load balancers used “pools” or “farms,” let alone the more complex details of how virtual servers interact with real servers and application instances. So, all the industry did with declarative was to offload the command-level work from operators to the system.

Now, what Generative AI brings to the table is a form of low code/no code. Generated configurations and scripts tend to be more reliable than open-ended answers because they’re based on well-formed specifications that constrain the output. There are only so many ways you can write “hello world” after all, while there are millions of ways to answer a question.

Which means I should be able to tell a trained model, “Hey, I want to configure my load balancer to scale App A” and the system should be able to spit out a configuration. But more than that, I should be able to tell it, “Give me a script to do X on system Y using Z” and BAM! Not only should it generate the configuration, but the automation necessary to deploy it to the right system.

Oh look. It already does.
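What follows is a representative sketch of the kind of script such a prompt produces, here assuming the target is a BIG-IP reachable over its iControl REST API; the host, credentials, addresses, and object names are placeholders.

import requests

BIGIP = "https://192.0.2.10"     # placeholder management address (not valid)
AUTH = ("admin", "changeme")     # placeholder credentials (not valid)

session = requests.Session()
session.auth = AUTH
session.verify = False           # lab-only shortcut; use proper certificates in production

# Create a pool for App A with two members.
session.post(f"{BIGIP}/mgmt/tm/ltm/pool", json={
    "name": "app_a_pool",
    "monitor": "http",
    "members": [{"name": "10.1.1.10:80"}, {"name": "10.1.1.11:80"}],
})

# Create the virtual server that fronts the pool.
session.post(f"{BIGIP}/mgmt/tm/ltm/virtual", json={
    "name": "app_a_vs",
    "destination": "10.1.1.100:80",
    "ipProtocol": "tcp",
    "pool": "app_a_pool",
    "sourceAddressTranslation": {"type": "automap"},
})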

Certainly, this is not production ready code—neither the IP nor credentials are valid, and it picked Python (not my first, second, or third choice)—but it’s 90% of the way there based just on publicly available documentation and a remarkably simple prompt. The more detailed the prompt, the better the results.

A second pass with a more detailed prompt was still not ready to deploy, but it was much closer to being functional and took literally less than fifteen seconds to generate with no training from me.

Beyond generation to automation

But this is the easy stuff. I should further be able to tell it, “Oh, by the way, deploy it.” And the thing should do it while I’m enjoying my morning coffee. And perhaps sing me a little song, too.

But wait, there’s more! What if I can also tell a Generative AI system later, “Hey, users in Green Bay are logging in a lot and performance is down, clone App A and move it to our site in Milwaukee.”

And it does. Because under the hood, all of this is just a web of APIs, configurations, and commands that can be, and often are, automated by scripts today. Those scripts are often parameterized, and those parameters loosely correlate to the parameters in my AI prompt: Green Bay, Milwaukee, App A. So what changes is the generator, and the speed with which those scripts and configurations can be produced.
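In script form, that kind of parameterized automation might look like the following hypothetical sketch, where the site endpoints, credentials, and app name are all placeholders.

import requests

SITES = {                         # hypothetical per-site management endpoints
    "green-bay": "https://gb.mgmt.example.com/api",
    "milwaukee": "https://mke.mgmt.example.com/api",
}
AUTH = ("admin", "admin")         # placeholder credentials


def clone_app(app: str, source_site: str, target_site: str) -> None:
    """Copy an app's delivery configuration from one site and deploy it at another."""
    # Pull the app's current declaration (pools, virtual servers, policies) from the source.
    config = requests.get(f"{SITES[source_site]}/apps/{app}/declaration",
                          auth=AUTH, verify=False).json()
    # Push the same declaration to the target site and let the platform do the rest.
    requests.post(f"{SITES[target_site]}/apps/{app}/declaration",
                  json=config, auth=AUTH, verify=False)


# "Clone App A and move it to our site in Milwaukee" reduces to:
clone_app("app-a", source_site="green-bay", target_site="milwaukee")

The generative layer’s job is simply to get from the natural-language prompt to that call, with the right parameters filled in.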

I often say that AI and automation are force multipliers. Technology doesn't know what needs to be done; we do. But AI and automation can do it much faster and more efficiently, effectively amplifying productivity, shortening time to value, and freeing up experts’ time to focus on strategic decisions and projects. And over time, the AI can learn from us, further multiplying our capacity and exposing new possibilities.

This is no longer science fiction but computer science reality.

Generative AI will enable the AIOps we need

Many of today’s AIOps solutions focus solely on delivering the insights 98% of organizations are missing.

They answer yesterday’s problems, not tomorrow’s needs.

Even those AIOps platforms that can act more autonomously, such as security services, are highly dependent on pre-existing configurations and well-formed responses. They don’t typically use AI to enable operations to execute more autonomously across the heterogeneous app delivery and security layers. They use AI for data analysis and for uncovering insights we, as humans, don’t have the ability or time to uncover. But that’s where it often ends, at least for layers above the network and well-understood security problems.

That’s where Generative AI can take over, and why I’m all in on investigating just how far we can take this technology to make app delivery and security ridiculously easy.

Welcome to the tip of the AI iceberg.
