Vertex AI Agents

Google Cloud Platform’s Vertex AI offers a comprehensive suite of tools designed to simplify the process of building, deploying, and scaling machine learning models. One of the standout features of Vertex AI is its support for Agents, frameworks that enable seamless integration and automation within AI workflows. In this blog post, we’ll delve into the functionalities of Vertex AI, focusing on how Agents enhance the AI development lifecycle and why they are critical for businesses aiming to leverage AI technologies effectively. Google Next recently wrapped up and announced several enhancements to the offering: Agents are being integrated into the platform in preview, and this post shows how you can use them in your own use case by leveraging Gemini 1.5 Pro.

Understanding Agents

Agents are entities within the platform that facilitate automated tasks and integrations, allowing machine learning models to be dynamic and interactive. They act as intermediaries that can execute actions, gather data, or trigger sequential processes based on specific conditions or schedules.

At a high level, you can think of an agent as acting on behalf of the actions you’d like to accomplish, much like a handler. This will make more sense as we break down this demonstration of using Vertex AI Agents.

Getting Started

Since Vertex AI is housed in the Google Cloud Platform, I’m using a development project. You can follow along, but as a warning: delete your resources after you complete the walkthrough so you don’t incur ongoing costs.

Pre-requisites

  • Google Cloud Platform Account (Project)
  • Vertex AI (APIs enabled)
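If the APIs aren’t already enabled on your project, they can also be turned on from the command line. As a sketch (this assumes the `gcloud` CLI is installed and authenticated against your development project):

```shell
# Enable the Vertex AI API and the Discovery Engine API
# (which backs Vertex AI Agent Builder) on the active project.
gcloud services enable aiplatform.googleapis.com
gcloud services enable discoveryengine.googleapis.com
```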

If we search for the service Vertex AI, we should see the results shown in the image below; from there we navigate to the dashboard.

Now, if we want to use open-source models such as Llama or Mistral, we can navigate to the Model Garden. This is similar to the Catalog in Azure AI Studio, which allows you to use other LLMs-as-a-service.

Specifically, we want to use the Vertex AI Agent Builder. The illustration above shows that if you’d like to customize the underlying model, you can enable the API so your agent can leverage the other LLMs.

Navigate back to the search bar and search for Vertex AI Agent Builder; this will look like the image below.

Once you are on the page, it will list the following sections: Apps, Data Stores, Monitoring, and Settings.

I’ve created an Agent App for a course, but we will start with a creation from scratch. Select Create App (+).

This will bring up the wizard with the following items; we are selecting Agent.

When we think about what the Agent is, we have a few items to define: the Goal of the agent, essentially the high-level outcome we are trying to accomplish, and the Instructions, which describe the step-by-step behaviors the agent follows to achieve that Goal.
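As an illustration, a Goal and Instructions pair for a book-lookup agent might look like the following (the exact wording here is hypothetical, not copied from the console):

```
Goal:
You help users find books and summarize information about them.

Instructions:
- Greet the user and ask what book or author they are interested in.
- Use the Books API tool to search for matching titles.
- Summarize the top results, including author and publication year.
- If no results are found, ask the user to rephrase their query.
```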

Tools extend the agent’s capabilities, giving it access to external data stores of knowledge or to an API that loads data relevant to your goal, such as a search engine to assist with the user’s query or other known artifacts or data.

Now we save our inputs to update our Agent App, and then we can navigate to Tools. I’ve retrieved the OpenAPI spec from OpenLibrary.org as an example, but this can broadly be any external OpenAPI spec in YAML that you’d like the agent to reach out to.
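For reference, a trimmed-down spec for the OpenLibrary search endpoint might look like this. This is a minimal sketch covering only the `/search.json` endpoint and its `q` query parameter, not the full spec published by OpenLibrary.org:

```yaml
openapi: 3.0.0
info:
  title: Books API
  version: 1.0.0
servers:
  - url: https://openlibrary.org
paths:
  /search.json:
    get:
      summary: Search for books by free-text query
      operationId: searchBooks
      parameters:
        - name: q
          in: query
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Matching books for the query
```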

You can now save this as Books API and we can navigate back and start testing out our agent.

On our Agent App we now have the updated Books API that can be selected to add to our Agent.

Navigating Guard Rails

Like any application used in production, Generative AI applications need guard rails, such as banned terminology and checks on prompts, to ensure nothing inappropriate reaches the end user. While this specific service is in preview, it’s important to note that when we use Platform-as-a-Service offerings, we are responsible for how we push them out. If we navigate to Settings, we can see the areas used for guard rails on our agent.

We populate it with the following and can test this out further.
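Conceptually, a banned-phrase filter is a simple check applied to text on the way in and on the way out. The sketch below is an illustration of the idea only, not how the managed service implements it; the phrase list and function names are hypothetical:

```python
# Hypothetical banned-phrase list -- in the console this is configured in Settings.
BANNED_PHRASES = ["example banned phrase", "another blocked term"]

def violates_policy(text: str) -> bool:
    """Return True if the text contains any banned phrase (case-insensitive)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BANNED_PHRASES)

def guarded_response(user_input: str, respond) -> str:
    """Check the user input before calling the model, and the reply after."""
    fallback = "I'm sorry, I can't help with that request."
    if violates_policy(user_input):
        return fallback
    reply = respond(user_input)
    return fallback if violates_policy(reply) else reply
```

Note the filter runs in both directions, mirroring how the console lets you screen both the prompt and what is populated back to the end user.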

Now when we go to our Agent and start testing with user input, we begin with a longer, more specific prompt.

Now referring back to our banned phrases we can also test the use of a banned phrase and see what the response is.

Demonstrating the Use

To reset the conversation, we initialize another session. In this case we are going to expand our search with a defined time parameter and ask some follow-up questions about reviews of a book.

We can also look under the hood by accessing the Original Response to see what is abstracted away and what is occurring.

Additionally, referring back to the filters we set to ban specific phrases, when a user utters terms we deem irresponsible, the raw output will populate as shown.

Summary

Agents are going to be an additive and powerful offering on Vertex AI, and the ability to seamlessly supply examples so the virtual agent is aware of them ahead of time will make this tool easy for organizations to consider adopting. Like all Generative AI uses, however, you must model what can go wrong based on user inputs; most platforms are accordingly pushing out content filtering and management features, and depending on your customization you can make an application as locked down as you need. It’s important to keep the core concepts of Responsible AI in mind, to regularly review the phrases, content, and wording that are banned, and to test the application thoroughly. I will post more on using Generative AI Agents and other frameworks in the coming weeks, so stay tuned.