Promptflow is an open-source development tool created by Microsoft to streamline the end-to-end development lifecycle of LLM-based AI applications. When building agentic workflows, you’ll want a foundation that is modular, reusable, and provides visibility. To use this tool effectively, it’s important to understand its key concepts, which can be explored through the following areas.
- Flows (think of these as chained, structured workflows for your code to execute)
- Tools (extensions in the form of functions, pre-built tools, or custom tools)
- Tracing (lets you trace and debug your flows to pinpoint areas of concern)
- Connections (broadly, the API configuration for your LLM; the documentation currently lists Azure OpenAI and OpenAI as supported)
While there is much more to this tool, the purpose of this introduction is to show how you’d integrate it and to share some pointers on using the toolkit. You can use it either locally via the VS Code extension or through Azure AI Foundry, assuming you have a compute instance to run and test the prompt flow you’ve created.
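If you’re working locally, the toolkit and its built-in tools install with pip; the package names below are the ones the Promptflow quick start uses.
pip install promptflow promptflow-tools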
Architecture
At a high level, the code we will execute is defined in a file called flow.dag.yaml. This is where we define the process for our “flow”, with nodes representing the entry point and the steps that follow.
For each node we define a name for the task, a type representing what we want it to use (such as llm), and a source defining where the code lives, with a path referencing the file.
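As a rough sketch (the inputs, deployment name, and output wiring here are illustrative rather than the exact contents of the repository’s flow.dag.yaml), a flow with a single LLM node looks something like this:
inputs:
  chat_history:
    type: list
  question:
    type: string
outputs:
  answer:
    type: string
    reference: ${extract_query_from_question.output}  # in the full flow this points at the final chat node
nodes:
- name: extract_query_from_question
  type: llm
  source:
    type: code
    path: extract_query_from_question.jinja2
  inputs:
    deployment_name: <your-deployment-name>
    question: ${inputs.question}
    chat_history: ${inputs.chat_history}
  connection: open_ai_connection
  api: chat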
extract_query_from_question.jinja2
# system:
You are an AI assistant reading the transcript of a conversation between an AI and a human. Given an input question and the conversation history, infer the user's real intent.
The conversation history is provided in case it is needed for context (e.g. "What is this?" where "this" is defined in the previous conversation).
Return the output as the query to use for the next round's user message.
# user:
EXAMPLE
Conversation history:
Human: I want to find the latest research on Generative AI Security, could you help me with that?
AI: Sure, I can help you with that. Here are some key considerations in regard to Generative AI Security.
Human: What is OWASP Top 10 for LLM's? Could you provide me with some insights?
Output: Insights on the question's context pertaining to Generative AI Security
END OF EXAMPLE
EXAMPLE
Conversation history:
Human: Can you give me more information on the OWASP Top 10 for LLM's?
AI: Sure, some key points of the OWASP Top 10 for LLMs include Prompt Injection, Sensitive Information Disclosure, Insecure Plugin Design, and Runtime Protection and API Security along with some others.
Human: What is prompt injection?
AI: Prompt injection can involve crafty inputs that manipulate an LLM into performing unintended actions. This includes direct injections that overwrite system prompts and indirect ones that manipulate inputs from external sources.
Human: Show me more on Sensitive Information Disclosure
Output: LLM Security Techniques/Methodology
END OF EXAMPLE
Conversation history (for reference only):
{% for item in chat_history %}
Human: {{item.inputs.question}}
AI: {{item.outputs.answer}}
{% endfor %}
Human: {{question}}
Output:
In this Jinja2 template we tell our LLM the intent of Extract Query from Question, directing how it should respond, along with a few examples. We then direct it to the next item in our node; for brevity, the full flow can be found in this repository.
Our Exa.AI step requires an API key from Exa; below is how the flow.dag.yaml interprets this step.
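A sketch of what such a node could look like follows; the node name, tool filename, and input wiring are assumptions for illustration, so match them to your own Python tool:
- name: search_result_from_exa
  type: python
  source:
    type: code
    path: exa_search.py  # assumed Python tool that calls the Exa API
  inputs:
    query: ${extract_query_from_question.output}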
When we see the term DAG, it stands for Directed Acyclic Graph; in Promptflow, a flow is a DAG of function calls.
Creating our Connection
For the LLM to interact with our flow, we need to establish a connection using the Promptflow CLI; this is as simple as running the following.
First, we create a file representing our connection. This can be named openai.yml (as used in the command below), connection.yaml, or however you’d like to store it.
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureOpenAIConnection.schema.json
name: open_ai_connection
type: azure_open_ai
api_key: <test_key>
api_base: <test_base>
api_type: azure
api_version: <test_version>
Simply save this file locally and then run the following command to create the connection.
pf connection create -f openai.yml
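You can confirm the connection was registered by asking the CLI to show it back to you:
pf connection show --name open_ai_connection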
This connects our LLM via the API so it can be used and called; the only modification needed in our flow.dag.yaml is the deployment name of our LLM in Azure AI. If you’re using OpenAI, the connection can instead be represented by the following.
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/OpenAIConnection.schema.json
name: open_ai_connection
type: open_ai
api_key: "<user-input>"
organization: "" # optional
Once we have our environment variables set up for our LLM, we need to store our Exa AI API key. I did this locally; ideally you’d have a KMS pull it via a shell script in real time so it isn’t stored on disk, but for this demo I’ve created a .env file with the API key and noted the file in .gitignore.
In the repository linked above, use the .env.example as an example for your own prompt flow.
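As an illustration, the file holds a single entry along these lines (the variable name is an assumption; use whatever your Python tool reads):
EXA_API_KEY=<your-exa-api-key>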
Testing the flow
Once we’ve established our connections and our code is working, we can test our flow by running the following.
pf flow test --flow . --interactive --verbose
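If you’d rather skip interactive mode, pf flow test also accepts inputs directly on the command line; the question below is just an example:
pf flow test --flow . --inputs question="What is the OWASP Top 10 for LLMs?"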
We can also access the Promptflow service locally at 127.0.0.1:2334. You’ll also notice that running the flow creates some files in the directory; this is the logging for your flow. Additionally, if you want to turn off the telemetry that is collected, run the following.
pf config set telemetry.enabled=false
On this run, once prompted to enter a query, I asked the following question.
This goes from Exa -> Search Results -> Augmented Chat -> Bot Response
I’ve also ensured that the sources are passed back to the end user to show where this information was gathered from.
Reviewing the Promptflow Run
The localhost service mentioned earlier is accessible and shows you how the code executed step by step.
I can also break down each response using a Gantt chart; as shown, this appears in the top right, and once you toggle the switch it should open the chart.
Once you navigate back to collections, you’ll see any previous prompt flows you’ve run collected here.
This will also tell you the number of tokens used for the flow, so you can see which types of prompts consume more tokens, which helps with cost management.
Back in my IDE (Visual Studio Code), I can click on the flow.dag.yaml and open the Promptflow extension, which shows the visual editor and toolkit.
I’m usually a visual person, so once my code is complete this lets me visualize the process and make modifications if needed based on the flow.
Summary
Promptflow is a robust development toolkit for LLM-based applications, which I have been utilizing since its early release. It is extensively integrated into the UI/UX of Azure AI Foundry, underscoring its importance in Microsoft’s AI ecosystem. As the momentum around AI agents continues to grow, I anticipate that Microsoft will further invest in the Promptflow project, particularly through integrations with the AutoGen framework. If you are developing an LLM-based application that requires a consistent structure and built-in telemetry, Promptflow is a tool worth considering. If you found this post helpful, please share it with others who might benefit from understanding Promptflow.