Tag: AI Security
-
Automated Red Teaming Agent in Azure Foundry
Your organization is likely navigating methods and uses of Generative AI. Whether this means innovating on an existing application that is internal to operations or building an external web application, the use of this technology should be thoroughly evaluated prior to release. You’ve likely encountered the term “Prompt Injection”; however, you’re also aware of automation that…
-
PyRIT for LLM Security
Microsoft launched PyRIT (Python Risk Identification Tool) back in 2024. It serves as an open-source framework for identifying risk in Generative AI systems, using multiple attack methods for testing. Given the expansion of methods for jailbreaking systems, this allows for dynamic adaptation of attacks to quickly automate processes of…
-
Garak Red Teaming LLMs
As Generative AI plays a role in more and more organizations, so grows the popularity of tools for identifying risks and vulnerabilities. In this blog I’m exploring Garak, an LLM vulnerability scanner developed by NVIDIA and an OSS project to help strengthen LLM security. When the term “Red Team” appears in the approach of simulation…