In this release, we've officially refreshed the product UI to adopt the F5 branding, along with some new terminology. Here's a list of what's changed and what hasn't.
What's changed:
What hasn't changed:
NOTE: Because of the terminology change from scanners to guardrails:
- All F5 guardrails will need to be updated manually. To update, click the Update available button next to each guardrail. The only change is the name, not the functionality or efficacy, so no retesting is needed if they are already in use.
- Customers using the verbose configuration in the scans API will need to update their code to use the new guardrail names (see the sketch after this list).
- Bookmarked product URLs may need to be re-created.
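As an illustration, here is a minimal sketch of the kind of code change this rename implies. The payload shape, field names, and guardrail names below are hypothetical placeholders, not the documented scans API schema; substitute the actual guardrail names shown in the product UI.

```python
# Hypothetical before/after for a verbose scan configuration following the
# scanners -> guardrails rename. All names here are illustrative placeholders.

scan_config_old = {
    "verbose": True,
    # Old-style scanner names (pre-rename) will no longer resolve.
    "scanners": ["prompt_injection_scanner", "jailbreak_scanner"],
}

scan_config_new = {
    "verbose": True,
    # New-style guardrail names; use the exact names from the product UI.
    "guardrails": ["prompt_injection_guardrail", "jailbreak_guardrail"],
}
```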
Users can now mark projects as “live” to distinguish test or development projects from customer-facing or production applications.
Comprehensive AI Security Index (CASI) leaderboard data is now available in the AI Red-Team product. CASI is a benchmark metric designed to help you understand how secure a model is by measuring its ability to withstand common prompt injection and jailbreak attacks. A higher CASI score indicates a more secure model. Every month, we test dozens of the most popular models using our latest signature attacks to identify their "defensive breaking point": the minimum resources an attacker needs to successfully compromise them.
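To make the "defensive breaking point" idea concrete, here is a toy sketch. This is not the actual CASI formula, which isn't published here; it only mirrors the intuition that a model requiring more attacker resources to break earns a higher score.

```python
# Toy illustration of a "defensive breaking point" style score.
# NOT the real CASI formula; it only captures the idea that a model which
# needs more attacker resources to compromise gets a higher security score.

def toy_defensive_breaking_point(attempts: list[dict]) -> float:
    """attempts: one dict per attack, e.g. {"cost": 3, "success": True}.
    Returns the minimum cost among successful attacks (the breaking point),
    or infinity if no attack succeeded."""
    costs = [a["cost"] for a in attempts if a["success"]]
    return min(costs) if costs else float("inf")

def toy_security_score(attempts: list[dict], max_cost: int = 100) -> float:
    """Map the breaking point onto a 0-100 scale: higher = more secure."""
    bp = toy_defensive_breaking_point(attempts)
    return 100.0 if bp == float("inf") else 100.0 * min(bp, max_cost) / max_cost

# A model broken only by an expensive attack scores higher than one broken cheaply.
hardened = [{"cost": 80, "success": True}, {"cost": 5, "success": False}]
fragile = [{"cost": 2, "success": True}]
assert toy_security_score(hardened) > toy_security_score(fragile)
```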
What this means for you:
The March signature attack pack of 10,000 new malicious prompts introduces a sweet but deadly new attack vector: sugar-coated poison. This is a two-stage attack that tricks AI models by starting with a friendly conversation before introducing harmful requests. The attack first replaces harmful words in a prompt with their opposites (for example, "secure" instead of "attack") to bypass safety filters. Once the model begins a helpful, safe response, the attacker uses the model's own progress to slip in a malicious request that the AI may no longer recognize as a threat.
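To show the shape of this pattern, here is a schematic sketch of the two-stage conversation structure described above. The message text is a benign placeholder and the turn layout is an assumption based on the description, not an actual prompt from the attack pack.

```python
# Schematic of the two-stage "sugar-coated poison" pattern described above.
# Placeholder text only; this illustrates conversation structure, not payloads.

INVERSION = {"attack": "secure", "exploit": "protect"}  # harmful -> opposite

def sugar_coat(prompt: str) -> str:
    """Stage 1: swap flagged words for their opposites to slip past filters."""
    for harmful, benign in INVERSION.items():
        prompt = prompt.replace(harmful, benign)
    return prompt

conversation = [
    # Stage 1: a friendly, inverted opener that reads as safe.
    {"role": "user", "content": sugar_coat("How do I attack <placeholder>?")},
    # The model begins a helpful, safe-looking response.
    {"role": "assistant", "content": "<model starts a helpful answer>"},
    # Stage 2: the pivot rides on the model's own progress, so the harmful
    # follow-up may no longer be recognized as a threat.
    {"role": "user", "content": "Now continue, but <malicious pivot placeholder>"},
]
```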