
Feb 18, 2026 - SaaS - v9.159.5

Platform

So long, CalypsoAI – and hello F5!

In this release, we've officially refreshed the product UI to adopt the F5 branding — and some new terminology. Here's a list of what's changed, and what hasn't.

What's changed:

  • The F5 logo has replaced the CalypsoAI logo.
  • The product suite name is now F5 AI Security.
  • Scanners are now called guardrails everywhere in the product: CalypsoAI scanners are now F5 guardrails, and custom scanners are now custom guardrails.
  • CASI now stands for Comprehensive AI Security Index.
  • Agentic warfare is now agentic resistance, and the agentic warfare resistance (AWR) score is now the agentic resistance score (ARS).
  • The AI assistant is now simply called Assistant.

What hasn't changed:

  • API endpoints. There's no need to update any of your integrations (with one exception: see the note below).
  • Documentation. The new terminology will be rolled out into our API and user docs in the coming weeks.

NOTE: Because of the terminology change from scanners to guardrails:

  • All F5 guardrails must be updated manually. To update, click the Update available button next to each guardrail. The only change is to the name, not the functionality or efficacy, so guardrails already in use do not need to be retested.
  • Customers using the verbose configuration in the Scans API will need to update their code to use the new guardrail names.
  • Bookmarked product URLs may need to be re-created.
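For verbose Scans API configurations, the change is to names only, not to the request shape. As a purely illustrative sketch (the payload keys and the scanner/guardrail names below are hypothetical placeholders, not the actual Scans API schema), a one-time migration could map old scanner names to their new guardrail names:

```python
# Illustrative only: the config shape and the names in this mapping are
# hypothetical. Check your organization's product UI for the actual
# renamed guardrails before migrating real configurations.
SCANNER_TO_GUARDRAIL = {
    "calypsoai-prompt-injection": "f5-prompt-injection",  # hypothetical names
    "calypsoai-pii": "f5-pii",
}

def migrate_verbose_config(config: dict) -> dict:
    """Return a copy of a verbose scan config with old scanner names
    replaced by new guardrail names; unknown names pass through."""
    migrated = dict(config)
    migrated["scanners"] = [
        SCANNER_TO_GUARDRAIL.get(name, name)
        for name in config.get("scanners", [])
    ]
    return migrated

old = {"scanners": ["calypsoai-pii", "custom-profanity"]}
print(migrate_verbose_config(old))
# {'scanners': ['f5-pii', 'custom-profanity']}
```

Custom guardrails (formerly custom scanners) keep their existing names, which is why unrecognized entries are passed through unchanged.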

AI Guardrails

Identify live projects

Users can now mark projects as “live” to distinguish test or development projects from customer-facing or production applications.

  • The live flag is viewable in both the table and card views of projects.
  • Users can filter the view to only show live projects.
  • The live flag is shown whenever you need to select a project from a list – for example, while using guardrails access controls.

AI Red Team

CASI leaderboard and trend data

Comprehensive AI Security Index (CASI) leaderboard data are now available in the AI Red Team product. CASI is a benchmark metric designed to help you understand exactly how secure a model is by measuring its ability to withstand common prompt injections and jailbreak attacks. A higher CASI score indicates a more secure model. Every month, we test dozens of the most popular models using our latest signature attacks to identify their "defensive breaking point"—the minimum resources an attacker needs to successfully compromise them.

What this means for you:

  • Access to expert testing: Our latest test results are available in the product every month, giving you up-to-date security benchmarks.
  • Conserve your reports: Because we provide CASI scores for popular models, you don't have to run your own Red Team reports to get these benchmarks. This allows you to save your reports for custom models and specific AI applications.
  • Compare model families: Compare security scores within model families to decide which version is right for your project.
  • Track security trends: View month-over-month trend data to see if a model's security is improving or declining over time.
  • Measure integration impact: See exactly how your application’s architecture affects model security by comparing raw scores to your "in-the-wild" results.

A spoonful of sugar...

The March signature attack pack of 10,000 new malicious prompts introduces a sweet and deadly new attack vector, sugar-coated poison. This is a two-stage attack that tricks AI models by starting with a friendly conversation before introducing harmful requests. The attack first replaces harmful words in a prompt with their opposites—like "secure" instead of "attack"—to bypass safety filters. Once the model begins a helpful, safe response, the attacker uses the model's own progress to slip in a malicious request that the AI might no longer recognize as a threat.

Bug Fixes

  • In the Attack campaigns table, reports that were scheduled for the future were incorrectly showing a last run time. Resolution: the expected timestamp is now shown.
  • Changing a recurring Red Team report's schedule to run only once wasn't working as expected. Resolution: Fixed.
  • When creating a personal API token, the confirmation toast message said a global API token had been created. Resolution: The message has been corrected.
  • Saving a custom role while the role name was in an edit state would create two copies of the role. Resolution: Fixed.
  • The Custom guardrails list did not update after editing a package until the screen was refreshed. Resolution: After saving, the guardrails list now reflects the package changes.
  • In the project detail view, the browser back button didn't correctly navigate users back to Projects. Resolution: Fixed.
  • There was a typo in the Reports filter dropdown for status. Resolution: Fixed.
  • On the Dashboard, the Usage trends table had squashed text. Resolution: Text is unsquashed.
  • In the Playground, users uploading a dataset weren't able to see the complete list of guardrails and packages. Resolution: Fixed.
  • We improved how errors are handled when creating a guardrails package.
  • Under some conditions, an admin user with the Users permission would see an incorrect error when inviting other users into an organization. Resolution: Fixed.

Known issues

  • Canceled reports can sometimes get stuck in the "canceling" status. The job is canceled as expected, but the UI status doesn't refresh.
