
April '26 Release Notes

Product release notes for April 2026

New Features and Models

  • Composite App: Configure multiple UI app plugins into a single, unified experience for end-users. For example, combine Vertesia’s latest AI Assistant with one or more custom-built apps into a composite application. Control the layout, navigation, logo, theme, and more.
  • Agent Skills: From content authoring to fully orchestrating and managing parallel sub-agents, we’ve added dozens of skills your AI agents need to tackle the most complex tasks. We’ve added both built-in skills and optional skills administered through the Vertesia Tools & Skills app plugin.

  • New Vertesia Tools: We’ve added over 50 new tools for your agents to automate work from start to finish. The full list of tools is available in Studio as either built-in tools or as optional tools added by installing the Vertesia Tools & Skills app plugin.
  • Model Context Protocol (MCP) Support: Seamlessly connect Vertesia to your third-party applications, letting agents take action across your enterprise application ecosystem. Connect agents to Microsoft 365 (Outlook, Teams, OneDrive, Word, Excel, SharePoint), Google Workspace (Gmail, Drive, Docs, Sheets, Slides), GitHub, Atlassian, Notion, Miro, and many more via MCP.
  • In-Code Interactions and Content Types: App plugins can contribute their own interactions and content types programmatically, enabling richer integration and simplifying app deployment across one or multiple Vertesia organizations and projects.
  • Agentic Databases and Dashboards: Gaining insights from your data has never been easier. Vertesia's Database and Dashboard Assistants can analyze your data, create physical data models and load your data into them, and build dashboards for you from natural language instructions.
  • Cost & Usage Analytics: You can now track and analyze LLM usage costs across your organization when using Google Vertex AI or Amazon Bedrock. Group spending by Vertesia project, agent, environment, model, or time period.
  • Prompt Caching: We've added prompt caching support for Claude on Google Vertex AI and Amazon Bedrock. Claude prompt caching is auto-enabled in conversation workflows, reducing costs and latency on repeated or structured prompts.
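    Conceptually, prompt caching lets the provider reuse the processed form of a stable prompt prefix (such as a long system prompt) across calls, so only the changing suffix incurs full cost. The following is a toy illustration of prefix-keyed caching, not Vertesia's or Anthropic's actual implementation:

    ```python
    import hashlib

    class PrefixCache:
        """Toy sketch of prompt-prefix caching: the expensive processing
        of a stable prefix happens once and is reused on later calls."""

        def __init__(self):
            self._store = {}
            self.misses = 0

        def process_prefix(self, prefix: str) -> str:
            key = hashlib.sha256(prefix.encode()).hexdigest()
            if key not in self._store:
                self.misses += 1  # stand-in for the expensive model-side work
                self._store[key] = f"processed:{key[:8]}"
            return self._store[key]

    cache = PrefixCache()
    system_prompt = "You are a contract-review assistant..."
    for question in ("Summarize clause 4", "List termination terms"):
        cache.process_prefix(system_prompt)  # second call hits the cache

    print(cache.misses)  # → 1
    ```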

  • Elasticsearch Indexing and Search: Elasticsearch has been auto-enabled and indexed on active projects, improving content search performance and functionality. Admins can manually enable and index Elasticsearch on inactive projects from the settings.
  • Project Model Defaults: We've expanded the options for setting default models at the project level, from a single default to a more flexible, extensible map. This allows configuring different models for various categories, such as document intake and content type generation.
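    A per-category default map like the one described above can be resolved with a category lookup that falls back to a project-wide default. The category keys and model names below are illustrative, not Vertesia's actual configuration schema:

    ```python
    # Hypothetical per-category default model map; "default" is the fallback.
    project_model_defaults = {
        "default": "model-general",
        "document_intake": "model-fast",
        "content_type_generation": "model-light",
    }

    def resolve_model(category: str, defaults: dict) -> str:
        """Pick the category-specific model, else the project default."""
        return defaults.get(category, defaults["default"])

    print(resolve_model("document_intake", project_model_defaults))  # model-fast
    print(resolve_model("chat", project_model_defaults))             # model-general
    ```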

  • Bedrock API Key Authentication: Amazon Bedrock environments now support API key (bearer token) authentication as an alternative to Workload Identity Federation, simplifying environment setup when working with direct API keys.

  • Custom HTTP Headers for OpenAI-Compatible Environments: OpenAI-compatible model environments now support injecting custom HTTP headers per request, enabling compatibility with Apigee proxies, custom authentication middleware, and enterprise API gateways. 
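    Per-request header injection can be pictured as merging authentication headers with user-configured ones, where the custom headers win on conflict so a gateway can override defaults. The header names below are examples, not Vertesia configuration keys:

    ```python
    # Illustrative sketch of per-request header injection for an
    # OpenAI-compatible endpoint.
    BASE_HEADERS = {"Content-Type": "application/json"}

    def build_request_headers(api_key: str, custom: dict) -> dict:
        """Merge base, auth, and user-configured headers; custom headers
        win on conflict so proxies like Apigee can override defaults."""
        headers = dict(BASE_HEADERS)
        headers["Authorization"] = f"Bearer {api_key}"
        headers.update(custom)
        return headers

    headers = build_request_headers(
        "sk-example",
        {"x-apikey": "gateway-key", "X-Tenant-Id": "acme"},  # hypothetical gateway headers
    )
    print(headers["x-apikey"])  # gateway-key
    ```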
  • OpenAI Image Generation: OpenAI's gpt-image and DALL-E models are now supported for image generation, expanding Vertesia's image generation capabilities beyond Google and Amazon models.
  • DeepSeek R1: DeepSeek R1 is now available as a supported model.
  • Regional SaaS: Vertesia's multi-cloud SaaS is now operated in multiple regions around the world including the United States, Europe, and Japan, offering customers more options and meeting regional data residency requirements. 

Improvements

  • AI Assistant App: A new version of the AI Assistant app is available. The UI/UX has been improved to incorporate user feedback and support new features like inline charts and MCP, making the app more intuitive and improving the overall user experience.
  • Interaction UI/UX: The Studio navigation and pages have been reorganized and refined to provide a more intuitive and streamlined interface. Interactions are now split between agents, which run autonomously, and single calls to an LLM, each with its related runs, configuration, and analytics.
  • Agent Observability: The new observability provides a simplified view of agent runs, including the agent hierarchy, a filterable table of activities, model call details, and context window evolution, along with other analysis tools.
  • Agent Analytics: New analytics are available for agents, including dashboards for monitoring costs, token usage, latency, errors, and tool use. Analytics are available across all agents, by agent name, agent run, environment, model and more.
  • Clone Agent Conversation: Agent conversations can be cloned to start a new agent run, preserving the existing conversation and enabling users to explore new outcomes.

  • Continue Agent Conversation: Agent conversations that are no longer running can be continued, restarting the agent conversation from where it left off.
  • Monaco Code Folding: The Monaco code editor now supports heading-based code folding, letting you collapse sections under markdown headings for easier navigation of large prompts.

  • Project Default Model for In-Code Interactions: Project-level default model settings now apply to interactions defined in code, ensuring consistent model selection across all interaction types in your project.
  • Load Balancer Failover Configuration: Load balancer environments now support an explicit failover option. When enabled, failed requests retry the other models in weighted order until one succeeds or all models have been tried. When disabled, the environment functions as a pure load balancer with no fault tolerance, better suited for workflows that handle retries themselves.
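    The failover behavior can be sketched as a weighted draw over the remaining models, falling through to the next model on failure when failover is enabled. This is a conceptual sketch under assumed names (`send` stands in for the actual model invocation), not Vertesia's implementation:

    ```python
    import random

    def call_with_failover(models, weights, send, failover=True, rng=random):
        """Try models in weighted random order; on failure, fall through to
        the next model when failover is enabled, else raise immediately."""
        remaining, w = list(models), list(weights)
        while remaining:
            # Weighted draw for the next model to try.
            [model] = rng.choices(remaining, weights=w, k=1)
            i = remaining.index(model)
            remaining.pop(i)
            w.pop(i)
            try:
                return send(model)
            except RuntimeError:
                if not failover:
                    raise  # pure load balancing: no fault tolerance
        raise RuntimeError("all models failed")

    def send(model):
        """Hypothetical model call: model-a is down, model-b answers."""
        if model == "model-a":
            raise RuntimeError("model-a is down")
        return f"answer from {model}"

    print(call_with_failover(["model-a", "model-b"], [9, 1], send))
    # → "answer from model-b" (if model-a is drawn first, failover retries model-b)
    ```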

  • OpenRouter Support: The OpenAI Compatible provider now supports environments using OpenRouter, enabling access to LLMs across dozens of inference providers. 

Bug Fixes

  • Media Container Intake: A new media container intake workflow automatically detects whether a file contains video or audio streams before routing it to the correct intake pipeline. This resolves crashes that previously occurred with audio-only MP4 files.
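    The routing decision described above amounts to inspecting the container's streams before choosing a pipeline. A minimal sketch, assuming stream metadata shaped like ffprobe's `codec_type` field (the pipeline names are hypothetical):

    ```python
    def route_media(streams):
        """Route by stream content: any video stream goes to the video
        pipeline; audio-only containers (e.g. audio-only MP4) go to the
        audio pipeline instead of crashing the video intake."""
        kinds = {s["codec_type"] for s in streams}
        if "video" in kinds:
            return "video-intake"
        if "audio" in kinds:
            return "audio-intake"
        raise ValueError("no audio or video streams found")

    print(route_media([{"codec_type": "audio"}]))                          # audio-intake
    print(route_media([{"codec_type": "video"}, {"codec_type": "audio"}]))  # video-intake
    ```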

Deprecated

  • Onboarding Automation: The option to auto-configure a new project with sample interactions and environments has been removed. Instead, in-code interactions provide built-in agents and environment setup has been simplified using API Keys.