DevRel newsletter — February 2026


Hello from the Elastic DevRel team! In this newsletter, we cover version 9.3 of Elasticsearch and the Elastic Stack, the latest blogs and videos, and upcoming events, including the Agent Builder hackathon.

What’s new?

Several features that were introduced in earlier releases are now generally available in version 9.3 of Elasticsearch and the Elastic Stack, including:

  • Elastic Agent Builder, for building AI agents that reason over Elasticsearch data

  • Pattern-based log compression (pattern_text), reducing log storage costs

  • Entity AI Summary in Elastic Security

  • Expanded Elastic Inference Service (EIS) model availability, including Jina models

9.3 also introduces improvements in vector storage, GPU acceleration, and analytics performance.

Elastic Workflows to build automation within the Elastic Stack

Elastic Workflows is an automation engine built into Kibana. You define workflows in YAML — what triggers (starts) them, what steps they take, and what actions they perform — and the platform handles execution. A workflow can query Elasticsearch, transform data, branch based on conditions, call external APIs, and integrate with services like Slack, Jira, PagerDuty, and more through connectors you've already configured.


A workflow is composed of a few key parts: triggers, inputs, and steps.

  • Triggers determine when a workflow runs. A workflow can have multiple triggers.

    • An alert trigger runs when a Kibana alerting rule fires, with full access to the alert context.

    • A scheduled trigger runs on an interval or cron pattern. 

    • A manual trigger runs on demand from the UI or API. 

  • Inputs define parameters that can be passed to the workflow at runtime. These let you create reusable workflows that accept different values depending on how they're invoked.

  • Steps are the actions a workflow takes. They execute in sequence, and each step can reference outputs from previous steps. Step types include:

    • Internal actions for actions you perform inside Elasticsearch and Kibana, such as querying indices, running Elasticsearch Query Language (ES|QL) queries, creating cases, or updating alerts.

    • External actions for actions you perform on external systems, such as sending a Slack message or creating a Jira ticket. You can use any connector you've configured in Elastic, or reach any API or internal service directly with HTTP steps.

    • Flow control for defining the logic of your workflow with conditionals, loops, and parallel execution.

    • AI for everything from prompting a large language model (LLM) to enabling agents as workflow steps, unlocking agentic workflow use cases.
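
Putting those parts together, a workflow definition might look like the sketch below. The field names and step types here are illustrative assumptions, not the exact shipped schema — check the Workflows documentation for the precise YAML structure in your version:

```yaml
# Illustrative sketch of a Workflows YAML definition.
# Field names and step types are assumptions for illustration.
name: slow-query-alert
enabled: true

triggers:
  - type: alert          # run when a Kibana alerting rule fires
  - type: scheduled      # also run on an interval
    with:
      every: "1h"

inputs:
  - name: channel        # runtime parameter for reuse across teams
    type: string
    default: "#ops-alerts"

steps:
  - name: find-slow-queries        # internal action: query with ES|QL
    type: elasticsearch.esql
    with:
      query: |
        FROM logs-* | WHERE event.duration > 5000000 | LIMIT 20

  - name: notify-slack             # external action via a connector
    type: slack
    connector-id: my-slack-connector
    with:
      message: "Slow queries detected in {{ inputs.channel }}"
```

The second step references the configured Slack connector by ID, so the workflow itself stays free of credentials and endpoint details.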

bfloat16 support for dense vectors

Elasticsearch 9.3 adds support for storing dense vectors using bfloat16 instead of 32-bit floating point values. This roughly halves vector storage and lowers memory pressure, while preserving enough precision for many semantic search and retrieval augmented generation (RAG) workloads.

This is especially useful for:

  • High-dimensional embeddings (e.g., 768 or more dimensions)

  • Large vector collections

  • Deployments constrained by memory or disk footprint

Creating an index with bfloat16 vectors:

PUT my-vector-index
{
  "mappings": {
    "properties": {
      "embedding": {
        "type": "dense_vector",
        "dims": 768,
        "element_type": "bfloat16",
        "index": true
      }
    }
  }
}

Indexing documents does not change compared to float-based vectors:

POST my-vector-index/_doc
{
  "embedding": [0.01234567,-0.98123456,0.44319876, "..."]
}

kNN queries use the same syntax as before, with reduced precision handled transparently:

POST my-vector-index/_search
{
  "knn": {
    "field": "embedding",
    "query_vector": [0.02123456,-0.97345678,0.41765432,"..."],
    "k": 10,
    "num_candidates": 30
  }
}

bfloat16 vectors can be combined with disk-based vector indexing and on-disk rescoring to further reduce memory usage for large-scale semantic search workloads.
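
As a sketch of that combination — assuming the disk-based `bbq_disk` index option and the `rescore_vector` search option available for dense vectors; verify both names against the `dense_vector` documentation for your version — the mapping and query could look like:

```
PUT my-disk-vector-index
{
  "mappings": {
    "properties": {
      "embedding": {
        "type": "dense_vector",
        "dims": 768,
        "element_type": "bfloat16",
        "index": true,
        "index_options": {
          "type": "bbq_disk"
        }
      }
    }
  }
}

POST my-disk-vector-index/_search
{
  "knn": {
    "field": "embedding",
    "query_vector": [0.02123456,-0.97345678,0.41765432,"..."],
    "k": 10,
    "num_candidates": 100,
    "rescore_vector": { "oversample": 3.0 }
  }
}
```

Here `oversample` retrieves extra candidates and rescores them against the full-fidelity vectors on disk, trading a little query latency for recall.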

GPU acceleration for vector indexing and inference

Elastic 9.3 continues to expand GPU support for vector-heavy workloads.

For self-managed deployments, GPU-accelerated vector indexing (technical preview) allows indexing and maintenance tasks to run on NVIDIA GPUs powered by cuVS. Observed improvements include:

  • Up to 12x higher vector indexing throughput

  • Up to 7x faster force-merge operations

  • Significantly reduced CPU utilization during heavy ingestion

These improvements help reduce time-to-search when building or rebuilding large vector indices.

On the inference side, Elastic Inference Service (EIS) continues to use managed GPU infrastructure for embedding generation and reranking. This allows users to benefit from GPU acceleration without deploying or operating GPUs inside their own Elasticsearch clusters.
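
For illustration, generating embeddings through the inference API looks like the call below. The endpoint ID is a placeholder — substitute an EIS endpoint available in your deployment:

```
POST _inference/text_embedding/my-eis-embedding-endpoint
{
  "input": "How do I reduce vector storage costs?"
}
```

The response contains the embedding vectors, computed on Elastic-managed GPU infrastructure rather than on your cluster's nodes.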

ES|QL: Faster and more stable time-series analytics

In 9.3, improvements focus on metrics performance and dashboard stability, including:

  • Sliding-window aggregations, reducing jitter in time-series visualizations

  • Faster execution paths for metrics queries

  • Native support for exponential histograms

For example, smoothing a request rate over time can now be expressed directly in ES|QL:

TS metrics
| WHERE TRANGE(1h)
| STATS avg(rate(requests, 10m)) BY TBUCKET(1m), host

These changes make ES|QL better suited for always-on dashboards and operational analytics, not just exploratory queries.

Upcoming events

Join the Agent Builder hackathon to work with the community to build context-driven, multi-step AI agents. Compete for a share of $20,000 in prizes and get featured on our blog and social channels — deadline February 27.

Elastic{ON} Tour, the one-day Elastic conference series around the world, is back. Register and join us in:

  • São Paulo March 5, 2026

  • Sydney March 5, 2026

  • Tokyo March 10, 2026

  • Singapore March 17, 2026

  • Washington, D.C. March 19, 2026

Join your local Elastic User Group chapter for the latest news on upcoming events! You can also find us on Meetup.com. If you’re interested in presenting at a meetup, send an email to meetups@elastic.co.

The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.