VM configurations on GCP

As part of the Google Cloud infrastructure upgrade, we use standard and custom machine types unique to Google Cloud to provide the right blend of RAM, CPU, and disk. To make these configurations easy to identify, each VM type follows the nomenclature described below.

For example, consider the instance ID / SKU gcp.es.datahot.c4a.highcpu:

gcp.* Denotes the cloud provider, GCP in this case.
*.es.datahot.* Denotes that this configuration is an Elasticsearch (es) cluster component that serves as a data node for hot content. Other options include datawarm, datacold, and datafrozen for data nodes, and kibana, master, and so on for other components.
*.c4a.* Denotes that this configuration runs on the GCP C4a machine family.
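The segment-by-segment reading above can be sketched as a small parser. The segment labels below ("provider", "component", and so on) are our own illustrative names, not official Elastic terminology, and the four-segment case for application nodes such as Kibana is inferred from the names in the tables that follow.

```python
# Sketch: split an instance configuration ID into labeled segments.
# Segment names are illustrative labels, not official Elastic terms.

def parse_config_id(config_id: str) -> dict:
    parts = config_id.split(".")
    if len(parts) == 5:    # e.g. gcp.es.datahot.c4a.highcpu
        provider, product, component, family, variant = parts
    elif len(parts) == 4:  # e.g. gcp.kibana.c4a.highcpu (no component segment)
        provider, product, family, variant = parts
        component = None
    else:
        raise ValueError(f"unexpected config ID shape: {config_id}")
    return {
        "provider": provider,    # cloud provider, e.g. "gcp"
        "product": product,      # stack product, e.g. "es" or "kibana"
        "component": component,  # cluster role, e.g. "datahot", "master"
        "family": family,        # machine family, e.g. "c4a", "n2"
        "variant": variant,      # profile, e.g. "highcpu", "68x10x45"
    }

print(parse_config_id("gcp.es.datahot.c4a.highcpu")["component"])  # datahot
```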

The following table details the configurations for data nodes and compares them with prior naming conventions where applicable.

New config name Notes
gcp.es.datahot.n2.68x10x45 This configuration replaces “highio”, which was based on N1 with 1:30 RAM:disk and similar RAM:CPU ratios.
gcp.es.datahot.n2.68x10x95 This configuration is similar to the first, but with more disk space to allow for longer retention in ingest use cases, or larger catalog in search use cases.
gcp.es.datahot.n2.68x16x45 This configuration replaces “highcpu”, which was based on N1 with 1:8 RAM:disk and similar RAM:CPU ratios.
gcp.es.datahot.c4a.highcpu This is a new configuration powered by GCP custom Axion processors, which offers better price-performance than previous-generation instances.
gcp.es.datahot.n2.68x32x45 This configuration provides double the CPU cores compared to “gcp.es.datahot.n2.68x16x45” config.
gcp.es.datahot.n2d.64x8x11 This is a new configuration powered by AMD processors, which offers better price-performance than comparable Intel-based instances.
gcp.es.datawarm.n2.68x10x190, gcp.es.datacold.n2.68x10x190 These configurations replace “highstorage”, which was based on N1 with 1:160 RAM:disk and similar RAM:CPU ratios.
gcp.es.datafrozen.n2.68x10x90 This configuration is powered by Intel-based processors.
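The numeric variants in the table appear to encode the machine shape. The reading sketched below, RAM in GB x vCPUs x RAM:disk ratio, is an inference from the table notes (for example, "highstorage" at 1:160 RAM:disk maps to the x190 variants, and the x32 variant doubles the cores of the x16 variant), not an official definition.

```python
# Sketch: decode a numeric variant such as "68x10x45".
# ASSUMPTION: the three numbers are RAM (GB), vCPU count, and the
# 1:N RAM-to-disk ratio, inferred from the table notes above.

def decode_variant(variant: str) -> dict:
    ram_gb, vcpus, disk_ratio = (int(n) for n in variant.split("x"))
    return {
        "ram_gb": ram_gb,
        "vcpus": vcpus,
        "ram_to_disk": disk_ratio,       # 1:N RAM-to-disk ratio
        "disk_gb": ram_gb * disk_ratio,  # implied disk capacity
    }

spec = decode_variant("68x10x45")
print(spec["disk_gb"])  # 3060
```

Under this reading, gcp.es.datahot.n2.68x10x95 differs from gcp.es.datahot.n2.68x10x45 only in its larger implied disk, matching the note about longer retention.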

For a detailed price list, check the Elastic Cloud deployment pricing table. For a detailed specification of the new configurations, check Elasticsearch Service default GCP instance configurations.

In addition to data nodes for storage and search, Elasticsearch clusters also include machine learning nodes, master nodes, and coordinating nodes. These auxiliary node types, along with application nodes such as APM servers and Kibana instances, have also been upgraded to the latest instance types. Both auxiliary node and application node configurations are based on the Elasticsearch data node configuration types shown in the previous table.

New config name Notes
gcp.es.master.c4a.highcpu This is a new configuration that is similar to “gcp.es.datahot.c4a.highcpu” config.
gcp.es.master.n2.68x32x45 This configuration is similar to “gcp.es.datahot.n2.68x32x45” config.
gcp.es.ml.n2.68x32x45 This configuration is similar to “gcp.es.datahot.n2.68x32x45” config.
gcp.es.coordinating.n2.68x16x45 This configuration is similar to the “gcp.es.datahot.n2.68x16x45” config.
gcp.kibana.c4a.highcpu This is a new configuration that is similar to “gcp.es.datahot.c4a.highcpu” config.
gcp.kibana.n2.68x32x45 This configuration is similar to “gcp.es.datahot.n2.68x32x45” config.
gcp.apm.c4a.highcpu or gcp.integrationsserver.c4a.highcpu This is a new configuration that is similar to “gcp.es.datahot.c4a.highcpu” configuration.
gcp.apm.n2.68x32x45 or gcp.integrationsserver.n2.68x32x45 This configuration is similar to “gcp.es.datahot.n2.68x32x45” config.
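Since each auxiliary and application config in the table mirrors a data node config, the correspondence can be sketched as a mechanical segment swap. This rename is purely illustrative; the actual equivalence is the one stated in the table, not something guaranteed by the name alone.

```python
# Sketch: derive the datahot config a given auxiliary or application
# config mirrors, per the table above. ASSUMPTION: the mapping is a
# mechanical rename of the role segment, shown here for illustration.

def datahot_equivalent(config_id: str) -> str:
    parts = config_id.split(".")
    if parts[1] == "es":             # e.g. gcp.es.master.n2.68x32x45
        parts[2] = "datahot"
    else:                            # e.g. gcp.kibana.c4a.highcpu
        parts[1:2] = ["es", "datahot"]
    return ".".join(parts)

print(datahot_equivalent("gcp.es.master.c4a.highcpu"))
# gcp.es.datahot.c4a.highcpu
```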