Forward your Google Kubernetes Engine (GKE) logs directly to Cortex XDR using Elasticsearch Filebeat.
Notice
Ingesting logs and data requires a Cortex XDR Pro per GB license.
Instead of forwarding Google Kubernetes Engine (GKE) logs directly to Google Stackdriver, Cortex XDR can ingest container logs from GKE using Elasticsearch Filebeat. To receive logs, you must install Filebeat on your containers and enable the Data Collection settings for Filebeat.
After Cortex XDR begins receiving logs, the app automatically creates a Cortex Query Language (XQL) dataset using the vendor and product name that you specify during Filebeat setup; it is recommended to specify descriptive names. For example, if you specify google as the vendor and kubernetes as the product, the dataset name will be google_kubernetes_raw. If you leave the product and vendor blank, Cortex XDR assigns the dataset the name container_container_raw.
After Cortex XDR creates the dataset, you can search your GKE logs using XQL Search.
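For example, assuming you used google as the vendor and kubernetes as the product as described above, a simple XQL query against the new dataset might look like the following (the filter shown is illustrative):

```
dataset = google_kubernetes_raw
| filter _raw_log contains "error"
| limit 100
```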
Install Filebeat on your containers.
For more information, see https://www.elastic.co/guide/en/beats/filebeat/current/running-on-kubernetes.html.
Ingest Logs from Elasticsearch Filebeat.
Record your token key and API URL for the Filebeat Collector instance as you will need these later in this workflow.
Deploy Filebeat as a DaemonSet on Kubernetes.
This ensures there is a running instance of Filebeat on each node of the cluster.
Download the manifest file to a location where you can edit it.
curl -L -O https://raw.githubusercontent.com/elastic/beats/7.10/deploy/kubernetes/filebeat-kubernetes.yaml
Open the YAML file in your preferred text editor.
Remove the cloud.id and cloud.auth lines.
For the output.elasticsearch configuration, replace the hosts, username, and password settings with environment variable references for hosts and api_key, and add a field and value for compression_level and bulk_max_size.
In the DaemonSet configuration, locate the env configuration and replace ELASTIC_CLOUD_AUTH, ELASTIC_CLOUD_ID, ELASTICSEARCH_USERNAME, ELASTICSEARCH_PASSWORD, ELASTICSEARCH_HOST, and ELASTICSEARCH_PORT and their respective values with the following.
ELASTICSEARCH_ENDPOINT—Specify the API URL for your Cortex XDR tenant. You can copy the URL from the Filebeat Collector instance you set up for GKE in the Cortex XDR management console. The URL will include your tenant name (https://api-tenant external URL:443/logs/v1/filebeat).
ELASTICSEARCH_API_KEY—Specify the token key you recorded earlier during the configuration of your Filebeat Collector instance.
After you configure these settings, review the output.elasticsearch and env sections of your configuration before saving.
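A minimal sketch of how the edited sections might look. The compression_level and bulk_max_size values are illustrative, and the endpoint and token values are placeholders you must replace with the values recorded from your Filebeat Collector instance:

```yaml
# filebeat-kubernetes.yaml (excerpts) -- placeholder values shown
output.elasticsearch:
  hosts: ['${ELASTICSEARCH_ENDPOINT}']
  api_key: '${ELASTICSEARCH_API_KEY}'
  compression_level: 5
  bulk_max_size: 1000
---
# In the DaemonSet spec, under the filebeat container:
env:
  - name: ELASTICSEARCH_ENDPOINT
    value: "https://api-<tenant external URL>:443/logs/v1/filebeat"
  - name: ELASTICSEARCH_API_KEY
    value: "<your token key>"
```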
Save your changes.
If you use Red Hat OpenShift, you must also specify additional settings.
See https://www.elastic.co/guide/en/beats/filebeat/7.10/running-on-kubernetes.html.
Deploy Filebeat on your Kubernetes cluster.
kubectl create -f filebeat-kubernetes.yaml
This deploys Filebeat in the kube-system namespace. If you want to deploy the Filebeat configuration in another namespace, change the namespace values in the YAML file (in every YAML document inside this file) and add -n <your_namespace> to the kubectl command.
After you deploy your configuration, the Filebeat DaemonSet runs on every node of your cluster and forwards container logs to Cortex XDR. You can review the configuration from the Kubernetes Engine console.
Note
Cortex XDR supports logs in single line format or multiline format. For more information on handling messages that span multiple lines of text in Elasticsearch Filebeat, see Manage Multiline Messages.
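For reference, a common way to configure multiline handling in Filebeat is to join lines that do not start with a timestamp into the preceding event. The input type and paths below match the Kubernetes manifest defaults, and the pattern is an assumption you should adapt to your log format:

```yaml
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
    # Treat lines that do not start with a YYYY-MM-DD timestamp as a
    # continuation of the previous line (adjust the pattern to your logs)
    multiline.pattern: '^\d{4}-\d{2}-\d{2}'
    multiline.negate: true
    multiline.match: after
```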
After Cortex XDR begins receiving logs from GKE, you can use XQL Search to search for logs in the new dataset.