Ingest Logs from Google Kubernetes Engine - Administrator Guide - Cortex XDR - Cortex - Security Operations

Cortex XDR Pro Administrator Guide

Product
Cortex XDR
License
Pro
Creation date
2024-07-16
Last date published
2024-11-07
Category
Administrator Guide
Abstract

Forward your Google Kubernetes Engine (GKE) logs directly to Cortex XDR using Elasticsearch Filebeat.

Notice

Ingesting logs and data requires a Cortex XDR Pro per GB license.

Instead of forwarding your Google Kubernetes Engine (GKE) logs directly to Google Stackdriver, Cortex XDR can ingest container logs from GKE using Elasticsearch Filebeat. To receive logs, you must install Filebeat on your containers and enable the Data Collection settings for Filebeat.

After Cortex XDR begins receiving logs, the app automatically creates a Cortex Query Language (XQL) dataset using the vendor and product names that you specify during Filebeat setup. It is recommended that you specify descriptive names. For example, if you specify google as the vendor and kubernetes as the product, the dataset name is google_kubernetes_raw. If you leave the vendor and product blank, Cortex XDR assigns the dataset the name container_container_raw.

After Cortex XDR creates the dataset, you can search your GKE logs using XQL Search.
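For example, if you specified google as the vendor and kubernetes as the product, a minimal XQL query such as the following returns a sample of the ingested GKE records (the limit value is only illustrative):

  dataset = google_kubernetes_raw
  | limit 100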

  1. Install Filebeat on your containers.

    For more information, see https://www.elastic.co/guide/en/beats/filebeat/current/running-on-kubernetes.html.

  2. Ingest Logs from Elasticsearch Filebeat.

    Record your token key and API URL for the Filebeat Collector instance as you will need these later in this workflow.

  3. Deploy Filebeat as a DaemonSet on Kubernetes.

    This ensures there is a running instance of Filebeat on each node of the cluster.

    1. Download the manifest file to a location where you can edit it.

      curl -L -O https://raw.githubusercontent.com/elastic/beats/7.10/deploy/kubernetes/filebeat-kubernetes.yaml

    2. Open the YAML file in your preferred text editor.

    3. Remove the cloud.id and cloud.auth lines.

    4. In the output.elasticsearch configuration, replace the hosts, username, and password settings with environment variable references for hosts and api_key, and add fields and values for compression_level and bulk_max_size (see the sketch after these sub-steps).

    5. In the DaemonSet configuration, locate the env section and replace ELASTIC_CLOUD_AUTH, ELASTIC_CLOUD_ID, ELASTICSEARCH_USERNAME, ELASTICSEARCH_PASSWORD, ELASTICSEARCH_HOST, and ELASTICSEARCH_PORT, along with their respective values, with the following.

      • ELASTICSEARCH_ENDPOINT—Specify the API URL for your Cortex XDR tenant. You can copy the URL from the Filebeat Collector instance you set up for GKE in the Cortex XDR management console (Settings → Configurations → Data Collection → Custom Collectors → Copy API URL). The URL includes your tenant name, in the format https://api-<tenant external URL>:443/logs/v1/filebeat.

      • ELASTICSEARCH_API_KEY—Specify the token key you recorded earlier during the configuration of your Filebeat Collector instance.

      After you configure these settings, your env configuration should look similar to the sketch that follows these sub-steps.

    6. Save your changes.
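
    For reference, after these edits the modified sections of filebeat-kubernetes.yaml should look roughly like the following sketch. The compression_level and bulk_max_size values are illustrative placeholders (this guide does not mandate specific values), and the endpoint and token are the API URL and token key you recorded earlier:

      # In the filebeat.yml ConfigMap (output section):
      output.elasticsearch:
        hosts: ['${ELASTICSEARCH_ENDPOINT}']
        api_key: ${ELASTICSEARCH_API_KEY}
        compression_level: 5      # illustrative value
        bulk_max_size: 1000       # illustrative value

      # In the DaemonSet container spec (env section):
      env:
        - name: ELASTICSEARCH_ENDPOINT
          value: "https://api-<tenant external URL>:443/logs/v1/filebeat"
        - name: ELASTICSEARCH_API_KEY
          value: "<token key from your Filebeat Collector instance>"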

  4. If you use Red Hat OpenShift, you must also specify additional settings.

    See https://www.elastic.co/guide/en/beats/filebeat/7.10/running-on-kubernetes.html.

  5. Deploy Filebeat to your Kubernetes cluster.

    kubectl create -f filebeat-kubernetes.yaml

    This deploys Filebeat in the kube-system namespace. If you want to deploy the Filebeat configuration in a different namespace, change every namespace value in the YAML file and add -n <your_namespace> to the kubectl create command.
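
    For example, assuming you changed every namespace value in the manifest to a hypothetical namespace named logging:

      kubectl create -f filebeat-kubernetes.yaml -n logging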

    After you deploy your configuration, the Filebeat DaemonSet runs on each node of your cluster and forwards container logs to Cortex XDR. You can review the configuration from the Kubernetes Engine console: Workloads → Filebeat → YAML.
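
    You can also check the deployment from the command line. The following commands assume the default manifest, which names the DaemonSet filebeat, labels its pods k8s-app: filebeat, and deploys to kube-system; adjust the namespace if you changed it:

      kubectl get daemonset filebeat -n kube-system
      kubectl get pods -n kube-system -l k8s-app=filebeat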

    Note

    Cortex XDR supports logs in single line format or multiline format. For more information on handling messages that span multiple lines of text in Elasticsearch Filebeat, see Manage Multiline Messages.
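
    For example, Filebeat's multiline settings can join continuation lines (such as stack traces) into a single event before forwarding. The input type and pattern below are only an illustrative sketch for logs whose entries start with a bracketed timestamp; adapt them to your log format:

      filebeat.inputs:
        - type: container
          paths:
            - /var/log/containers/*.log
          multiline.pattern: '^\['
          multiline.negate: true
          multiline.match: after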