Monday, August 14, 2023

Transfer OCI Audit logs to SIEM tooling

 

Goal

We want to transfer the audit logs from Oracle Cloud Infrastructure (OCI) to the Logz.io SIEM tool.

OCI audits all CRUD operations on objects in the tenancy. All Create, Update and Delete actions (on any resource) will be captured and sent to the Logz.io tool. Read (GET) operations are ignored for now.

Unexpectedly, this proved to be harder than the documentation would have you believe. We did this with Logz.io as the SIEM target, but the setup can obviously be adapted to suit your needs.

Technical flow 

  1. Service Connector “Logfile_audit_archive_connector”
    • Searches the audit logs for any POST, PUT or DELETE action
    • Sends the resulting logfile to a bucket
  2. Bucket “logfile_download_bucket”
    • Receives the audit log files
    • Emits an event on creation of a new object
    • Deletes the logfiles after 3 days
  3. Events Service Rule “Process audit file from audit bucket”
    • Matches the “Object - Create” event for the bucket
    • Calls an OCI Function
  4. OCI Function “logzio-from-bucket”
    • Created in the application “logzio”
    • Custom function that retrieves the audit file from the bucket, transforms it into a usable format and sends it to Logz.io
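
The function at the end of this flow only reads a couple of fields from the emitted event. A minimal sketch of the relevant payload shape, written as a Python literal (the eventType value is our assumption of what Object Storage emits; the field names are taken from the function code below, everything else is omitted):

# Hypothetical, trimmed-down "Object - Create" event as received by the function
event = {
    "eventType": "com.oraclecloud.objectstorage.createobject",  # assumed value
    "data": {
        "resourceName": "audit_2023-08-14.log.gz",  # hypothetical object name
        "additionalDetails": {
            "namespace": "<tenancy-namespace>"  # Object Storage namespace
        }
    }
}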

Issues found

Several issues were encountered while designing and testing this concept. Some of the choices in this setup are workarounds for these issues.

  • A Service Connector directly from Event Logging to a Notification did not work, because the logging from that service did not contain the details of the user modifying the resource.
    • It seems that this service emits a “v1 JSON event”, whereas we need the “v2 JSON event” format (which includes the modifying user).
  • The Log Search Service Connector delivers the logfile to the bucket in gz format.
    • The pre-built function “Object Storage File Extractor” for unzipping bucket files only operates on zip files, not on gz files.
    • We had to write our own gunzip step in the function.
  • Logs are stored in minimized JSON format (without the brackets around the collection and without commas between records).
    • Records are not stored neatly one per line: a file spreads them over multiple lines, and sometimes the curly brackets closing one record and opening the next share a line. This is a problem.
    • Python's JSON parser does not understand this format, so the data has to be reshaped into something json.loads accepts (see the sketch after this list).
  • A simple OCI Notification to call Logz.io cannot be used, as the HTTPS subscription does not allow the required token to be passed on the URL.
  • Logz.io expects a minimized JSON format of its own (one record per line, no commas, no brackets around the collection).
    • This is only slightly different from what OCI produces, but converting between the two proved a challenge.
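
A minimal standalone sketch of the reshaping our function performs (the two-record input string is hypothetical; the full function appears further below):

import json

# Hypothetical sample: two minimized-JSON audit records, back to back on one line
raw = '{"id": 1, "action": "POST"}{"id": 2, "action": "DELETE"}'

# Wrap in a JSON array and restore the separators OCI leaves out
wrapped = '[' + raw.strip().replace('\n', '\n,').replace('}{', '},{') + ']'
records = json.loads(wrapped)

# Logz.io bulk format: one JSON object per line, no commas, no surrounding brackets
logzio_payload = '\n'.join(json.dumps(r) for r in records)
print(logzio_payload)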

Technical details – Bucket

To store the (temporary) audit log files, we create a bucket. This bucket can live in any compartment, but we recommend a compartment that contains generic or maintenance-related resources.

The bucket “logfile_download_bucket” is created as a private bucket in the Standard Storage tier. Important: enable “Emit Object Events” for this bucket.

Optional: Create a lifecycle rule to delete audit logs older than x days.
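
For repeatability, the same bucket can also be created with the OCI Python SDK. A minimal sketch, assuming a configured ~/.oci/config profile and a placeholder compartment OCID (the lifecycle rule mirrors the optional 3-day cleanup mentioned above):

import oci

config = oci.config.from_file()  # default ~/.oci/config profile
client = oci.object_storage.ObjectStorageClient(config)
namespace = client.get_namespace().data

# Private Standard-tier bucket with "Emit Object Events" enabled
client.create_bucket(namespace, oci.object_storage.models.CreateBucketDetails(
    name="logfile_download_bucket",
    compartment_id="<yourCompartmentOCID>",  # placeholder
    public_access_type="NoPublicAccess",
    storage_tier="Standard",
    object_events_enabled=True))

# Optional lifecycle rule: delete objects older than 3 days
client.put_object_lifecycle_policy(namespace, "logfile_download_bucket",
    oci.object_storage.models.PutObjectLifecyclePolicyDetails(items=[
        oci.object_storage.models.ObjectLifecycleRule(
            name="delete-old-audit-logs", action="DELETE",
            time_amount=3, time_unit="DAYS", is_enabled=True)]))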

Technical details – OCI Application

Create an application to hold your function(s):

  1. From the hamburger menu: Developer Services – Functions – Applications
  2. Select “Create application”
    • Name = “logzio”
    • VCN = “<yourVCN>”
    • Subnet = “<yourPrivateSubnet>”
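
The console steps above can also be scripted. A minimal sketch with the Functions SDK, assuming a configured ~/.oci/config profile and placeholder OCIDs:

import oci

config = oci.config.from_file()
fn_client = oci.functions.FunctionsManagementClient(config)

# Application "logzio" on your private subnet (placeholder OCIDs)
fn_client.create_application(oci.functions.models.CreateApplicationDetails(
    compartment_id="<yourCompartmentOCID>",
    display_name="logzio",
    subnet_ids=["<yourPrivateSubnetOCID>"]))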

Optional: create a separate user to administer the functions:

  • Create the user
    • From the hamburger menu: Identity & Security – Identity – Users
    • Select “Create User”
      • Select “IAM User”
      • Name = “logzio_user”
  • Edit the user after creation
    • Edit User Capabilities
      • Select only “auth token”
  • Select “Auth Tokens” from the menu on the left
    • Select “Generate Token”
    • Save the generated value for later (it will only be displayed once)
  • Create a group
    • From the hamburger menu: Identity & Security – Identity – Groups
    • Select “Create Group”
      • Name = “function_admins”
    • Select “Add User to Group”
      • Add the user “logzio_user” to the new group
  • Create a policy
    • From the hamburger menu: Identity & Security – Identity – Policies
    • Select “Create Policy”
      • Name = “Functions_Repo_Policy”
      • Compartment = “<root>”
      • Policy statements
        • Allow group function_admins to read objectstorage-namespaces in tenancy
        • Allow group function_admins to manage repos in tenancy
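
If you prefer to script this part as well, a sketch with the Identity SDK (the tenancy OCID placeholder is an assumption; group policies like this live in the root compartment):

import oci

config = oci.config.from_file()
identity = oci.identity.IdentityClient(config)

# Policy in the root (tenancy) compartment with the two statements above
identity.create_policy(oci.identity.models.CreatePolicyDetails(
    compartment_id="<tenancyOCID>",
    name="Functions_Repo_Policy",
    description="Allow function_admins to manage the function repositories",
    statements=[
        "Allow group function_admins to read objectstorage-namespaces in tenancy",
        "Allow group function_admins to manage repos in tenancy"]))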

Create the function

Your application “logzio” has an entry “Getting Started” in the menu on the left of the application page. Follow these steps with some modifications:

  1. Launch Cloud shell
  2. Use the context for your region
    • fn list context
    • fn use context eu-amsterdam-1
  3. Update the context with the function's compartment ID
    • fn update context oracle.compartment-id <your Compartment OCID>
  4. Provide a unique repository name prefix
    • fn update context registry <region-key>.ocir.io/<tenancy-namespace>/logzio
  5. Generate an auth token
    • Already done with separate user “logzio_user”
    • If you did not create a separate user in the previous steps, generate an Auth Token for your personal user
  6. Login to the registry
    • docker login -u '<tenancy-namespace>/logzio_user' <region-key>.ocir.io
    • Password = the auth token generated earlier
  7. Verify the setup
    • fn list apps
  8. Generate a 'hello-world' boilerplate function
    • fn init --runtime python logzio-from-bucket
  9. Switch into the generated directory
    • cd logzio-from-bucket

You can now modify the “func.py” and “requirements.txt” files to set up the actual function, or you can continue with the deployment of this template function to test it out.

  1. Deploy your function
    • fn -v deploy --app logzio
  2. Invoke your function
    • fn invoke logzio logzio-from-bucket
Under “Configuration” in the OCI console for this function, you can add key-value pairs. The function reads three of them (see “func.py” below):
  • input-bucket = logfile_download_bucket
  • logzio-url = <yourLogzioListenerHost>
  • logzio-token = <yourLogzioToken>

Technical details – OCI function

Modify the file “requirements.txt”:

fdk>=0.1.59
oci
requests

Modify the file “func.py” (disclaimer: this is PoC code, so use at your own peril 😉)

import gzip
import io
import json
import logging

import oci
import requests
from fdk import response


def handler(ctx, data: io.BytesIO = None):
    # Read the function configuration (set under "Configuration" in the console)
    try:
        cfg = ctx.Config()
        input_bucket = cfg["input-bucket"]
        logzio_url = cfg["logzio-url"]
        logzio_token = cfg["logzio-token"]
    except Exception as e:
        logging.getLogger().info('Error getting context details: ' + str(e))
        return response.Response(
            ctx, response_data=json.dumps(
                {"message": "Error getting context details: " + str(e)}),
            headers={"Content-Type": "application/json"}
        )

    # Extract the object name and namespace from the triggering event
    try:
        body = json.loads(data.getvalue())
        object_name = body["data"]["resourceName"]
        namespace = body["data"]["additionalDetails"]["namespace"]
    except Exception as e:
        return response.Response(
            ctx, response_data=json.dumps(
                {"message": "ERROR: During get event details: " + str(e)}),
            headers={"Content-Type": "application/json"}
        )

    # Use the function's resource principal to access Object Storage
    signer = oci.auth.signers.get_resource_principals_signer()
    client = oci.object_storage.ObjectStorageClient(config={}, signer=signer)

    # Fetch the audit file from the bucket and gunzip it in memory
    try:
        audit_data = client.get_object(namespace, input_bucket, object_name)
        audit_bytesio = io.BytesIO(audit_data.data.content)
        with gzip.GzipFile(fileobj=audit_bytesio, mode='rb') as z:
            audit_data_text = z.read()
    except Exception as e:
        logging.getLogger().info("ERROR: During load data: " + str(e))
        raise

    # Reshape OCI's minimized JSON into a parsable array, then into the
    # one-record-per-line bulk format Logz.io expects
    try:
        url_string = 'https://' + logzio_url + '/?token=' + logzio_token + '&type=http-bulk'
        # strip() avoids a trailing comma when the file ends with a newline
        data_string = '[' + audit_data_text.decode('utf-8').strip() + ']'
        data_string = data_string.replace('\n', '\n,')
        data_string = data_string.replace('}{', '},{')
        json_records = json.loads(data_string)
        logzio_string = ''
        for record in json_records:
            logzio_string += json.dumps(record) + '\n'
    except Exception as e:
        logging.getLogger().info("ERROR: During JSON formatting: " + str(e))
        raise

    # Ship the records to Logz.io over the bulk HTTP endpoint
    try:
        resp = requests.post(url_string, data=logzio_string)
        if resp.status_code != 200:
            logging.getLogger().info(resp.text)
            raise Exception("Unexpected HTTP status code received")
    except Exception as e:
        logging.getLogger().info("ERROR: During LogzIO HTTP call: " + str(e))
        raise

    return response.Response(
        ctx, response_data=json.dumps(
            {"message": "Success"}),
        headers={"Content-Type": "application/json"}
    )

Redeploy the application (fn -v deploy --app logzio).

Technical details – Service Connector

With the bucket in place, create a Service Connector to deliver the audit logfiles to the bucket.

  • From the hamburger menu: Observability & Management – Logging – Service Connectors
  • Select “Create Service Connector”
    • Connector Name = “Logfile_audit_archive_connector”
    • Resource Compartment = “<yourCompartment>”
    • Source = “Logging”
      • Compartment = “<root>”
      • Log Group = “_Audit”
        • Check “Include _Audit in subcompartments”
      • Query code editor = search "<tenancyOCID>/_Audit_Include_Subcompartment" | (data.request.action='POST' or data.request.action='PUT' or data.request.action='DELETE')
    • Target = “Object Storage”
      • Compartment = “<yourCompartment>”
      • Bucket = “logfile_download_bucket”
      • Object Name Prefix = “audit”
The Service Connector will create a policy for itself to access the right resources.
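
Scripting this is also possible. A rough sketch with the Service Connector Hub SDK follows; treat the model names and the task condition as assumptions against a current SDK version and verify them before use:

import oci

config = oci.config.from_file()
sch_client = oci.sch.ServiceConnectorClient(config)

sch_client.create_service_connector(oci.sch.models.CreateServiceConnectorDetails(
    display_name="Logfile_audit_archive_connector",
    compartment_id="<yourCompartmentOCID>",
    # Audit log group of the tenancy, including subcompartments
    source=oci.sch.models.LoggingSourceDetails(log_sources=[
        oci.sch.models.LogSource(
            compartment_id="<tenancyOCID>",
            log_group_id="_Audit_Include_Subcompartment")]),
    # Filter task corresponding to the query in the console steps above
    tasks=[oci.sch.models.LogRuleTaskDetails(
        condition="data.request.action='POST' or data.request.action='PUT' or data.request.action='DELETE'")],
    target=oci.sch.models.ObjectStorageTargetDetails(
        bucket_name="logfile_download_bucket",
        object_name_prefix="audit")))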

Technical details – Rule

The last step is to connect the bucket to the function using an Events Service Rule.

  • From the hamburger menu: Observability & Management – Events Service – Rules
  • Select “Create Rule”
    • Display Name = “Process audit file from audit bucket”
    • Rule 1
      • Condition = “Event Type”
      • Service Name = “Object Storage”
      • Event Type = “Object - Create”
    • Rule 2
      • Condition = “Attribute”
      • Attribute Name = “compartmentName”
      • Attribute Values = “<yourCompartment>”
    • Rule 3
      • Condition = “Attribute”
      • Attribute Name = “bucketName”
      • Attribute Values = “logfile_download_bucket”
    • Actions
      • Action Type = “Function”
      • Function Compartment = “<yourCompartment>”
      • Function Application = “logzio”
      • Function = “logzio-from-bucket”
After this step, audit logs should start flowing to your SIEM tooling.
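
For completeness, a hedged sketch of the same rule via the Events SDK (the condition JSON and the OCID placeholders are assumptions to adapt):

import json
import oci

config = oci.config.from_file()
events_client = oci.events.EventsClient(config)

# Match "Object - Create" events for our bucket; the condition is a JSON string
condition = json.dumps({
    "eventType": "com.oraclecloud.objectstorage.createobject",
    "data": {"additionalDetails": {"bucketName": "logfile_download_bucket"}}
})

events_client.create_rule(oci.events.models.CreateRuleDetails(
    display_name="Process audit file from audit bucket",
    description="Send new audit files in the bucket to the logzio-from-bucket function",
    is_enabled=True,
    condition=condition,
    compartment_id="<yourCompartmentOCID>",
    actions=oci.events.models.ActionDetailsList(actions=[
        oci.events.models.CreateFaaSActionDetails(
            is_enabled=True,
            function_id="<yourFunctionOCID>")])))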

Flow visualization