Introduction

Fluentd is an open-source data collector designed to build a unified logging layer. It aggregates data from diverse sources and routes it to multiple destinations, making the data easier to use and understand.

OpsRamp can ingest logs exported by Fluentd, which can be configured to gather logs from diverse sources and forward them to OpsRamp over HTTP.

Configuration for exporting logs to OpsRamp

Add the following configuration to your Fluentd configuration file to export logs to OpsRamp. Set the endpoint parameter to your OpsRamp endpoint URL:

## These fields are mandatory; they are used as resource attributes in OpsRamp
<filter **>
  @type record_transformer
  <record>
    source fluentd
    host "#{Socket.gethostname}"
  </record>
</filter>
  
## HTTP output with OpsRamp endpoint
<match **>
  @type http
  endpoint 
  open_timeout 2
  <format>
    @type json
  </format>
  json_array true
  add_newline false
  <buffer>
    flush_interval 10s
  </buffer>
</match>
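
In this configuration, the record_transformer filter adds the mandatory source and host fields to every record as resource attributes, and the http output posts the buffered records to the OpsRamp endpoint as a JSON array (json_array true) every 10 seconds (flush_interval 10s).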

Example 1: Fluentd configuration for exporting logs to OpsRamp

Below is a complete configuration example in which Fluentd tails the OpsRamp agent log, parses each line, and exports the resulting records to OpsRamp, identifying the source of the logs with the source and host fields:

## File input
## Reads the OpsRamp agent log continuously and tags records with opsramp.agent
<source>
  @type tail
  @id input_tail
  path /var/log/opsramp/agent.log
  tag opsramp.agent
  <parse>
    @type regexp
    expression (?<timestamp>\d*-\d*-\d*\s\d*:\d*:\d*)\s*\S(?<level>\w*)\S\s*\S\w*\s(?<pid>\d*)\S\s*\S\w*\s(?<tid>\d*)\S\s*\S\w\S\s*\S(?<location>[^ ]*)\S\s*(?<message>.*)$
  </parse>
</source>

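## Add the mandatory resource attributes (source, host)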
<filter opsramp.agent>
  @type record_transformer
  <record>
    source fluentd
    host "#{Socket.gethostname}"
  </record>
</filter>

## Http output
<match opsramp.agent>
  @type http
  endpoint 
  open_timeout 2
  <format>
    @type json
  </format>
  json_array true
  add_newline false
  <buffer>
    flush_interval 10s
  </buffer>
</match>
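
In this example, the regexp parser extracts timestamp, level, pid, tid, location, and message fields from each line of the agent log; the record_transformer filter then adds the source and host resource attributes before the records are posted to OpsRamp.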

Example 2: Kubernetes configuration

Below is a complete configuration example for Fluentd running in a Kubernetes cluster, exporting container logs to OpsRamp:

<source>
  @type tail
  @id in_tail_container_logs
  @label @KUBERNETES
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type multi_format
    <pattern>
      format json
      time_key time
      time_type string
      time_format "%Y-%m-%dT%H:%M:%S.%NZ"
      keep_time_key false
    </pattern>
    <pattern>
      format regexp
      expression /^(?<time>.+) (?<stream>stdout|stderr)( (.))? (?<message>.*)$/
      time_format '%Y-%m-%dT%H:%M:%S.%N%:z'
      keep_time_key false
    </pattern>
  </parse>
  emit_unmatched_lines true
</source>

# Expose Fluentd metrics in Prometheus format
<source>
  @type prometheus
  bind 0.0.0.0
  port 24231
  metrics_path /metrics
</source>

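# Divert Fluentd's own container logs to @FLUENT_LOG, enrich the remaining
# container logs with Kubernetes metadata, then hand off to @DISPATCH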
<label @KUBERNETES>
  <match kubernetes.var.log.containers.fluentd**>
    @type relabel
    @label @FLUENT_LOG
  </match>

  <filter kubernetes.**>
    @type kubernetes_metadata
    @id filter_kube_metadata
    skip_labels false
    skip_container_metadata false
    skip_namespace_metadata true
    skip_master_url true
  </filter>

  <match **>
    @type relabel
    @label @DISPATCH
  </match>
</label>

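# Count incoming records for the Prometheus metrics endpoint, then hand off to @OUTPUT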
<label @DISPATCH>
  <filter **>
    @type prometheus
    <metric>
      name fluentd_input_status_num_records_total
      type counter
      desc The total number of incoming records
      <labels>
        tag ${tag}
        hostname ${hostname}
      </labels>
    </metric>
  </filter>

  <match **>
    @type relabel
    @label @OUTPUT
  </match>
</label>

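# Add the mandatory resource attributes, lift the kubernetes metadata keys to the
# top level of the record, and send the records to OpsRamp over HTTP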
<label @OUTPUT>
  <filter **>
    @type record_transformer
    enable_ruby
    <record>
      source fluentd
      host "#{Socket.gethostname}"
      kubernetes ${record["kubernetes"].to_json}
    </record>
  </filter>

  <filter **>
    @type parser
    key_name kubernetes
    reserve_data true
    remove_key_name_field true
    <parse>
      @type json
    </parse>
  </filter>

  <match **>
    @type http
    endpoint 
    open_timeout 2
    <format>
      @type json
    </format>
    json_array true
    add_newline false
    <buffer>
      flush_interval 10s
    </buffer>
  </match>
</label>
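
A few notes on this example: the multi_format parser handles both Docker JSON log lines and CRI-formatted log lines; the kubernetes_metadata filter enriches each record with Kubernetes metadata such as pod name, namespace, and labels; and the prometheus input and filter expose a record counter at /metrics on port 24231. These parsers and filters are provided by separate Fluentd plugins rather than Fluentd core, so they must be installed in your Fluentd image. The @FLUENT_LOG label, which receives Fluentd's own container logs, is assumed to be defined elsewhere in your configuration, for example in the base image's default fluent.conf.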

Before exporting the log events, ensure that the following attributes or fields are set in the Fluentd configuration (a minimal sketch follows the lists below):

Resource attributes:

  • source
  • host
  • level (if not set, it is treated as “Unknown”)

Parsed labels:

  • message (mandatory)
  • timestamp (if not set, the time the record is received at OpsRamp is used as the log record time)
  • level (if not set, it is treated as “Unknown”)
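
For reference, the following is a minimal, illustrative pipeline that sets all of the fields listed above. It is a sketch only: the tag (app.log), the file path, and the log-line format are placeholders and must be adapted to your own log source. The source and host attributes are added by a record_transformer filter, as in the examples above, while timestamp, level, and message are extracted by the regexp parser.

## Illustrative sketch only: the path, tag, and expression below are placeholders
<source>
  @type tail
  path /var/log/myapp/app.log
  tag app.log
  <parse>
    @type regexp
    ## Matches lines such as: 2024-01-01 12:00:00 [INFO] application started
    expression /^(?<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) \[(?<level>\w+)\] (?<message>.*)$/
  </parse>
</source>

<filter app.log>
  @type record_transformer
  <record>
    source fluentd
    host "#{Socket.gethostname}"
  </record>
</filter>

The records can then be routed to OpsRamp with the http match block shown in the configuration section above.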

See Fluentd configuration for more details.