Comprehensive toolkit for generating best-practice Fluent Bit configurations. Use this skill when creating new Fluent Bit configs, implementing log collection pipelines (INPUT, FILTER, OUTPUT sections), or building production-ready telemetry configurations.
Install:

npx skill4agent add akin-ozer/cc-devops-skills fluentbit-generator

# REQUIRED: Run --help to check if your use case is supported
python3 scripts/generate_config.py --help

# Generate configuration for a supported use case
python3 scripts/generate_config.py --use-case kubernetes-elasticsearch --output fluent-bit.conf
python3 scripts/generate_config.py --use-case kubernetes-opentelemetry --cluster-name my-cluster --output fluent-bit.conf

Bundled examples (in examples/): kubernetes-elasticsearch.conf, kubernetes-loki.conf, kubernetes-opentelemetry.conf, application-multiline.conf, syslog-forward.conf, file-tail-s3.conf, http-input-kafka.conf, multi-destination.conf, prometheus-metrics.conf, lua-filtering.conf, stream-processor.conf, full-production.conf, parsers.conf

To look up plugin documentation, use mcp__context7__resolve-library-id with "fluent-bit" or "fluent/fluent-bit"
Then use mcp__context7__get-library-docs with:
- context7CompatibleLibraryID: /fluent/fluent-bit-docs (or /fluent/fluent-bit)
- topic: The plugin name and configuration (e.g., "elasticsearch output configuration")
- page: 1 (fetch additional pages if needed)

Web search query patterns:
"fluent-bit" "<plugin-type>" "<plugin-name>" "configuration" "parameters" site:docs.fluentbit.io
Examples:
"fluent-bit" "output" "elasticsearch" "configuration" site:docs.fluentbit.io
"fluent-bit" "filter" "kubernetes" "configuration" site:docs.fluentbit.io
"fluent-bit" "parser" "multiline" "configuration" site:docs.fluentbit.io

Base [SERVICE] section with recommended defaults:

[SERVICE]
# Flush interval in seconds - how often to flush data to outputs
# Lower values = lower latency, higher CPU usage
# Recommended: 1-5 seconds for most use cases
Flush 1
# Daemon mode - run as background process (Off in containers)
Daemon Off
# Log level: off, error, warn, info, debug, trace
# Recommended: info for production, debug for troubleshooting
Log_Level info
# Optional: Write Fluent Bit's own logs to file
# Log_File /var/log/fluent-bit.log
# Parser configuration file (if using custom parsers)
Parsers_File parsers.conf
# Enable built-in HTTP server for metrics and health checks
# Recommended for Kubernetes liveness/readiness probes
HTTP_Server On
HTTP_Listen 0.0.0.0
HTTP_Port 2020
# Enable storage metrics endpoint
storage.metrics on
# Number of worker threads (0 = auto-detect CPU cores)
# Increase for high-volume environments
# workers 0

Tail input for Kubernetes container logs:

[INPUT]
Name tail
Tag kube.*
Path /var/log/containers/*.log
# Exclude Fluent Bit's own logs to prevent loops
Exclude_Path /var/log/containers/*fluent-bit*.log
Parser docker
DB /var/log/flb_kube.db
Mem_Buf_Limit 50MB
Skip_Long_Lines On
Refresh_Interval 10
Read_from_Head Off

Key parameters: Path, Tag, Parser, DB, Mem_Buf_Limit, Skip_Long_Lines, Read_from_Head.

Systemd input (e.g., kubelet logs):

[INPUT]
Name systemd
Tag host.*
Systemd_Filter _SYSTEMD_UNIT=kubelet.service
Read_From_Tail On

HTTP input (receive logs over HTTP POST):

[INPUT]
Name http
Tag app.logs
Listen 0.0.0.0
Port 9880
Buffer_Size 32KB

Forward input (Fluentd/Fluent Bit forward protocol):

[INPUT]
Name forward
Tag forward.*
Listen 0.0.0.0
Port 24224

Important input parameters: Mem_Buf_Limit, DB, Tag, Exclude_Path, Skip_Long_Lines.

Kubernetes metadata enrichment filter:

[FILTER]
Name kubernetes
Match kube.*
Kube_URL https://kubernetes.default.svc:443
Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
Kube_Tag_Prefix kube.var.log.containers.
Merge_Log On
Keep_Log Off
K8S-Logging.Parser On
K8S-Logging.Exclude On
Labels On
Annotations Off
Buffer_Size 0

Key parameters: Merge_Log, Keep_Log, K8S-Logging.Parser, K8S-Logging.Exclude, Labels, Annotations.

Parser filter (parse a record field with a named parser):

[FILTER]
Name parser
Match *
Key_Name log
Parser json
Reserve_Data On
Preserve_Key Off

Grep filter (include/exclude records by regex):

[FILTER]
Name grep
Match *
# Include only error logs
Regex level (error|fatal|critical)
# Exclude health check logs
Exclude path /health

Modify filter (add/remove fields):

[FILTER]
Name modify
Match *
Add cluster_name production
Add environment prod
Remove _p

Nest filter (lift nested keys to the top level):

[FILTER]
Name nest
Match *
Operation lift
Nested_under kubernetes
Add_prefix k8s_

Multiline filter (join stack traces into single records):

[FILTER]
Name multiline
Match *
multiline.key_content log
multiline.parser java, python, go

Throttle filter (rate-limit records):

[FILTER]
Name throttle
Match *
Rate 1000
Window 5
Interval 1m

Lua filter (custom record processing):

[FILTER]
Name lua
Match *
script /fluent-bit/scripts/filter.lua
call process_record

Example /fluent-bit/scripts/filter.lua:

function process_record(tag, timestamp, record)
-- Add custom field
record["custom_field"] = "custom_value"
-- Transform existing field
if record["level"] then
record["severity"] = string.upper(record["level"])
end
-- Filter out specific records (return -1 to drop)
if record["message"] and string.match(record["message"], "DEBUG") then
return -1, timestamp, record
end
-- Return modified record
return 1, timestamp, record
end

Elasticsearch output:

[OUTPUT]
Name es
Match *
Host elasticsearch.default.svc
Port 9200
# Index pattern with date
Logstash_Format On
Logstash_Prefix fluent-bit
Retry_Limit 3
# Buffer configuration
storage.total_limit_size 5M
# TLS configuration
tls On
tls.verify Off
# Authentication
HTTP_User ${ES_USER}
HTTP_Passwd ${ES_PASSWORD}
# Performance tuning
Buffer_Size False
Type _doc

Loki output:

[OUTPUT]
Name loki
Match *
Host loki.default.svc
Port 3100
# Label extraction from metadata
labels job=fluent-bit, namespace=$kubernetes['namespace_name'], pod=$kubernetes['pod_name'], container=$kubernetes['container_name']
label_keys $stream
# Remove Kubernetes metadata to reduce payload size
remove_keys kubernetes,stream
# Auto Kubernetes labels
auto_kubernetes_labels on
# Line format
line_format json
# Retry configuration
Retry_Limit 3

S3 output:

[OUTPUT]
Name s3
Match *
bucket my-logs-bucket
region us-east-1
total_file_size 100M
upload_timeout 10m
use_put_object Off
# Compression
compression gzip
# Path structure with time formatting
s3_key_format /fluent-bit-logs/%Y/%m/%d/$TAG[0]/%H-%M-%S-$UUID.gz
# Authentication: IAM role (recommended), or AWS credentials
# loaded from the environment
Retry_Limit 3

Kafka output:

[OUTPUT]
Name kafka
Match *
Brokers kafka-broker-1:9092,kafka-broker-2:9092
Topics logs
# Message format
Format json
# Timestamp key
Timestamp_Key @timestamp
# Retry configuration
Retry_Limit 3
# Queue configuration
rdkafka.queue.buffering.max.messages 100000
rdkafka.request.required.acks 1

CloudWatch Logs output:

[OUTPUT]
Name cloudwatch_logs
Match *
region us-east-1
log_group_name /aws/fluent-bit/logs
log_stream_prefix from-fluent-bit-
auto_create_group On
Retry_Limit 3

OpenTelemetry (OTLP/HTTP) output:

[OUTPUT]
Name opentelemetry
Match *
Host opentelemetry-collector.observability.svc
Port 4318
# Use HTTP protocol for OTLP
logs_uri /v1/logs
# Add resource attributes
add_label cluster my-cluster
add_label environment production
# TLS configuration
tls On
tls.verify Off
# Retry configuration
Retry_Limit 3

Prometheus remote write output (for metrics pipelines):

[OUTPUT]
Name prometheus_remote_write
Match *
Host prometheus.monitoring.svc
Port 9090
Uri /api/v1/write
# Add labels to all metrics
add_label cluster my-cluster
add_label environment production
# TLS configuration
tls On
tls.verify Off
# Retry configuration
Retry_Limit 3
# Compression
compression snappy

Generic HTTP output:

[OUTPUT]
Name http
Match *
Host logs.example.com
Port 443
URI /api/logs
Format json
# TLS
tls On
tls.verify On
# Authentication
Header Authorization Bearer ${API_TOKEN}
# Compression
Compress gzip
# Retry configuration
Retry_Limit 3

Stdout output (debugging):

[OUTPUT]
Name stdout
Match *
Format json_lines

Common options across all outputs: Name, Match, Retry_Limit, storage.total_limit_size.

Buffering options:

# Memory buffering (default)
storage.type memory
# Filesystem buffering (for high reliability)
storage.type filesystem
storage.path /var/log/fluent-bit-buffer/
storage.total_limit_size 10G
# Retry configuration
Retry_Limit 5

TLS options (per input/output):

tls On
tls.verify On
tls.ca_file /path/to/ca.crt
tls.crt_file /path/to/client.crt
tls.key_file /path/to/client.key

In every output, set an explicit Retry_Limit, pull secrets in via ${ENV_VAR} substitution, and cap buffering with storage.total_limit_size.
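Fluent Bit substitutes ${VAR} placeholders from the process environment when it loads a configuration file. A minimal Python sketch of that behavior (the expand_env helper and the ES_PASSWORD value are illustrative, not part of Fluent Bit):

```python
import os
import re

def expand_env(line: str) -> str:
    """Replace ${VAR} placeholders with environment values, mimicking
    Fluent Bit's variable substitution at config load time."""
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), line)

os.environ["ES_PASSWORD"] = "s3cr3t"  # normally injected by the runtime
print(expand_env("HTTP_Passwd ${ES_PASSWORD}"))  # -> HTTP_Passwd s3cr3t
```

This is why secrets should never be written literally into the file: the config can stay checked into version control while values arrive from the environment.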
Read examples/parsers.conf to see the available parsers: docker, json, cri, syslog-rfc3164, syslog-rfc5424, nginx, apache, apache_error, mongodb, multiline-java, multiline-python, multiline-go, multiline-ruby.

Custom parser example (parsers.conf - add custom parsers alongside the existing ones):
[PARSER]
Name custom-app
Format regex
Regex ^(?<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) \[(?<level>\w+)\] (?<message>.*)$
Time_Key timestamp
Time_Format %Y-%m-%d %H:%M:%S

Follow the examples/parsers.conf conventions: Time_Key and Time_Format control timestamp extraction, and MULTILINE_PARSER sections define multiline rules.

Generated file layout:

fluent-bit.conf   # Main configuration file
parsers.conf      # Custom parser definitions (optional)

Complete worked example:

# fluent-bit.conf
[SERVICE]
Flush 1
Daemon Off
Log_Level info
Parsers_File parsers.conf
HTTP_Server On
HTTP_Listen 0.0.0.0
HTTP_Port 2020
storage.metrics on
[INPUT]
Name tail
Tag kube.*
Path /var/log/containers/*.log
Exclude_Path /var/log/containers/*fluent-bit*.log
Parser docker
DB /var/log/flb_kube.db
Mem_Buf_Limit 50MB
Skip_Long_Lines On
Refresh_Interval 10
[FILTER]
Name kubernetes
Match kube.*
Kube_URL https://kubernetes.default.svc:443
Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
Kube_Tag_Prefix kube.var.log.containers.
Merge_Log On
Keep_Log Off
K8S-Logging.Parser On
K8S-Logging.Exclude On
Labels On
Annotations Off
[FILTER]
Name modify
Match *
Add cluster_name my-cluster
Add environment production
[FILTER]
Name nest
Match *
Operation lift
Nested_under kubernetes
[OUTPUT]
Name es
Match *
Host elasticsearch.logging.svc
Port 9200
Logstash_Format On
Logstash_Prefix k8s
Retry_Limit 3
storage.total_limit_size 5M
tls On
tls.verify Off

Best-practice checklist:
- Set Mem_Buf_Limit (e.g., 50MB) on every input, and use Exclude_Path to avoid log loops
- Prefer storage.type filesystem with storage.total_limit_size for durability
- Keep Flush at 1-5 seconds and Retry_Limit at 3-5
- Enable HTTP_Server and storage.metrics on for health checks and monitoring
- Use a DB file with tail inputs so offsets survive restarts
- Use tls On with tls.verify On for external endpoints; tls.verify Off only for internal clusters with self-signed certs
- Reference secrets via ${VAR_NAME}, never plaintext

Invoke the devops-skills:fluentbit-validator skill to validate the config:
1. Syntax validation (section format, key-value pairs)
2. Required field checks
3. Plugin parameter validation
4. Tag consistency checks
5. Parser reference validation
6. Security checks (plaintext passwords)
7. Best practice recommendations
8. Dry-run testing (if fluent-bit binary available)
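As a rough illustration of the first two checks (section format and key-value pairs), they amount to something like the following simplified sketch; this is not the validator skill itself:

```python
import re

def check_syntax(config: str) -> list[str]:
    """Naive structural check: every non-comment line must be either a
    [SECTION] header or a 'Key Value' pair under some section."""
    errors = []
    section = None
    for n, raw in enumerate(config.splitlines(), 1):
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if re.fullmatch(r"\[[A-Z_]+\]", line):
            section = line  # new section header
        elif section is None:
            errors.append(f"line {n}: entry before any [SECTION] header")
        elif len(line.split(None, 1)) < 2:
            errors.append(f"line {n}: expected 'Key Value' pair")
    return errors

print(check_syntax("[SERVICE]\nFlush 1\nDaemon Off"))  # -> []
print(check_syntax("Flush 1"))  # -> one error
```

The real validator goes further (plugin parameters, tag consistency, parser references), but even this level of check catches the most common copy-paste mistakes.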
Follow the devops-skills:fluentbit-validator workflow to identify and fix any issues, paying particular attention to [SECTION] syntax and to Mem_Buf_Limit, DB, Exclude_Path, and storage.total_limit_size settings.

Steps:
1. Generate the Fluent Bit configuration
2. Invoke devops-skills:fluentbit-validator skill with the config file
3. Review validation results
4. Fix any issues identified
5. Re-validate until all checks pass
6. Provide summary of generated config and validation status

Supported use cases: kubernetes-elasticsearch, kubernetes-loki, kubernetes-cloudwatch, kubernetes-opentelemetry, application-multiline, syslog-forward, file-tail-s3, http-kafka, multi-destination, prometheus-metrics, lua-filtering, stream-processor, custom

Example:

python3 scripts/generate_config.py --use-case kubernetes-elasticsearch --output fluent-bit.conf

Example files: kubernetes-elasticsearch.conf, kubernetes-loki.conf, kubernetes-opentelemetry.conf, application-multiline.conf, syslog-forward.conf, file-tail-s3.conf, http-input-kafka.conf, multi-destination.conf, prometheus-metrics.conf, lua-filtering.conf, stream-processor.conf, parsers.conf, full-production.conf, cloudwatch.conf
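Before shipping a custom parser like the custom-app example above, it helps to sanity-check its regex against a sample line. A quick Python check (the sample log line is made up; note that Python's re needs (?P<name>...) where Fluent Bit's Onigmo engine accepts (?<name>...)):

```python
import re

# Same pattern as the custom-app [PARSER] above, in Python named-group syntax
PATTERN = re.compile(
    r"^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"\[(?P<level>\w+)\] (?P<message>.*)$"
)

sample = "2024-05-01 12:34:56 [ERROR] connection refused"  # hypothetical line
m = PATTERN.match(sample)
assert m is not None, "parser regex did not match the sample line"
print(m.group("level"), "-", m.group("message"))  # -> ERROR - connection refused
```

If the regex fails here, it will also fail inside Fluent Bit, so this is a cheap first test before a full dry run.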