One Article to Understand Filebeat, the Log Collection Tool
This article is based on Filebeat 7.7.0. Start by downloading and extracting the release:
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.7.0-linux-x86_64.tar.gz
tar -xzvf filebeat-7.7.0-linux-x86_64.tar.gz
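After extracting, a quick way to confirm the binary runs (a minimal check, assuming the default directory name from the tarball):

cd filebeat-7.7.0-linux-x86_64
./filebeat version

The main subcommands exposed by the binary are: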
export # export the current configuration or index template
run # run Filebeat (the default command)
test # test the configuration
keystore # manage the secrets keystore
modules # manage modules
setup # set up the initial environment
The keystore lets the configuration reference secrets instead of storing them in plain text, for example:
output.elasticsearch.password: "${ES_PWD}"
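A minimal sketch of how that keystore entry is created (ES_PWD is just the key name used in the example above):

./filebeat keystore create
./filebeat keystore add ES_PWD

Below is a reference of the most commonly used log input options.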
type: log # the input type is log
enabled: true # whether this log input configuration takes effect
paths: # the log files to monitor; patterns are handled by Go's glob function, and configured directories are not searched recursively. For example, if you configure:
- /var/log/*/*.log # Filebeat looks for files ending in ".log" in every subdirectory of /var/log, but not for ".log" files directly under /var/log.
recursive_glob.enabled: # enables recursive glob expansion of **, e.g. /foo/** expands to /foo, /foo/*, /foo/*/*, and so on
encoding: # the encoding of the monitored files; both plain and utf-8 can handle Chinese logs
exclude_lines: ['^DBG'] # drop lines that match any of the regular expressions
include_lines: ['^ERR', '^WARN'] # keep only lines that match any of the regular expressions
harvester_buffer_size: 16384 # buffer size in bytes used by each harvester when reading a file
max_bytes: 10485760 # maximum number of bytes a single log message may have; everything beyond max_bytes is discarded and not sent. The default is 10MB (10485760)
exclude_files: ['.gz$'] # list of regular expressions matching files that Filebeat should ignore
ignore_older: 0 # default 0 (disabled); can be set to 2h, 2m, etc. Note that ignore_older must be greater than close_inactive. Files that have not been updated within this period, or have never been harvested, are ignored.
close_* # the close_* options close a harvester after certain criteria or a timeout. Closing a harvester means closing the file handle. If the file is updated after the harvester was closed, it is picked up again once scan_frequency has elapsed; but if the file is moved or deleted while the harvester is closed, Filebeat cannot pick it up again, and any data the harvester had not yet read is lost.
close_inactive # when enabled, the file handle is closed if the file has not been read within the specified time.
The countdown starts from the last log line read, not from the file's modification time.
If a closed file changes again, a new harvester is started after the next scan_frequency run.
It is recommended to use a value larger than the interval at which the log file is typically updated, and to configure separate prospectors (inputs) for log files with different update rates.
Filebeat uses an internal timestamp for this, restarting the countdown each time the last line is read; values are written like 2h or 5m.
close_renamed # when enabled, Filebeat closes the file handle when the file is renamed or moved
close_removed # when enabled, Filebeat closes the file handle when the file is deleted; if this option is enabled, clean_removed must be enabled as well
close_eof # suitable for files that are written only once; Filebeat closes the file handle as soon as EOF is reached
close_timeout # when enabled, Filebeat gives each harvester a predefined lifetime and closes the file handle once that time is reached, whether or not the file is still being read.
close_timeout must not be set equal to ignore_older, otherwise a file that is updated again may never be read. If the output never emits an event, the timeout does not start; at least one event must be sent before the harvester is closed.
A value of 0 disables this option.
clean_inactive # removes the state of previously harvested files from the registry file.
The value must be greater than ignore_older + scan_frequency to make sure no state is removed while a file is still being collected.
This option helps keep the registry file small, especially when large numbers of new files are generated every day.
It can also help avoid the inode-reuse problem Filebeat has on Linux.
clean_removed # when enabled, the file state is removed from the registry if the file can no longer be found on disk.
If close_removed is disabled, clean_removed must be disabled as well.
scan_frequency # how often the prospector checks the configured paths for new files; default 10s
tail_files: # if true, Filebeat starts reading new files at their end and sends each newly appended line as an event, instead of re-sending the whole file from the beginning.
symlinks: # allows Filebeat to harvest symlinks in addition to regular files. When harvesting a symlink, Filebeat opens and reads the original file, even though it reports the symlink's path.
backoff: # controls how aggressively Filebeat checks an open file for updates; default 1s. It defines how long Filebeat waits before checking the file again after reaching EOF.
max_backoff: # the maximum time Filebeat waits before checking a file again after reaching EOF
backoff_factor: # the factor by which the backoff wait time grows on each retry; default 2
harvester_limit: # limits the number of harvesters a single prospector starts in parallel, which directly affects the number of open file handles
tags # adds tags to each event, useful for filtering, e.g. tags: ["json"]
fields # optional extra fields added to the output; values can be scalars, tuples, dictionaries, or other nested types.
By default they are stored under a "fields" sub-dictionary, for example:
filebeat.inputs:
- type: log
  fields:
    app_id: query_engine_12
fields_under_root # if true, the custom fields are stored at the top level of the output document instead of under the fields sub-dictionary
multiline.pattern # the regexp pattern that must be matched
multiline.negate # whether the pattern match is negated; default false.
With the pattern '^b', match: after, and the default negate: false, consecutive lines that start with b are appended to the previous line that does not start with b;
with negate: true, the logic is inverted, and lines that do not start with b are appended to the previous line that does.
multiline.match # specifies whether matching lines are combined into the event before or after, in combination with negate above (see the combined sketch after this list)
multiline.max_lines # maximum number of lines that can be combined into one event; additional lines are discarded. Default 500
multiline.timeout # if no new matching line is found within this time after a new event has started, the event is sent anyway; default 5s
max_procs # the maximum number of CPUs that can be used simultaneously; default is the number of logical CPUs available on the system
name # a name for this Filebeat instance; defaults to the hostname
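To tie several of the options above together, here is a hedged sketch of a single input that combines the lifecycle, multiline, and custom-field settings just described; the path and the timing values are illustrative placeholders, not taken from the original configuration:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/myapp/*.log          # placeholder path
  encoding: utf-8
  scan_frequency: 10s
  close_inactive: 5m
  ignore_older: 2h                  # must be greater than close_inactive
  clean_inactive: 3h                # must be greater than ignore_older + scan_frequency
  multiline.pattern: '^\['          # a line starting with [ begins a new event
  multiline.negate: true
  multiline.match: after            # non-matching lines are appended to the previous matching line
  fields:
    app_id: query_engine_12
  fields_under_root: true

With this input, a log record whose first line starts with "[" and whose continuation lines (such as a stack trace) do not is shipped as a single event.

The first complete example below collects the Elasticsearch slow logs and sends them to Logstash: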
#=========================== Filebeat inputs =============================
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
# Change to true to enable this input configuration.
enabled: true
# Paths that should be crawled and fetched. Glob based paths.
paths: # multiple log paths can be configured
- /var/logs/es_aaa_index_search_slowlog.log
- /var/logs/es_bbb_index_search_slowlog.log
- /var/logs/es_ccc_index_search_slowlog.log
- /var/logs/es_ddd_index_search_slowlog.log
#- c:\programdata\elasticsearch\logs\*
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ['^DBG']
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
#include_lines: ['^ERR', '^WARN']
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
#exclude_files: ['.gz$']
# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
### Multiline options
# Multiline can be used for log messages spanning multiple lines. This is common
# for Java Stack Traces or C-Line Continuation
# The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
#multiline.pattern: ^\[
# Defines if the pattern set under pattern should be negated or not. Default is false.
#multiline.negate: false
# Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
# that was (not) matched before or after, or as long as a pattern is not matched based on negate.
# Note: After is the equivalent to previous and before is the equivalent to next in Logstash
#multiline.match: after
#================================ Outputs =====================================
#----------------------------- Logstash output --------------------------------
output.logstash:
# The Logstash hosts; multiple hosts are listed so that load balancing can be used
hosts: ["192.168.110.130:5044","192.168.110.131:5044","192.168.110.132:5044","192.168.110.133:5044"]
loadbalance: true # enable load balancing across the hosts above
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
The corresponding Logstash pipeline listens for Beats on port 5044 and forwards the events to Elasticsearch:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.110.130:9200"] # multiple hosts can be configured here
    index => "query-%{+yyyyMMdd}"
  }
}
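Before starting Filebeat against this pipeline, the configuration and the connection to the output can be checked with the test subcommand listed earlier (a small sketch; the -c path assumes filebeat.yml sits in the current directory):

./filebeat test config -c filebeat.yml
./filebeat test output -c filebeat.yml

The second complete example skips Logstash and writes directly to Elasticsearch. It is based on the default filebeat.yml shipped with the distribution: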
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
#=========================== Filebeat inputs =============================
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
# Change to true to enable this input configuration.
enabled: true
# Paths that should be crawled and fetched. Glob based paths.
paths:
- /var/logs/es_aaa_index_search_slowlog.log
- /var/logs/es_bbb_index_search_slowlog.log
- /var/logs/es_ccc_index_search_slowlog.log
- /var/logs/es_dddd_index_search_slowlog.log
#- c:\programdata\elasticsearch\logs\*
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ['^DBG']
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
#include_lines: ['^ERR', '^WARN']
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
#exclude_files: ['.gz$']
# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
### Multiline options
# Multiline can be used for log messages spanning multiple lines. This is common
# for Java Stack Traces or C-Line Continuation
# The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
#multiline.pattern: ^\[
# Defines if the pattern set under pattern should be negated or not. Default is false.
#multiline.negate: false
# Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
# that was (not) matched before or after, or as long as a pattern is not matched based on negate.
# Note: After is the equivalent to previous and before is the equivalent to next in Logstash
#multiline.match: after
#============================= Filebeat modules ===============================
filebeat.config.modules:
# Glob pattern for configuration loading
path: ${path.config}/modules.d/*.yml
# Set to true to enable config reloading
reload.enabled: false
# Period on which files under path should be checked for changes
#reload.period: 10s
#==================== Elasticsearch template setting ==========================
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
name: filebeat222
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
#cloud.auth:
#================================ Outputs =====================================
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
# Array of hosts to connect to.
hosts: ["192.168.110.130:9200","92.168.110.131:9200"]
# Protocol - either `http` (default) or `https`.
#protocol: "https"
# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
username: "elastic"
password: "${ES_PWD}" #通過keystore設(shè)置密碼
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify and additional path, the scheme is required: http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
host: "192.168.110.130:5601" #指定kibana
username: "elastic" #用戶
password: "${ES_PWD}" #密碼,這里使用了keystore,防止明文密碼
# Kibana Space ID
# ID of the Kibana Space into which the dashboards should be loaded. By default,
# the Default Space will be used.
#space.id:
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
# Array of hosts to connect to.
hosts: ["192.168.110.130:9200","192.168.110.131:9200"]
# Protocol - either `http` (default) or `https`.
#protocol: "https"
# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
username: "elastic" #es的用戶
password: "${ES_PWD}" # es的密碼
#這里不能指定index,因?yàn)槲覜]有配置模板,會自動生成一個名為filebeat-%{[beat.version]}-%{+yyyy.MM.dd}的索引
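If a custom index name is needed instead of the default, a hedged sketch of the additional settings that would typically be required in 7.x (the template name, pattern, and index below are only examples, and ILM has to be disabled for a custom index to take effect):

setup.ilm.enabled: false
setup.template.name: "query"
setup.template.pattern: "query-*"
output.elasticsearch:
  index: "query-%{+yyyy.MM.dd}"

Filebeat also ships with ready-made modules. To enable and use the elasticsearch module: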
cd filebeat-7.7.0-linux-x86_64
./filebeat modules enable elasticsearch
./filebeat modules list
./filebeat setup -e
./filebeat -e
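After the module is enabled, its settings live in modules.d/elasticsearch.yml. A hedged sketch of pointing the slowlog fileset at one of the slow-log files used earlier (fileset names and the var.paths override are standard module options, but the exact layout of your module file may differ):

- module: elasticsearch
  slowlog:
    enabled: true
    var.paths:
      - /var/logs/es_aaa_index_search_slowlog.log

Running ./filebeat setup -e then loads the index template and Kibana dashboards, and ./filebeat -e starts Filebeat in the foreground with the module active.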