Introduction to Beats
https://www.elastic.co/guide/en/beats/libbeat/current/beats-reference.html#beats-reference
Beats is a family of lightweight data shippers that can send data directly to Elasticsearch, or forward it to Logstash for further processing. Commonly used members include Filebeat, Auditbeat, and Packetbeat.
ELK with Filebeat
Node architecture
Compared with the plain ELK architecture, EFK simply swaps the original Logstash collector at the edge for a dedicated Beat such as Filebeat.
Installing Filebeat
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.8.1-x86_64.rpm
rpm -ivh filebeat-6.8.1-x86_64.rpm
Filebeat: collecting logs and writing them to Kafka
Edit the Filebeat configuration (YAML format) so that it reads the local system log and forwards it to the Kafka nodes;
Read the local system log:
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/messages
  fields:
    host: "192.168.80.105"
    type: "filebeat-syslog-80-105"
    app: "syslog"
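With `fields_under_root` left at its default (false), Filebeat nests these custom keys under a top-level `fields` object in each JSON event it ships. A minimal sketch of what a single Kafka message body might look like (the event values are illustrative, not captured output):

```python
import json

# Hypothetical Filebeat event as serialized to Kafka (illustrative values).
# The custom keys from the fields: section sit under "fields" because
# fields_under_root defaults to false.
event = {
    "@timestamp": "2020-12-14T11:02:37.000Z",
    "message": "Dec 14 19:02:37 logstash-105 systemd: Started filebeat.",
    "source": "/var/log/messages",
    "fields": {
        "host": "192.168.80.105",
        "type": "filebeat-syslog-80-105",
        "app": "syslog",
    },
}

payload = json.dumps(event)      # what actually lands in the Kafka topic
decoded = json.loads(payload)

# This nesting is what a Logstash conditional like [fields][app] relies on.
print(decoded["fields"]["app"])  # -> syslog
```

If `fields_under_root: true` were set instead, the keys would be promoted to the top level and the Logstash conditional below would have to reference `[app]` rather than `[fields][app]`.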
Output to Kafka:
output.kafka:
  hosts: ["192.168.80.106:9092","192.168.80.107:9092"]
  topic: "filebeat-syslog-80-105"
  partition.round_robin:
    reachable_only: true
  required_acks: 1             # wait for the leader's local write only
  compression: gzip            # enable compression
  max_message_bytes: 1000000   # maximum message size
[root@logstash-105 ~]# systemctl start filebeat
[root@logstash-105 ~]# systemctl status filebeat
On the Kafka side, check that the corresponding topic has been created:
[root@es2 ~]# /usr/local/kafka/bin/kafka-topics.sh --zookeeper 192.168.80.107:2181 --list
filebeat-syslog-80-105
hello
Logstash: reading from Kafka and writing to Elasticsearch
Edit the configuration on logstash-108 so that it reads data from Kafka and writes it into Elasticsearch;
[root@logstash-108 conf.d]# cat from-kafka.conf
input {
  kafka {
    topics => "filebeat-syslog-80-105"
    bootstrap_servers => "192.168.80.107:9092"
    codec => "json"
  }
}
output {
  if [fields][app] == "syslog" {
    elasticsearch {
      hosts => ["192.168.80.107:9200"]
      index => "filebeat-syslog-80-105-%{+YYYY.MM.dd}"
    }
  }
}
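The `%{+YYYY.MM.dd}` sprintf reference in the index name expands from each event's `@timestamp`, producing one index per day, while the `if [fields][app]` guard routes only events tagged `app: syslog`. A rough Python sketch of that routing decision (an assumed event shape, not Logstash internals):

```python
from datetime import datetime

def route(event):
    """Mimic the output conditional: return the target index name,
    or None if the event is not routed (illustrative only)."""
    if event.get("fields", {}).get("app") != "syslog":
        return None
    ts = datetime.fromisoformat(event["@timestamp"])
    # Joda-style %{+YYYY.MM.dd} -> daily index suffix
    return "filebeat-syslog-80-105-" + ts.strftime("%Y.%m.%d")

event = {"@timestamp": "2020-12-14T11:02:37+00:00",
         "fields": {"app": "syslog"}}
print(route(event))  # -> filebeat-syslog-80-105-2020.12.14
```

Daily indices like this make retention simple: old data is dropped by deleting whole indices rather than individual documents.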
Syntax-check and start Logstash: it starts without errors, but the index cannot be found from the Kibana UI.
[root@logstash-108 conf.d]# logstash -f from-kafka.conf
[INFO ] 2020-12-14 19:02:37.180 [Ruby-0-Thread-11: :1] ConsumerCoordinator - [Consumer clientId=logstash-0, groupId=logstash] Revoking previously assigned partitions []
[INFO ] 2020-12-14 19:02:37.181 [Ruby-0-Thread-11: :1] AbstractCoordinator - [Consumer clientId=logstash-0, groupId=logstash] (Re-)joining group
[INFO ] 2020-12-14 19:02:37.284 [Ruby-0-Thread-11: :1] AbstractCoordinator - [Consumer clientId=logstash-0, groupId=logstash] Successfully joined group with generation 1
[INFO ] 2020-12-14 19:02:37.287 [Ruby-0-Thread-11: :1] ConsumerCoordinator - [Consumer clientId=logstash-0, groupId=logstash] Setting newly assigned partitions [filebeat-syslog-80-105-0]
[INFO ] 2020-12-14 19:02:37.316 [Ruby-0-Thread-11: :1] Fetcher - [Consumer clientId=logstash-0, groupId=logstash] Resetting offset for partition filebeat-syslog-80-105-0 to offset 3777.
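The final log line ("Resetting offset ... to offset 3777") shows the consumer seeking to the end of the partition, which is consistent with the Kafka consumer's default `auto_offset_reset => latest` when a new consumer group has no committed offset: the 3777 messages already in the topic are skipped, and the index is only created once fresh events arrive. A toy model of that behaviour (plain Python, not the Kafka client):

```python
# Toy model of a partition log and a consumer that joins at the latest
# offset (the default when a new consumer group has no committed offset).
log = [f"old-message-{i}" for i in range(3777)]  # messages already in the topic

start = len(log)        # "Resetting offset ... to offset 3777"
consumed = log[start:]  # empty: every existing message is behind the consumer
assert consumed == []

# Only messages appended after the consumer joined are delivered:
log.append("new-message-3777")
consumed = log[start:]
print(consumed)  # -> ['new-message-3777']
```

So a quick way to verify the pipeline is to generate new entries on the source host (e.g. with `logger`) and then re-check the index list.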
The index does not appear in Elasticsearch either:
[root@es2 ~]# curl 'localhost:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open messagelog-2020.12.11 71PHxMv1T56xqEAsEJ95Vg 5 1 2 0 28.2kb 14.1kb
green open tcp-log-2020.12.12 YZ4nkU3WSRyxr_XRmhP7QA 5 1 2 0 25.6kb 12.8kb
green open messagelog-7-100-2020.12.11 Al2uqt1wRe2gHWEQuOkB5w 5 1 419 0 593.8kb 296.8kb
green open .kibana_1 rj1dL_VMTYSTIeg2Pvx3oA 1 1 28 2 242.3kb 124.1kb
green open nginx2020.12.11 g2AHe4WoSEaCNcwY4-p1bg 5 1 5 0 67.8kb 33.9kb
green open nginx-access-log-2020.12.12 BBNARQCuQl62Ad-HlAJC6A 5 1 11 0 116.5kb 58.2kb
green open rsyslog-80-106-2020.12.12 8HLfzyvIQDaBDHflK1fUBw 5 1 6 0 105kb 52.5kb
green open .kibana_task_manager 9ckx4q-3ROWMxev8mQHcwA 1 1 2 0 15kb 7.4kb
green open linux-test-2020.12.11 uzzSUWISQZ6ucwn8TGkDXQ 5 1 1 0 11.9kb 5.9kb
green open tomcat-log-2020.12.12 GAk7ia6iSA-b5cHcRLYa4A 5 1 8 0 100.3kb 50.1kb
green open mes-105-2020.12.13 m0PRk3BdTHit228H-I0tEg 5 1 13050 0 6.2mb 3.1mb
green open logstash-log-2020.12.12 wj_XsXvGR4qP9PZrE7e7cw 5 1 21 0 71.9kb 35.9kb
Consequently, the index pattern cannot be created in the Kibana UI either.