ELK Deployment

ELK Introduction

ELK is the collective name for Elasticsearch, Logstash, and Kibana; together the three form a log collection and processing system:

  • logstash: collects logs, such as those produced by applications like Tomcat and Nginx;
  • elasticsearch: stores the log data and serves search and retrieval queries;
  • kibana: the UI layer; it pulls data from Elasticsearch and presents it graphically;

Because Logstash is relatively heavyweight, the collection tier can instead use the Beats family (Filebeat, Metricbeat, and so on), which are much lighter and are deployed on every node whose logs need to be collected; Logstash then becomes optional, acting as an intermediate relay for aggregation and filtering. So besides ELK there are also EFK, EMK, and similar variants, but they are essentially the same thing; collectively they are called the Elastic Stack and together they form a log processing system.

Official documentation: https://www.elastic.co/guide/index.html

Example architecture diagram:

For larger data volumes, a buffering layer such as Redis or Kafka needs to be introduced; smaller deployments can do without it (a configuration sketch of this buffered variant follows the diagram below).

image-20201211152030167
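
A minimal sketch of what the buffered variant looks like on the Logstash side, assuming a Redis list named "app-logs" on an illustrative host 192.168.80.110 (none of these names are part of this deployment): the shipping instance pushes events into Redis, and the indexing instance pulls them back out and writes them to Elasticsearch.

# shipper instance: push collected events into a Redis list
output {
  redis { host => "192.168.80.110" data_type => "list" key => "app-logs" }
}

# indexer instance: pull events from the same list and index them into ES
input {
  redis { host => "192.168.80.110" data_type => "list" key => "app-logs" }
}
output {
  elasticsearch { hosts => ["192.168.80.107:9200"] index => "app-logs-%{+YYYY.MM.dd}" }
}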

Elasticsearch Deployment

Overview

Elasticsearch is the distributed search and analytics engine at the heart of the Elastic Stack. Logstash and Beats facilitate collecting, aggregating, and enriching your data and storing it in Elasticsearch. Kibana enables you to interactively explore, visualize, and share insights into your data and manage and monitor the stack. Elasticsearch is where the indexing, search, and analysis magic happens.

Elasticsearch provides near real-time search and analytics for all types of data. Whether you have structured or unstructured text, numerical data, or geospatial data, Elasticsearch can efficiently store and index it in a way that supports fast searches. You can go far beyond simple data retrieval and aggregate information to discover trends and patterns in your data. And as your data and query volume grows, the distributed nature of Elasticsearch enables your deployment to grow seamlessly right along with it.

Node Planning

Two machines in total form the ES cluster; IPs:

  • 192.168.80.106 (node1)
  • 192.168.80.107 (node2)

Installing JDK 8

[root@node1 ~]# yum install -y java-1.8.0-openjdk-devel
This automatically installs the OpenJDK packages and the other required dependencies.

[root@node1 ~]# cat /etc/profile.d/java.sh 
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
export PATH=$JAVA_HOME/bin:$PATH

[root@node1 ~]# source /etc/profile.d/java.sh
This configures the Java environment variables.

Installing Elasticsearch

Elasticsearch supports several installation methods; the RPM package is used here.

1. First add a data disk

[root@node1 ~]# lsblk 
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   40G  0 disk 
├─sda1   8:1    0    1G  0 part /boot
└─sda2   8:2    0   39G  0 part /
sr0     11:0    1  4.2G  0 rom  
[root@node1 ~]# for i in /sys/class/scsi_host/host*/scan;do echo "- - -" >$i;done
[root@node1 ~]# lsblk 
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   40G  0 disk 
├─sda1   8:1    0    1G  0 part /boot
└─sda2   8:2    0   39G  0 part /
sdb      8:16   0   50G  0 disk 
sr0     11:0    1  4.2G  0 rom  

Format and mount it:
[root@node1 ~]# mkfs.xfs /dev/sdb
meta-data=/dev/sdb               isize=512    agcount=4, agsize=3276800 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=13107200, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=6400, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@node1 ~]# mkdir /data
[root@node1 ~]# mount /dev/sdb /data/
[root@node1 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb         50G   33M   50G   1% /data

Remember to add the mount to /etc/fstab so it persists across reboots.
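
For example, an entry like the following could be appended (a minimal sketch that uses the device path directly; a UUID obtained from blkid is more robust), and then verified with mount -a:

[root@node1 ~]# echo '/dev/sdb /data xfs defaults 0 0' >> /etc/fstab
[root@node1 ~]# mount -a    # returns silently if the fstab entry is valid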

2. Download the RPM package

The Tsinghua mirror is used here; note that every Elastic Stack component must be exactly the same version.

https://mirrors.tuna.tsinghua.edu.cn/elasticstack/6.x/yum/6.8.1/

[root@node1 ~]# ll
total 669164
-rw-------. 1 root root      1391 Aug  8 12:02 anaconda-ks.cfg
-rw-r--r--  1 root root 148535665 Dec 10 17:09 elasticsearch-6.8.1.rpm
-rw-r--r--  1 root root   2916002 Dec 10 17:01 elasticsearch-6.8.1.rpm.1
-rw-r--r--  1 root root 533765727 Oct 20  2017 harbor-offline-installer-v1.2.2.tgz
[root@node1 ~]# rpm -ivh elasticsearch-6.8.1.rpm
warning: elasticsearch-6.8.1.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Updating / installing...
   1:elasticsearch-0:6.8.1-1          ################################# [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service
Created elasticsearch keystore in /etc/elasticsearch

Post-installation Configuration

1. Edit the configuration file (YAML format)

[root@node1 ~]# vim /etc/elasticsearch/elasticsearch.yml 
[root@node1 ~]# grep -v "^#" /etc/elasticsearch/elasticsearch.yml 
cluster.name: myelk                # cluster name
node.name: node-1                  # this node's name within the cluster
path.data: /data/esdata            # data directory
path.logs: /data/eslog             # log directory
bootstrap.memory_lock: true        # lock the JVM heap in memory to prevent swapping (default heap size is 1g)
network.host: 0.0.0.0              # listen address
http.port: 9200                    # listen port
discovery.zen.ping.unicast.hosts: ["192.168.80.106", "192.168.80.107"]   # IPs of the cluster members

Copy this configuration to the other node in the cluster and adjust it accordingly; on that node, likewise install the JDK and configure the environment variables, prepare the partition that will store the ES data, and download and install the ES RPM package.

[root@node2 ~]# grep -v "^#" /etc/elasticsearch/elasticsearch.yml 
cluster.name: myelk
node.name: node-2
path.data: /data/esdata
path.logs: /data/eslog
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.80.106", "192.168.80.107"]

2. Change the owner of the data directory

chown -R elasticsearch.elasticsearch /data/

3. Increase the memory settings

[root@node1 ~]# vim /usr/lib/systemd/system/elasticsearch.service
LimitMEMLOCK=infinity


[root@node1 ~]# vim /etc/elasticsearch/jvm.options 
-Xms2g
-Xmx2g

First, the unit file is given LimitMEMLOCK=infinity so the service may lock memory without limit;
then the JVM options file pins both the minimum and maximum heap at 2g (the node must have enough RAM).
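
After editing the unit file, systemd has to reload it before the service is started; a minimal sequence, matching the hints printed by the RPM installer above:

[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl enable elasticsearch
[root@node1 ~]# systemctl start elasticsearch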

4. Errors that may cause startup to fail

  1. Insufficient memory

    With too little memory, the service keeps failing to start:

    Dec 10 17:45:35 node2 systemd: Started Elasticsearch.
    Dec 10 17:45:47 node2 systemd: elasticsearch.service: main process exited, code=exited, status=78/n/a
    Dec 10 17:45:47 node2 systemd: Unit elasticsearch.service entered failed state.
    Dec 10 17:45:47 node2 systemd: elasticsearch.service failed.
    
  2. The service cannot be run as root

    [2020-12-10T17:37:53,368][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [node-1] uncaught exception in thread [main]
    org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
       
    
  3. Invalid JVM heap parameters

    Invalid initial heap size: -Xms1.5g
    Error: Could not create the Java Virtual Machine.
    Error: A fatal exception has occurred. Program will exit.
       
    The heap size must be a whole number; for example, use -Xms1536m instead of -Xms1.5g;
    

Verification

Deployment succeeded; Elasticsearch is listening on 9200 (HTTP API) and 9300 (cluster transport):

[root@node1 ~]# ss -nlt
State      Recv-Q Send-Q                                                 Local Address:Port                                                                Peer Address:Port              
LISTEN     0      128                                                                *:22                                                                             *:*                  
LISTEN     0      100                                                        127.0.0.1:25                                                                             *:*                  
LISTEN     0      128                                                               :::9200                                                                          :::*                  
LISTEN     0      128                                                               :::9300                                                                          :::*                  
LISTEN     0      128                                                               :::22                                                                            :::*                  
LISTEN     0      100                                                              ::1:25                                                                            :::* 

image-20201210173931878

image-20201210174904561

Check cluster health

[root@node1 ~]# curl http://192.168.80.106:9200/_cluster/health?pretty=true
{
  "cluster_name" : "myelk",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

Here green means the cluster is fully healthy;
yellow means some replica shards are unassigned;
red means at least one primary shard is unassigned;

A Python script for probing ES cluster health:

[root@node1 ~]# ./test-es-healty.py 
50
[root@node1 ~]# cat test-es-healty.py 
#!/usr/bin/env python
#coding:utf-8
# Query the ES cluster health API and print 50 if the cluster is green,
# otherwise print 100 (the value can be consumed by an external monitor).
import json
import subprocess

obj = subprocess.Popen("curl -sXGET http://192.168.80.107:9200/_cluster/health?pretty=true",
                       shell=True, stdout=subprocess.PIPE)
data = obj.stdout.read()
status = json.loads(data).get("status")
if status == "green":
    print("50")
else:
    print("100")
[root@node1 ~]# ./test-es-healty.py 
50

Installing the ES head plugin

Kibana Deployment

Kibana Overview

Kibana is the visualization layer for ES: it pulls data through the ES APIs and presents it in a variety of graphical forms, with features such as geographic maps and time-series views that make exploring the data convenient;

Node Planning

Two machines in total, with one Kibana deployed alongside each ES node. Each Kibana is configured to connect to its local ES address and port, since the ES cluster synchronizes data internally; an Nginx instance can additionally be placed in front of the Kibanas for load balancing. IPs:

  • 192.168.80.106 (node1)
  • 192.168.80.107 (node2)

Installation

1. Download the RPM package with the same version as ES and install it directly

Tsinghua mirror download address: https://mirrors.tuna.tsinghua.edu.cn/elasticstack/6.x/yum/6.8.1/

[root@es1 ~]# rpm -ivh kibana-6.8.1-x86_64.rpm 

2. Edit the Kibana configuration

[root@es1 ~]# grep -v '^#' /etc/kibana/kibana.yml |grep -v '^$'
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]
i18n.locale: "zh-CN"
These are, respectively: the listen port, the listen address, the ES instance to connect to (local here), and the UI locale.

3. Do the same on the other node, adjusting the configuration accordingly

[root@es2 ~]# grep -v '^#' /etc/kibana/kibana.yml |grep -v '^$'
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]
i18n.locale: "zh-CN"

4. Start the service and enable it at boot

[root@es1 ~]# systemctl enable kibana;systemctl start kibana
Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /etc/systemd/system/kibana.service.
[root@es1 ~]# ss -nlt
State      Recv-Q Send-Q Local Address:Port                Peer Address:Port              
LISTEN     0      128                *:5601                           *:*     
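
Optionally, Kibana's own status API can be queried from the shell as a quick sanity check (an added check, not part of the original run); it returns JSON describing the overall state:

[root@es1 ~]# curl -s http://localhost:5601/api/status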

5. Access the web UI

image-20201211154105377

Basic Kibana Administration

Adding data

1. Add sample data in the Kibana UI on node 80.106

A web-log sample dataset is added here;

image-20201211155035236

2. Check on the other Kibana node that the data has synchronized

The newly added data is visible on 107;

image-20201211155149815

3. Check on the back end whether data appears in the ES data directories

The two ES nodes in the same cluster replicate data to each other, which also serves as a backup;

106:
[root@es1 ~]# ll /data/esdata/nodes/0/indices/
total 0
drwxr-xr-x 4 elasticsearch elasticsearch 29 Dec 11 15:40 9ckx4q-3ROWMxev8mQHcwA
drwxr-xr-x 4 elasticsearch elasticsearch 29 Dec 11 15:49 G0WRjQk6S6eOSB7I5AlDlQ
drwxr-xr-x 4 elasticsearch elasticsearch 29 Dec 11 15:40 rj1dL_VMTYSTIeg2Pvx3oA

107:
[root@es2 ~]# ll /data/esdata/nodes/0/indices/
total 0
drwxr-xr-x 4 elasticsearch elasticsearch 29 Dec 11 15:40 9ckx4q-3ROWMxev8mQHcwA
drwxr-xr-x 4 elasticsearch elasticsearch 29 Dec 11 15:49 G0WRjQk6S6eOSB7I5AlDlQ
drwxr-xr-x 4 elasticsearch elasticsearch 29 Dec 11 15:40 rj1dL_VMTYSTIeg2Pvx3oA
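
The replication can also be confirmed through the REST API instead of the filesystem; _cat/shards lists every shard together with the node holding it (an illustrative check, not from the original run):

[root@es1 ~]# curl -s 'http://192.168.80.106:9200/_cat/shards?v'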

Managing indices

1. Delete an index in the Kibana UI

image-20201211155621570

2. Confirm in the ES data directories on the back end

[root@es1 ~]# ll /data/esdata/nodes/0/indices/
total 0
drwxr-xr-x 4 elasticsearch elasticsearch 29 Dec 11 15:40 9ckx4q-3ROWMxev8mQHcwA
drwxr-xr-x 4 elasticsearch elasticsearch 29 Dec 11 15:40 rj1dL_VMTYSTIeg2Pvx3oA

[root@es2 ~]# ll /data/esdata/nodes/0/indices/
total 0
drwxr-xr-x 4 elasticsearch elasticsearch 29 Dec 11 15:40 9ckx4q-3ROWMxev8mQHcwA
drwxr-xr-x 4 elasticsearch elasticsearch 29 Dec 11 15:40 rj1dL_VMTYSTIeg2Pvx3oA

# Both ES nodes removed the sample data's index directory in step with each other;
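
Likewise, _cat/indices can confirm from the API side that the index is gone (illustrative check, not from the original run):

[root@es1 ~]# curl -s 'http://192.168.80.106:9200/_cat/indices?v'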

Logstash Deployment

Overview

Logstash is an open source data collection engine with real-time pipelining capabilities. Logstash can dynamically unify data from disparate sources and normalize the data into destinations of your choice. Cleanse and democratize all your data for diverse advanced downstream analytics and visualization use cases.

While Logstash originally drove innovation in log collection, its capabilities extend well beyond that use case. Any type of event can be enriched and transformed with a broad array of input, filter, and output plugins, with many native codecs further simplifying the ingestion process. Logstash accelerates your insights by harnessing a greater volume and variety of data.

In the ELK architecture, Logstash is responsible for collecting, filtering, and forwarding data. Logstash processes data in three stages, each implemented by plugins, and every record flowing through the pipeline is handled as an event. Logstash can collect from many kinds of data sources but is relatively heavyweight, hence the Beats family: the Beats you need are deployed on the nodes whose data is to be collected, the Beats do the collection, and Logstash aggregates and processes what they send (a minimal pipeline sketch follows the list below);

  • input
    • Data collection: ingests or receives data from various sources, such as file, syslog, redis, the Beats family, etc.
  • filter (optional)
    • Data processing and transformation: plugins such as grok, drop, clone, geoip, etc.
  • output
    • Data output: common destinations include es, file, statsd, etc.
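
A minimal sketch of a pipeline that exercises all three stages, assuming Nginx-style access logs as the source (the file path, grok pattern, and index name here are illustrative and not part of this deployment):

input {
  file { path => "/var/log/nginx/access.log" start_position => "beginning" }
}
filter {
  grok  { match => { "message" => "%{COMBINEDAPACHELOG}" } }   # parse the raw line into named fields
  geoip { source => "clientip" }                               # enrich with geo data derived from the client IP
}
output {
  elasticsearch { hosts => ["192.168.80.107:9200"] index => "nginx-parsed-%{+YYYY.MM.dd}" }
}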

Node Planning

One Logstash node; IP:

  • 192.168.80.108

Installation

1. Install the JDK

[root@node3 ~]# yum install -y java-1.8.0-openjdk-devel

2. Configure the JDK environment variables

[root@logstash ~]# cat /etc/profile.d/java.sh 
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
export PATH=$JAVA_HOME/bin:$PATH

3. Install Logstash

[root@logstash ~]# rpm -ivh logstash-6.8.1.rpm 
[root@logstash ~]# chown -R logstash.logstash /usr/share/logstash/data/

# Fix the ownership of Logstash's data directory

4. Configure the Logstash environment variable (PATH)

[root@logstash ~]# vim /etc/profile.d/logstash.sh
[root@logstash ~]# source !$
source /etc/profile.d/logstash.sh
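
The content of logstash.sh is not shown above; presumably it just prepends the Logstash bin directory to PATH so the logstash command can be run directly, e.g.:

export PATH=/usr/share/logstash/bin:$PATH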

Testing Logstash

Test: standard input to standard output

[root@logstash ~]# /usr/share/logstash/bin/logstash -e 'input {  stdin{} } output{ stdout { codec => rubydebug }}' 
hello  
# type hello as input

/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
      "@version" => "1",
          "host" => "logstash",
    "@timestamp" => 2020-12-11T08:28:45.073Z,
       "message" => "hello"
}
[INFO ] 2020-12-11 16:28:45.693 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
# the resulting output event

Test: standard input to file output

[root@logstash ~]# /usr/share/logstash/bin/logstash -e 'input { stdin{} } output { file { path => "/tmp/linux.txt"}}'
shuaiqi

Check the contents of the output file

[root@logstash ~]# cat /tmp/linux.txt 
{"@version":"1","message":"shuaiqi","host":"logstash","@timestamp":"2020-12-11T09:12:36.395Z"}

Test: standard input, output to Elasticsearch

1. Define the input and output on the command line

[root@logstash ~]# logstash -e 'input { stdin {}} output { elasticsearch {hosts => ["http://192.168.80.107:9200"] index => "linux-test-%{+YYYY.MM.dd}" } }'
hello

2. Create an index pattern in Kibana, matching indices by the linux-test prefix

image-20201211171002387

3. View it in Discover

image-20201211171142531

Collecting the system log and sending it to Elasticsearch

1. Write a configuration file defining the system log /var/log/messages as the input

[root@logstash ~]# cat /etc/logstash/conf.d/linux_mes.conf 
input {
  file {
   path => "/var/log/messages"
   start_position => "beginning"
   stat_interval => 3  # check interval: 3 seconds
   type => "messagelog"
  }
 
}
 
output {
  elasticsearch {
    hosts => ["192.168.80.107:9200"]
    index => "messagelog-7-100-%{+YYYY.MM.dd}"
  }
 
}

2. Check the syntax; start once the check passes

logstash -f /etc/logstash/conf.d/linux_mes.conf -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2020-12-11 16:47:55.214 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK


[root@logstash ~]# systemctl start logstash

[root@logstash ~]# logstash -f /etc/logstash/conf.d/linux_mes.conf 
...

3. Management -> Kibana -> Index Patterns -> Create index pattern; match indices that start with message

image-20201211165321179

4. Use the timestamp as the time filter field

image-20201211165453617

5. Once it is created, open the Discover page, add a filter for the index just defined (the one starting with message), and the data appears on the right

image-20201211165722677

Collecting from multiple inputs to multiple outputs

0. Install Nginx

yum install -y nginx
systemctl start nginx
curl localhost # generate an access-log entry

# Make sure the Nginx log file is readable by everyone so the user the Logstash process runs as can read it
[root@logstash ~]# chmod a+r /var/log/nginx/access.log

1. Create the configuration file

[root@logstash ~]# cat /etc/logstash/conf.d/nginx-and-syslog.conf 
input {
file {
   path => "/var/log/messages"
   start_position => "beginning"
   stat_interval => 3
   type => "messagelog"
  }
 
  file {
   path => "/var/log/nginx/access.log"
   start_position => "beginning"
   stat_interval => 3
   type => "nginx-log"
  }
 
}

# Two inputs are defined, both reading local files;
[When reading local files, note that Logstash normally runs as the logstash user, which must have read permission on the input files]


output {
  if [type] == "messagelog" {
    elasticsearch {
    hosts => ["192.168.80.107:9200"]
    index => "messagelog-%{+YYYY.MM.dd}"
  }}
 
  if [type] == "nginx-log" {
    elasticsearch {
    hosts => ["192.168.80.107:9200"]
    index => "nginx%{+YYYY.MM.dd}"
  }}
}
# Two outputs are defined, both targeting ES; the index value is the prefix matched when creating the index pattern in Kibana
# Two inputs and two outputs in total, with if conditions routing events by their type field

2. Check the syntax

[root@logstash ~]# logstash -f /etc/logstash/conf.d/nginx-and-syslog.conf -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2020-12-11 17:21:17.243 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK

3. Start it

[root@logstash ~]# logstash -f /etc/logstash/conf.d/nginx-and-syslog.conf

4. Create the index patterns in Kibana

image-20201211173515826

5. View the data on Kibana's Discover page

image-20201211173555016
