ELK: Installing ZooKeeper and Kafka


# zookeeper

## Introduction

### What is ZooKeeper

ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. All of these kinds of services are used in some form or another by distributed applications.

ZooKeeper is a coordination service for distributed applications, providing unified naming, message synchronization, service registration and discovery, and so on. It should itself be deployed for high availability; clusters usually have 3, 5, or 7 nodes, and the cluster as a whole stays available only while more than half of its nodes are up (compare etcd).
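For example, in the 2-node lab cluster built below, a majority is 2, so losing either node makes the whole cluster unavailable, while a 3-node cluster tolerates one failure. A quick way to check a node's state is ZooKeeper's built-in "four-letter word" commands, a minimal sketch assuming nc (netcat) is installed; these commands are enabled by default in ZooKeeper 3.4:

    echo ruok | nc 192.168.80.106 2181                 # replies "imok" if the server is running
    echo stat | nc 192.168.80.106 2181 | grep Mode     # shows "Mode: leader" or "Mode: follower"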

## Installation

### Node information

- 80.106: zk1
- 80.107: zk2

These share the same machines as the earlier ES cluster; make sure the nodes have enough memory.

### Installing the JDK

The 2 ES nodes, 106 and 107, already have the JDK installed. On a fresh node, just install OpenJDK with yum and configure the JAVA_HOME and PATH environment variables.
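A minimal sketch of that JDK setup on CentOS (the exact JAVA_HOME path varies with the OpenJDK build installed, so treat the path below as an assumption to verify):

    yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
    cat > /etc/profile.d/java.sh <<'EOF'
    # assumption: adjust to the actual JDK path on the node
    export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
    export PATH=$JAVA_HOME/bin:$PATH
    EOF
    source /etc/profile.d/java.sh
    java -version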

### Installing ZooKeeper

  1. Download the ZooKeeper package, extract it, and create a symlink

    [root@es1 ~]# cd /usr/local/src/
    [root@es1 src]# wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz
       
    [root@es1 src]# tar -xf zookeeper-3.4.14.tar.gz -C /usr/local/
    [root@es1 src]# ln -sv /usr/local/zookeeper-3.4.14/ /usr/local/zookeeper
    ‘/usr/local/zookeeper’ -> ‘/usr/local/zookeeper-3.4.14/’
    
  2. Edit the ZooKeeper configuration to form a 2-node cluster (note: a 2-node cluster becomes unavailable as soon as either node fails; this setup is for experimentation only)

    [root@es1 conf]# cp zoo_sample.cfg zoo.cfg
       
    [root@es1 conf]# vim zoo.cfg 
       
       
    [root@es1 conf]# cat zoo.cfg |grep -v '^#'
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/usr/local/zookeeper/data
    clientPort=2181
    maxClientCnxns=60
    server.1=192.168.80.106:2888:3888
    server.2=192.168.80.107:2888:3888
       
    Copy the sample configuration file, then change the data directory
    and add the entries for the cluster members.
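    For reference, each server.X line has the form below; X must match the node's myid written in the next steps:

    # server.<myid>=<host>:<quorum port>:<election port>
    # 2888 carries follower-to-leader traffic, 3888 carries leader-election traffic
    server.1=192.168.80.106:2888:3888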
    
  3. Create the data directory

    [root@es1 conf]# mkdir /usr/local/zookeeper/data
    
  4. Write this node's unique cluster id into the myid file

    [root@es1 conf]# echo 1 > /usr/local/zookeeper/data/myid
    
  5. Start the ZooKeeper service and check the cluster status

    [root@es1 conf]# /usr/local/zookeeper/bin/zkServer.sh start
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Starting zookeeper ... already running as process 1585.
       
    [root@es1 conf]# /usr/local/zookeeper/bin/zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Error contacting service. It is probably not running.
       
    The status becomes normal only after the other node has also been started:
    [root@es1 conf]# /usr/local/zookeeper/bin/zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Mode: follower
    
  6. Add the ZooKeeper startup command to rc.local so it runs at boot

    [root@es1 conf]# tail -1 /etc/rc.d/rc.local 
    /usr/local/zookeeper/bin/zkServer.sh start
    [root@es1 conf]# chmod +x /etc/rc.d/rc.local
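    On a systemd host, a unit file is a cleaner alternative to rc.local; a minimal sketch (the JAVA_HOME value is an assumption to adjust):

    # /etc/systemd/system/zookeeper.service
    [Unit]
    Description=Apache ZooKeeper
    After=network.target

    [Service]
    Type=forking
    # assumption: point JAVA_HOME at the actual JDK path on the node
    Environment=JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
    ExecStart=/usr/local/zookeeper/bin/zkServer.sh start
    ExecStop=/usr/local/zookeeper/bin/zkServer.sh stop
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

    Enable it with: systemctl daemon-reload && systemctl enable --now zookeeper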
    
  7. Operations on the other ZooKeeper node

    [root@es2 conf]# echo 2 > /usr/local/zookeeper/data/myid
       
    [root@es2 conf]# /usr/local/zookeeper/bin/zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Mode: leader
       
    The only difference is the id written to myid.
    Once both nodes are up, the status output shows one leader and one follower, so the deployment is healthy.
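    As a final check, the bundled client can connect to either node and list the root znodes:

    # a fresh cluster shows only the built-in /zookeeper node
    /usr/local/zookeeper/bin/zkCli.sh -server 192.168.80.107:2181 ls /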
    

# kafka

## Introduction

Kafka is an open-source distributed message queue under the Apache Software Foundation; it must be used together with ZooKeeper (in this version). Introduction: https://kafka.apache.org/intro

## Terminology

- broker: every machine in a Kafka cluster is a broker
- topic: a category of data; one kind of data stream maps to one topic. A topic is a logical concept, and its data can be distributed across multiple brokers
- partition: a physical concept; a partition is a directory on a broker holding that partition's data and index files. A topic consists of one or more partitions, which may sit on different brokers
- producer: the message producer; the data it generates is written into a Kafka topic
- consumer: the message consumer; it reads data from the topics it subscribes to. Every consumer belongs to a consumer group (the default group if none is specified). Within one group, each message of a topic is consumed only once, which prevents the same message from being processed twice under concurrent consumption; consumers in different groups can each consume the same message of the same topic (see the sketch after this list)
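A hedged sketch of the consumer-group behavior, using the console consumer shipped with Kafka; it assumes the "hello" topic created later in this post, and the group names g1/g2 are made up for illustration:

    cd /usr/local/kafka/bin
    # two consumers started with the SAME group split the topic's partitions,
    # so each message is delivered to only one member of the group
    ./kafka-console-consumer.sh --bootstrap-server 192.168.80.106:9092 --topic hello --group g1
    # a consumer in a DIFFERENT group receives its own full copy of the stream
    ./kafka-console-consumer.sh --bootstrap-server 192.168.80.106:9092 --topic hello --group g2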

## Installation

### Node information

- 80.106
- 80.107

These are the same 2 nodes shared with the ES and ZooKeeper clusters.

### Preparing the JDK

Omitted; see the JDK step in the ZooKeeper section above.

### Installing Kafka

1. Download the Kafka package, extract it, and create a symlink

    [root@es1 ~]# wget https://archive.apache.org/dist/kafka/2.1.0/kafka_2.11-2.1.0.tgz
    [root@es1 ~]# tar -xf kafka_2.11-2.1.0.tgz -C /usr/local/
    [root@es1 ~]# ln -sv /usr/local/kafka_2.11-2.1.0/ /usr/local/kafka

2. Modify the configuration file

    [root@es1 kafka]# grep -v "^#" /usr/local/kafka/config/server.properties
    broker.id=1
    listeners=PLAINTEXT://192.168.80.106:9092
    zookeeper.connect=192.168.80.106:2181,192.168.80.107:2181

The settings to change are broker.id, the local listen address, and the zookeeper.connect string listing every ZooKeeper node.

Configuration on the other Kafka node:

    broker.id=2
    listeners=PLAINTEXT://192.168.80.107:9092
    zookeeper.connect=192.168.80.106:2181,192.168.80.107:2181
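The same edits can be scripted; a hedged sketch for node 107 (the sed patterns assume the stock server.properties layout):

    cd /usr/local/kafka/config
    sed -i 's/^broker\.id=.*/broker.id=2/' server.properties
    sed -i 's|^#\?listeners=.*|listeners=PLAINTEXT://192.168.80.107:9092|' server.properties
    sed -i 's/^zookeeper\.connect=.*/zookeeper.connect=192.168.80.106:2181,192.168.80.107:2181/' server.properties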




3. Start-up test (failed: not enough memory on the VM)

    [root@es1 kafka]# cat /usr/local/kafka/logs/kafkaServer.out
    OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c0000000, 1073741824, 0) failed; error='Cannot allocate memory' (errno=12)

    There is insufficient memory for the Java Runtime Environment to continue.
    Native memory allocation (mmap) failed to map 1073741824 bytes for committing reserved memory.
    An error report file with more information is saved as:
    /usr/local/kafka_2.11-2.1.0/hs_err_pid1863.log




4. Reduce the startup heap size and start again

    [root@es2 kafka]# export KAFKA_HEAP_OPTS="-Xmx1G -Xms128M"

Alternatively, give the VM more memory.
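The override works because kafka-server-start.sh only applies its default heap when KAFKA_HEAP_OPTS is empty; the relevant fragment of the script looks roughly like this:

    if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
        # default: a 1 GB heap, fully pre-allocated at start (-Xms1G),
        # which is what fails on a small VM
        export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
    fi

Note that the export only lives in the current shell; persist it (for example in /etc/profile.d/) if Kafka is started from rc.local.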




5. Foreground start-up test succeeds

    [root@es1 ~]# /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties
    ...
    [2020-12-14 15:27:18,204] INFO Kafka commitId : 809be928f1ae004e (org.apache.kafka.common.utils.AppInfoParser)
    [2020-12-14 15:27:18,215] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)




6. Start in the background

    [root@es1 ~]# /usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties




### Testing Kafka and ZooKeeper with Logstash



1. Configure Logstash with Kafka as the output target

    [root@logstash2 conf.d]# vim kafka-zk-test.conf

    input {
      stdin {}            # input is standard input
    }
    output {
      kafka {
        topic_id => "hello"
        bootstrap_servers => "192.168.80.107:9092"
        batch_size => 5
      }
      stdout {
        codec => rubydebug
      }
    }

    [root@logstash2 conf.d]# logstash -f kafka-zk-test.conf -t

Test the config, then start it and type nihao at the console:

    [root@logstash2 conf.d]# logstash -f kafka-zk-test.conf
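To confirm the message actually reached Kafka, read the topic back with the console consumer shipped in /usr/local/kafka/bin (a quick check, not part of the original session):

    ./kafka-console-consumer.sh --bootstrap-server 192.168.80.106:9092 --topic hello --from-beginning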




2. Inspect the topic in Kafka

    [root@es1 bin]# ./kafka-topics.sh --zookeeper 192.168.80.106:2181 --list

After Logstash starts, a new topic appears:

    [root@es1 bin]# ./kafka-topics.sh --zookeeper 192.168.80.106:2181 --list
    hello
    [root@es1 bin]# ./kafka-topics.sh --zookeeper 192.168.80.106:2181 --topic hello --describe
    Topic:hello    PartitionCount:1    ReplicationFactor:1    Configs:
        Topic: hello    Partition: 0    Leader: 1    Replicas: 1    Isr: 1
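The auto-created topic has 1 partition and 1 replica (the broker defaults). With 2 brokers available, a topic can instead be created with explicit replication so its data survives a broker failure; a sketch (the topic name is made up):

    ./kafka-topics.sh --zookeeper 192.168.80.106:2181 --create --topic hello-replicated --partitions 2 --replication-factor 2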



