Setting Up a ZooKeeper + Kafka Cluster


Kafka overview

Kafka website: http://kafka.apache.org/

Kafka downloads: http://kafka.apache.org/downloads

Kafka quickstart guide: http://kafka.apache.org/quickstart

Newer versions of Kafka ship with a bundled ZooKeeper; this post records how to build a Kafka cluster using that bundled ZooKeeper.
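For reference, the Kafka tarball includes a wrapper script for its bundled ZooKeeper, so a node can also be started from the Kafka directory with:

./bin/zookeeper-server-start.sh -daemon config/zookeeper.properties

The steps below instead edit ZooKeeper's own zoo.cfg and start it with zkServer.sh, so they assume the standard ZooKeeper scripts are available on each host.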

Environment

Three hosts: 192.168.75.128, 192.168.75.130, 192.168.75.131

Configuration

1. Make a copy of the ZooKeeper sample configuration file
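ZooKeeper ships a template at conf/zoo_sample.cfg; assuming an install directory of /opt/zookeeper (adjust the path to your layout), copy it into place as zoo.cfg:

cd /opt/zookeeper

cp conf/zoo_sample.cfg conf/zoo.cfg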

2. Edit zoo.cfg:

# The number of milliseconds of each tick

tickTime=2000

# The number of ticks that the initial

# synchronization phase can take

initLimit=10

# The number of ticks that can pass between

# sending a request and getting an acknowledgement

syncLimit=5

# the directory where the snapshot is stored.

# do not use /tmp for storage, /tmp here is just

# example sakes.

dataDir=/data/zookeeper/data

dataLogDir=/data/zookeeper/log

# the port at which the clients will connect

clientPort=2181

# the maximum number of client connections.

# increase this if you need to handle more clients

#maxClientCnxns=60

#

# Be sure to read the maintenance section of the

# administrator guide before turning on autopurge.

#

# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance

#

# The number of snapshots to retain in dataDir

#autopurge.snapRetainCount=3

# Purge task interval in hours

# Set to "0" to disable auto purge feature

#autopurge.purgeInterval=1

# server.N=host:peerPort:electionPort; 2888 and 3888 are ZooKeeper's
# quorum and leader-election ports, not Kafka ports
server.1=192.168.75.128:2888:3888

server.2=192.168.75.130:2888:3888

server.3=192.168.75.131:2888:3888

3. Use the same zoo.cfg on all three hosts, then create a myid file in each node's ZooKeeper data directory (the dataDir set above); its value must match that host's server.N entry:

192.168.75.128 gets the value 1

192.168.75.130 gets the value 2

192.168.75.131 gets the value 3
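For example, on 192.168.75.128 (using the dataDir configured above):

echo 1 > /data/zookeeper/data/myid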

4. Start ZooKeeper from its bin directory on all three hosts:

./zkServer.sh start
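Once all three nodes are up, verify each node's role; one should report "leader" and the other two "follower":

./zkServer.sh status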

5. Download Kafka from the official downloads page: http://kafka.apache.org/downloads
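For example, fetching and unpacking a release from the Apache archive (the version shown here is only an example; pick a current one from the downloads page):

wget https://archive.apache.org/dist/kafka/2.1.0/kafka_2.11-2.1.0.tgz

tar -xzf kafka_2.11-2.1.0.tgz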

6. Extract the archive and edit config/server.properties:

# broker.id must be unique per broker; this guide keeps it equal to the
# value in the host's ZooKeeper myid file (1, 2, or 3)
broker.id=1

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from

# java.net.InetAddress.getCanonicalHostName() if not configured.

#  FORMAT:

#    listeners = listener_name://host_name:port

#  EXAMPLE:

#    listeners = PLAINTEXT://your.host.name:9092

listeners=PLAINTEXT://192.168.75.128:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,

# it uses the value for "listeners" if configured.  Otherwise, it will use the value

# returned from java.net.InetAddress.getCanonicalHostName().

advertised.listeners=PLAINTEXT://192.168.75.128:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details

#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network

num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O

num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server

socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server

socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)

socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files

log.dirs=/data/kafka-logs

# The default number of log partitions per topic. More partitions allow greater

# parallelism for consumption, but this will also result in more files across

# the brokers.

num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.

# This value is recommended to be increased for installations with data dirs located in RAID array.

num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"

# For anything other than development testing, a value greater than 1
# (such as 3) is recommended to ensure availability.

offsets.topic.replication.factor=1

transaction.state.log.replication.factor=1

transaction.state.log.min.isr=1
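The excerpt above is for 192.168.75.128; on the other two brokers, broker.id, listeners, and advertised.listeners must be changed to that host's own values. Each broker's server.properties also needs zookeeper.connect pointing at all three ZooKeeper nodes (the stock file defaults to localhost:2181, which will not form a cluster):

zookeeper.connect=192.168.75.128:2181,192.168.75.130:2181,192.168.75.131:2181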

7. Start Kafka from its bin directory:

./kafka-server-start.sh -daemon ../config/server.properties # run in the background
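To confirm the broker came up, watch the server log (path assumed relative to the bin directory) for a "started" message from KafkaServer:

tail -f ../logs/server.log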

Testing Kafka:

Create a topic named mytest from any one of the hosts:

./kafka-topics.sh --create --zookeeper 192.168.75.128:2181,192.168.75.130:2181,192.168.75.131:2181 --replication-factor 3 --partitions 3 --topic mytest
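To verify replication across the cluster, describe the topic; each of the three partitions should show a leader plus replicas and an ISR spanning broker ids 1, 2, and 3:

./kafka-topics.sh --describe --zookeeper 192.168.75.128:2181 --topic mytest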

List all topics:

./kafka-topics.sh --zookeeper 192.168.75.128:2181 --list

Start a console producer on one host:

./kafka-console-producer.sh --broker-list 192.168.75.128:9092 --topic mytest

Start console consumers on the other two hosts:

./kafka-console-consumer.sh --bootstrap-server 192.168.75.130:9092 --topic mytest --from-beginning

./kafka-console-consumer.sh --bootstrap-server 192.168.75.131:9092 --topic mytest --from-beginning

Type messages into the producer; both consumers should print them as they arrive.




