As part of improving the company's caching solution, a Codis cluster has been chosen as the main caching layer (Codis is an open-source Redis cluster solution developed by the Chinese company Wandoujia (Pea Pod); GitHub: https://github.com/CodisLabs/codis). Since Codis depends on a ZooKeeper cluster, this article describes how to set up a ZooKeeper cluster.
ZooKeeper is an open-source distributed coordination service for applications. It provides a simple set of primitives on top of which distributed applications can implement synchronization, configuration maintenance, and naming services.
Zookeeper design purpose
How Zookeeper works
1、 In a ZooKeeper cluster, each node takes one of the following 3 roles and can be in one of 4 states:
Roles: leader, follower, observer
States: leading, following, observing, looking
The core of ZooKeeper is atomic broadcast, the mechanism that keeps the servers in sync. The protocol that implements it is the Zab protocol (ZooKeeper Atomic Broadcast), which has two modes: recovery mode (leader election) and broadcast mode (state synchronization). When the service starts, or after the leader crashes, Zab enters recovery mode; recovery mode ends once a leader has been elected and a majority of servers have synchronized their state with the leader. This state synchronization guarantees that the leader and the servers hold the same system state.
To keep transactions ordered consistently, ZooKeeper tags each transaction with an increasing transaction id (zxid), and every proposal carries a zxid when it is issued. In the implementation, the zxid is a 64-bit number: the high 32 bits are the epoch, which identifies whether the leader has changed (each time a new leader is elected, a new epoch begins, marking that leader's reign), and the low 32 bits are an incrementing counter.
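As a quick illustration, the shell arithmetic below splits a zxid into its epoch and counter halves; the zxid value itself is made up for the example.

zxid=$((0x100000005))                     # hypothetical zxid, for illustration only
echo "epoch:   $(( zxid >> 32 ))"         # high 32 bits -> 1
echo "counter: $(( zxid & 0xFFFFFFFF ))"  # low 32 bits  -> 5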
While running, each server is in one of 4 states:
LOOKING: The current server does not know who the leader is and is searching.
LEADING: The current Server is the elected leader.
FOLLOWING: The leader has been elected, and the current server is synchronized with it.
OBSERVING: In most cases, observers behave exactly like followers, but they do not take part in elections or voting; they only accept (observe) the results of elections and voting.
Zookeeper cluster nodes
Lab environment

| Hostname | System | IP address |
|---|---|---|
| linux-node1 | CentOS release 6.8 | 192.168.1.148 |
| linux-node2 | CentOS release 6.8 | 192.168.1.149 |
| linux-node3 | CentOS release 6.8 | 192.168.1.150 |
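Optionally, you can map these hostnames to their IPs on every node; the rest of this article uses the IP addresses directly, so this step is only for convenience:

cat >> /etc/hosts << 'EOF'
192.168.1.148 linux-node1
192.168.1.149 linux-node2
192.168.1.150 linux-node3
EOF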
Two, Zookeeper installation
ZooKeeper requires a Java runtime, so the JDK must be installed. Note: ZooKeeper and the JDK need to be installed on every server. It is recommended to download the required packages locally and upload them to the servers, since downloading directly on the servers is usually too slow.
2.1、 JDK installation
JDK download address: http://www.oracle.com/technetwork/java/javase/downloads/index.html
rpm -ivh jdk-8u101-linux-x64.rpm
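After installing the RPM, a quick check that the JDK is available (the exact version string depends on the package you installed):

java -version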
2.2、 Zookeeper installation
Zookeeper link: http://zookeeper.apache.org/
wget http://mirrors.cnnic.cn/apache/zookeeper/zookeeper-3.4.8/zookeeper-3.4.8.tar.gz -P /usr/local/src/
tar zxvf /usr/local/src/zookeeper-3.4.8.tar.gz -C /opt
cd /opt && mv zookeeper-3.4.8 zookeeper
cd zookeeper
cp conf/zoo_sample.cfg conf/zoo.cfg
echo -e "# append zk_env\nexport PATH=$PATH:/opt/zookeeper/bin">>/etc/profile
Three, Zookeeper cluster configuration
Note: when building a ZooKeeper cluster, first stop any ZooKeeper instance that is already running on the nodes.
3.1、 Zookeeper configuration file modification
egrep -v "^#|^$" zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataLogDir=/opt/zookeeper/logs
dataDir=/opt/zookeeper/data
clientPort=2181
autopurge.snapRetainCount=500
autopurge.purgeInterval=24
server.1=192.168.1.148:2888:3888
server.2=192.168.1.149:2888:3888
server.3=192.168.1.150:2888:3888
mkdir -p /opt/zookeeper/{logs,data}
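Every node needs the same zoo.cfg and the same data/log directories. A minimal way to replicate them (assuming root SSH access between the nodes) is to copy the file and repeat the mkdir on the other two machines:

scp /opt/zookeeper/conf/zoo.cfg 192.168.1.149:/opt/zookeeper/conf/
scp /opt/zookeeper/conf/zoo.cfg 192.168.1.150:/opt/zookeeper/conf/
# then run mkdir -p /opt/zookeeper/{logs,data} on linux-node2 and linux-node3 as well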
3.2、 Configuration parameter description
tickTime is the basic time unit in milliseconds: the heartbeat interval between ZooKeeper servers, and between clients and servers. A heartbeat is sent every tickTime (2000 ms here).
initLimit is the maximum number of heartbeat intervals (tickTime) that a follower may take to connect to and synchronize with the leader when it first joins (the "client" here refers to a follower server connecting to the leader within the cluster, not a user client connecting to the ZooKeeper server).
If the leader has not received a response after 10 heartbeats, the connection is considered failed; the total timeout is 10*2000 ms = 20 seconds.
syncLimit limits how long a request/response exchange between the leader and a follower may take, expressed in tickTime units; here the limit is 5*2000 ms = 10 seconds.
dataDir, as the name implies, is the directory where ZooKeeper saves its data snapshots. By default the transaction log is also written there, unless dataLogDir redirects it, as in the configuration above.
clientPort is the port clients use to connect to the ZooKeeper server; ZooKeeper listens on this port and accepts client requests.
In server.A=B:C:D, A is a number identifying the server, B is the server's IP address, C is the port this server uses to exchange information with the cluster leader (data synchronization), and D is the port used for leader election if the leader fails.
3.3、 Create ServerID identification
In addition to modifying zoo.cfg, cluster mode requires a myid file, which must be placed in the dataDir directory.
The file contains a single number: the value of A from the server.A=B:C:D lines in zoo.cfg. Create the myid file under the dataDir path configured in zoo.cfg, giving each node its own value:
echo "1">/opt/zookeeper/data/myid
echo "2">/opt/zookeeper/data/myid
echo "3">/opt/zookeeper/data/myid
At this point, the relevant configuration has been completed
Four, Zookeeper cluster view
1、 Start the zookeeper node on each server:
/opt/zookeeper/bin/zkServer.sh start
Note: troubleshooting
If a ZooKeeper node fails to start, the most common causes are an incorrect zoo.cfg and iptables blocking the cluster ports.
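On CentOS 6 the firewall can simply be stopped (or, alternatively, open ports 2181, 2888 and 3888 instead of disabling it):

service iptables stop
chkconfig iptables off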
2、 Check the status of each node after startup:
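Running zkServer.sh status on each node shows its role; exactly one node should report leader and the other two follower:

/opt/zookeeper/bin/zkServer.sh status    # expect "Mode: leader" on one node and "Mode: follower" on the others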
Five, Zookeeper cluster connection
After the ZooKeeper cluster is set up, you can connect to it with the client script. To the client, the cluster behaves as a single whole; connecting to the cluster feels like using one dedicated service.
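For example, zkCli.sh can be pointed at the full member list, so the session does not depend on any single node:

/opt/zookeeper/bin/zkCli.sh -server 192.168.1.148:2181,192.168.1.149:2181,192.168.1.150:2181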
At this point, the entire ZooKeeper cluster has been built and verified.