Getting started with GlusterFS: install a cluster on CentOS 7 and record the steps.
There are three machines, shown in List-1 below, with the hostnames node1/node2/node3; the contents of List-1 are written into /etc/hosts on each machine.
192.168.33.20 node1
192.168.33.21 node2
192.168.33.22 node3
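The three lines of List-1 can be appended to /etc/hosts on every node. A minimal sketch follows; it writes to a copy under /tmp so it can be tried without root, so point the path at /etc/hosts for real use:

```shell
# Append the cluster hostnames (List-1) to the hosts file.
# Using a demo file under /tmp for illustration; set HOSTS_FILE=/etc/hosts
# when running for real (requires root).
HOSTS_FILE=/tmp/hosts.gluster-demo
: > "$HOSTS_FILE"                       # start from an empty demo file
cat >> "$HOSTS_FILE" <<'EOF'
192.168.33.20 node1
192.168.33.21 node2
192.168.33.22 node3
EOF
grep -c 'node[123]$' "$HOSTS_FILE"      # prints 3
```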
Install glusterfs (execute on all three nodes)
# Install glusterfs
yum install -y centos-release-gluster
yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
# Start the gluster service
systemctl start glusterd.service
systemctl enable glusterd.service
# Turn off the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
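Disabling firewalld is the quick route. If you would rather keep the firewall running, an alternative is to open only the ports GlusterFS uses; the range below is a reasonable sketch (24007 for the management daemon, plus a per-brick range starting at 49152 on modern releases), not a definitive list for every version:

```shell
# Alternative to stopping firewalld: open the GlusterFS ports instead.
firewall-cmd --permanent --add-port=24007-24008/tcp   # glusterd management
firewall-cmd --permanent --add-port=49152-49251/tcp   # brick port range
firewall-cmd --reload
```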
Execute the following List-3 on node1 to add node2 and node3 to the cluster
gluster peer probe node2
gluster peer probe node3
Check the cluster status on node1, as shown in List-4
[root@node1 db]# gluster peer status
Number of Peers: 2

Hostname: node2
Uuid: ab8dac2f-e5fb-4752-b70d-b0103a40f8ea
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: f13b4732-ae12-4b6c-b4eb-65fd7886588c
State: Peer in Cluster (Connected)
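The State line is the one that matters: every peer should read "Peer in Cluster (Connected)". A small sketch that counts connected peers; it is fed the sample output above so it can run without a live cluster, and on a real node you would pipe `gluster peer status` instead:

```shell
# Count connected peers from `gluster peer status` output.
# sample_status mirrors the List-4 output above; on a live node use:
#   gluster peer status | grep -c 'Peer in Cluster (Connected)'
sample_status='Number of Peers: 2

Hostname: node2
Uuid: ab8dac2f-e5fb-4752-b70d-b0103a40f8ea
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: f13b4732-ae12-4b6c-b4eb-65fd7886588c
State: Peer in Cluster (Connected)'
connected=$(printf '%s\n' "$sample_status" | grep -c 'Peer in Cluster (Connected)')
echo "connected peers: $connected"      # connected peers: 2
```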
At this point we could create a volume and use it directly, but we want the brick data to live on its own partition, so some extra work is needed. As shown in List-7, /dev/sdb is a newly added raw disk that I attached through the VirtualBox UI.
[root@node1 db]# fdisk -l

Disk /dev/sda: 10.5 GB, 10485760000 bytes, 20480000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000927b6

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048    20479999     9726976   8e  Linux LVM

Disk /dev/sdb: 3221 MB, 3221225472 bytes, 6291456 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
...
As shown in List-8, since /dev/sdb is 3 GB I allocate 2 GB of it to a logical volume; every node must execute this
pvcreate /dev/sdb   # optional: vgcreate will initialize the PV itself if you skip this
vgcreate vg_gluster /dev/sdb
lvcreate -n lv_gluster -L 2G vg_gluster
# Format the logical volume
mkfs.ext4 /dev/vg_gluster/lv_gluster
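A quick way to confirm the volume group and logical volume came out as expected (the names and sizes match the commands above; these need the real devices, so they are shown as a fragment):

```shell
# Sanity checks after the LVM steps (run on each node).
pvs /dev/sdb                         # /dev/sdb should belong to vg_gluster
lvs vg_gluster                       # shows lv_gluster with LSize ~2.00g
blkid /dev/vg_gluster/lv_gluster     # TYPE="ext4" after mkfs.ext4
```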
Mount the logical volume we created at /data_gluster, as shown in List-9; this must be executed on every node. Create the mount point first, then add the fstab entry and mount:
mkdir -p /data_gluster
echo "/dev/vg_gluster/lv_gluster /data_gluster ext4 defaults 0 0" >> /etc/fstab
mount -a
mount -l | grep gluster
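An fstab entry is six whitespace-separated fields: device, mount point, filesystem type, options, dump flag, and fsck pass. A small sketch that checks the entry used here before it is appended:

```shell
# Validate the fstab line before appending it to /etc/fstab.
entry="/dev/vg_gluster/lv_gluster /data_gluster ext4 defaults 0 0"
set -- $entry                        # word-split into the six fields
echo "fields=$# device=$1 mountpoint=$2 fstype=$3"
[ $# -eq 6 ] && echo "fstab entry is well-formed"
```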
On node1, use the following List-5 command to create a volume named db_volume
gluster volume create db_volume \
replica 3 node1:/data_gluster/db node2:/data_gluster/db node3:/data_gluster/db force
Activate the volume, as shown in List-6
gluster volume start db_volume
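Once started, the volume's state can be inspected with a few commonly used commands (these need the live cluster, so they are shown as a fragment):

```shell
gluster volume info db_volume      # "Status: Started" once activation worked
gluster volume status db_volume    # per-brick process and port information
# Later, to take the volume down and remove it:
#   gluster volume stop db_volume
#   gluster volume delete db_volume
```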
At this point the volume is started, but it still cannot be used directly; it has to be mounted first. As shown in List-10, mount the volume db_volume at the /mnt/gluster/db directory. Data must be written through /mnt/gluster/db; never operate on /data_gluster directly. This step is executed on node1. Later, when you no longer want the mount, you can run "umount -l /mnt/gluster/db".
mkdir -p /mnt/gluster/db
mount -t glusterfs node1:/db_volume /mnt/gluster/db
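To see replica 3 in action, write a file through the mount and check that it appears in the brick directory on all three nodes (the brick path is looked at only for verification; all real reads and writes should go through the mount). The file name here is just an illustration:

```shell
# On node1: write through the glusterfs mount point.
echo "hello gluster" > /mnt/gluster/db/hello.txt
# On node1, node2 and node3: the same file shows up in each node's brick.
ls -l /data_gluster/db/hello.txt
```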