Copyright statement: This article is an original article by Shaon Puppet. Please indicate the original address for reprinting. Thank you very much. https://blog.csdn.net/wh211212/article/details/79412081
GlusterFS is a scale-out network file system. It aggregates many storage servers over Ethernet or InfiniBand with RDMA (remote direct memory access) into one large parallel network file system. It has applications in cloud computing and in fields such as biomedical sciences and document storage. Gluster is free software hosted by GNU and licensed under the AGPL; Gluster Inc. is its primary commercial sponsor and offers commercial products and Gluster-based solutions.
GlusterFS has a client/server architecture. Servers are typically deployed as storage bricks; each server runs a daemon called glusterfsd and exports a local file system as a volume. The GlusterFS client process connects to the servers over a protocol such as TCP/IP, InfiniBand, or SDP, and composes the remote volumes into one large composite volume using stackable translators. The resulting volume is mounted on the client machine through the userspace file system mechanism FUSE. Applications that do heavy file I/O can also link against the libglusterfs client library to access the servers directly and run the translators in-process, bypassing the file system layer and FUSE. Most GlusterFS functionality is implemented as translators.
Refer to the official GlusterFS Quick Start guide: http://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/
# Execute on both server1 and server2
lvcreate -n glusterfs -L 50G centos        # create a 50G logical volume in the "centos" volume group
mkfs.xfs -i size=512 /dev/mapper/centos-glusterfs
mkdir -p /data/brick1
echo '/dev/mapper/centos-glusterfs /data/brick1 xfs defaults 1 2' >> /etc/fstab
mount -a && mount
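Optionally, confirm the brick file system is mounted before installing GlusterFS (device and mount point follow from the steps above):
df -h /data/brick1     # should show /dev/mapper/centos-glusterfs mounted on /data/brick1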
yum install glusterfs-server -y
[root@ovirt ~]# systemctl start glusterd
[root@ovirt ~]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-03-01 11:50:37 CST; 6s ago
  Process: 28808 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 28809 (glusterd)
   CGroup: /system.slice/glusterd.service
           └─28809 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
Mar 01 11:50:37 ovirt.aniu.so systemd[1]: Starting GlusterFS, a clustered file-system server...
Mar 01 11:50:37 ovirt.aniu.so systemd[1]: Started GlusterFS, a clustered file-system server.
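The status output above shows the unit is disabled, so glusterd will not start automatically after a reboot; enable it on both servers if that is wanted:
systemctl enable glusterd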
# iptables
iptables -I INPUT -p all -s <ip-address> -j ACCEPT
# firewalld
firewall-cmd --add-service=glusterfs --permanent && firewall-cmd --reload
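If the predefined glusterfs firewalld service is not available on your distribution, the required ports can be opened directly instead. This is a sketch assuming the default glusterd management ports and the default brick port range:
firewall-cmd --permanent --add-port=24007-24008/tcp    # glusterd management
firewall-cmd --permanent --add-port=49152-49251/tcp    # brick ports, one per brick
firewall-cmd --reload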
gluster peer probe server2
# server1
# gluster peer status
Number of Peers: 1
Hostname: server2
Uuid: 7529b9d2-f0c5-4702-9417-8d4cf6ca3247
State: Peer in Cluster (Connected)
# server2
# gluster peer status
Number of Peers: 1
Hostname: server1
Uuid: 7dcde0ed-f2fc-4940-a193-d69d02f356a5
State: Peer in Cluster (Connected)
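As an additional check, gluster pool list shows every node in the trusted pool, including the local one; on server1 it should list server2 and localhost as Connected:
gluster pool list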
mkdir -p /data/brick1/gv0
chown vdsm:kvm /data/brick1 -R    # for later use as oVirt storage
# Execute on server1
[root@ovirt ~]# gluster volume create gv0 replica 2 server1:/data/brick1/gv0 server2:/data/brick1/gv0
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue? (y/n) y
volume create: gv0: success: please start the volume to access data
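To avoid the split-brain risk mentioned in the warning above, a third node can act as an arbiter. A minimal sketch, assuming a hypothetical server3 has been probed into the pool and has the same brick path prepared:
gluster volume create gv0 replica 3 arbiter 1 server1:/data/brick1/gv0 server2:/data/brick1/gv0 server3:/data/brick1/gv0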
[root@ovirt ~]# gluster volume start gv0
volume start: gv0: success
[root@ovirt ~]# gluster volume info    # can be run on any node
Volume Name: gv0
Type: Replicate
Volume ID: caab8c47-3617-4d13-900a-5d6ca300e034
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: server1:/data/brick1/gv0
Brick2: server2:/data/brick1/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
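As an optional check, gluster volume status shows whether each brick process and the self-heal daemon are online, along with the ports they listen on:
gluster volume status gv0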
# Install glusterfs client software
yum -y install glusterfs glusterfs-fuse
# Mount
mount -t glusterfs server1:/gv0 /mnt
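To make the client mount persistent across reboots, an fstab entry along the following lines can be used; backup-volfile-servers lets the client fall back to server2 if server1 is unreachable at mount time (the exact option name varies slightly between GlusterFS versions, so treat this as a sketch):
echo 'server1:/gv0 /mnt glusterfs defaults,_netdev,backup-volfile-servers=server2 0 0' >> /etc/fstab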
for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/copy-test-$i; done
# Check on the client mount point
ls -lA /mnt/copy* | wc -l
You should see 100 files returned. Next, check the GlusterFS brick mount point on each server:
# Execute on both server1 and server2
ls -lA /data/brick1/gv0/copy* | wc -l
With the replicated (replica 2) volume created here, you should see 100 files on each server. Without replication, on a distributed volume (not covered in detail here), you would see roughly 50 files on each brick.
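To confirm that both bricks hold identical copies, compare the checksum of a sample file on the two servers; the file name below comes from the copy loop above, and the sums should match:
# Run on both server1 and server2
md5sum /data/brick1/gv0/copy-test-001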
The examples below show how to create the other common volume types. Create a distributed volume across four bricks:
# gluster volume create test-volume server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
Creation of test-volume has been successful
Please start the volume to access data
# gluster volume info
Volume Name: test-volume
Type: Distribute
Status: Created
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: server1:/exp1
Brick2: server2:/exp2
Brick3: server3:/exp3
Brick4: server4:/exp4
Create a replicated volume with two bricks:
# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2
Creation of test-volume has been successful
Please start the volume to access data
Create a distributed replicated volume (replica 2 spread across four bricks):
# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
Creation of test-volume has been successful
Please start the volume to access data
Create a striped volume with two bricks (note that striped volumes are deprecated in recent GlusterFS releases):
# gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server2:/exp2
Creation of test-volume has been successful
Please start the volume to access data
Create a distributed striped volume across eight bricks:
# gluster volume create test-volume stripe 4 transport tcp \
    server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 \
    server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8
Creation of test-volume has been successful
Please start the volume to access data.
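Whichever type is created, the volume must be started before clients can mount it; for example, using the test-volume and hosts from the examples above:
gluster volume start test-volume
mount -t glusterfs server1:/test-volume /mnt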
For more information about GlusterFS, refer to the Administrator Guide: http://docs.gluster.org/en/latest/Administrator%20Guide/overview/