Whether you want to provide Ceph Object Storage and/or Ceph Block Device services for a cloud platform, deploy a Ceph File System, or use Ceph for some other purpose, every [Ceph storage cluster](http://docs.ceph.org.cn/glossary/#term-21) deployment begins with setting up each [Ceph node](http://docs.ceph.org.cn/glossary/#term-13), the network, and the Ceph storage cluster itself. A Ceph storage cluster requires at least one Ceph Monitor and two OSD daemons. When running Ceph file system clients, a metadata server (Metadata Server, MDS) is also required.
Ceph saves client data as objects in storage pools. Using the CRUSH algorithm, Ceph calculates which placement group (PG) should hold a given object, and then which OSD daemon should hold that placement group. The CRUSH algorithm is what enables a Ceph storage cluster to dynamically scale, rebalance, and self-heal.
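Once a cluster is up, you can watch CRUSH perform this calculation for any object. A small sketch (the pool and object names here are hypothetical):

# Ask the cluster which PG and which OSDs a given object maps to
ceph osd map rbd myobject
# The output names the PG and the acting OSD set, e.g. "... object 'myobject' -> pg 0.6 -> up ([1,2], p1) acting ([1,2], p1)"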
Ceph eliminates any dependence on a single central node, achieving a truly decentralized design that other distributed storage systems cannot match.
Ceph's popularity is closely tied to OpenStack, currently one of the most popular open source cloud management platforms; integrating OpenStack with Ceph has become the de facto standard configuration for open source cloud platforms.
For the Ceph documentation in Chinese, see:
http://docs.ceph.org.cn/start/intro/
High availability means the system can continue to provide normal service after a component fails. Reliability can be improved through redundancy of both hardware components and data. In a Ceph cluster, data redundancy is provided in two ways: multiple replicas and erasure coding.
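Both redundancy schemes are exposed as pool types. A minimal sketch with hypothetical pool names (on jewel, the default erasure-code profile is typically k=2, m=1):

# Replicated pool: each object is stored as N full copies (here 3)
ceph osd pool create replpool 128 128 replicated
ceph osd pool set replpool size 3
# Erasure-coded pool: each object is split into data chunks plus parity chunks
ceph osd pool create ecpool 128 128 erasure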
Linear scalability refers to the flexibility to handle cluster growth, in two senses (see the status commands sketched after this list):
1. The storage capacity of the cluster can be scaled: storage nodes and devices can be added or removed at will;
2. System performance grows linearly as the cluster expands.
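In practice you can watch both aspects while a cluster grows; these are standard status commands, not specific to this deployment:

ceph osd df   # per-OSD capacity, usage, and weight
ceph -s       # overall status, including backfill/recovery progress while data rebalances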
Ceph provides object storage, block device storage, and file system storage, as sketched below. Its object storage can back network-disk (cloud drive) services; its block devices integrate with IaaS cloud platform software such as OpenStack, CloudStack, and ZStack; its file system storage is not yet mature, and it is officially not recommended for production environments.
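To make the three interfaces concrete, here is a sketch; the pool and image names are hypothetical, and the CephFS mount is shown in full at the end of this article:

# Object storage: store/fetch raw objects with the rados CLI
rados -p mypool put hello ./hello.txt
# Block storage: create a 1 GiB RBD image and map it as a local block device
rbd create myimage --size 1024
rbd map myimage          # shows up as /dev/rbd0
# File system: kernel CephFS mount (details later)
mount -t ceph 192.168.91.128:6789:/ /mycephfs -o name=admin,secret=<key>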
Ceph is under active, rapid development. Nine release lines are available from the Debian repositories: firefly, giant, hammer, infernalis, jewel, kraken, luminous, mimic, and testing.
Visit the download site, https://download.ceph.com/, go into the directory for the corresponding release, and download the packages you need.
This article uses the jewel release.
The environment and specific node configuration used in this article are as follows:
Operating System | Host Name | IP Address | Function |
---|---|---|---|
ubuntu-16.04.5-server-amd64 | jqb-node128 | 192.168.91.128 | MDS,MON,OSDS |
ubuntu-16.04.5-server-amd64 | jqb-node129 | 192.168.91.129 | MON,OSDS |
ubuntu-16.04.5-server-amd64 | jqb-node131 | 192.168.91.131 | MON,OSDS |
The official requirement is at least one Ceph Monitor and two OSD daemons, so 3 servers are used to install the Ceph cluster.
Note that there is only one MDS.
**Note: root account login must be enabled on all 3 Ubuntu systems.**
The following operations are performed as the root user!
**This is very important: it determines the success or failure of the Ceph deployment. Be sure to pay attention!!!**
I hit plenty of pitfalls getting here; to spare you the same tears, let's get to the specific steps!
Log in to the 3 servers and check each one's hostname:
cat /etc/hostname
**If the output does not match the hostname in the table above, be sure to change it!**
**After changing the hostname, the server must be restarted!**
Since no private DNS is used here, the hosts file is used directly for name resolution.
Make sure the hosts file on all 3 servers contains the following 3 records:
192.168.91.128 jqb-node128
192.168.91.129 jqb-node129
192.168.91.131 jqb-node131
Since the default update source is too slow, switch to the Alibaba Cloud mirror:
deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted
deb http://mirrors.aliyun.com/ubuntu/ xenial universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
deb http://mirrors.aliyun.com/ubuntu/ xenial multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu xenial-security main restricted
deb http://mirrors.aliyun.com/ubuntu xenial-security universe
deb http://mirrors.aliyun.com/ubuntu xenial-security multiverse
Add release key
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
Add the Ceph package source, replacing {ceph-stable-release} with a stable release name (such as cuttlefish, dumpling, emperor, firefly, etc.):
deb http://download.ceph.com/debian-{ceph-stable-release}/$(lsb_release -sc) main
For {ceph-stable-release} I chose the jewel release.
Then execute this command on the Ubuntu server:
echo "deb http://download.ceph.com/debian-jewel/ $(lsb_release -sc) main"
You can get the result:
deb http://download.ceph.com/debian-jewel/ xenial main
Write this line into /etc/apt/sources.list; the complete sources.list then reads:
deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted
deb http://mirrors.aliyun.com/ubuntu/ xenial universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
deb http://mirrors.aliyun.com/ubuntu/ xenial multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu xenial-security main restricted
deb http://mirrors.aliyun.com/ubuntu xenial-security universe
deb http://mirrors.aliyun.com/ubuntu xenial-security multiverse
deb http://download.ceph.com/debian-jewel/ xenial main
Make sure /etc/apt/sources.list is identical on all 3 servers.
Refresh the package index by executing the following commands on all 3 servers:
apt-get clean
apt-get update
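Optionally, confirm that apt now resolves Ceph packages from the new source; a quick sanity check:

apt-cache policy ceph-mon
# The candidate version should be listed as coming from download.ceph.com/debian-jewel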
Make sure the time zones of the 3 servers match. To force the time zone to Shanghai, execute:
ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
bash -c "echo 'Asia/Shanghai' > /etc/timezone"
Install ntpdate
apt-get install -y ntpdate
Synchronize the clock against Alibaba Cloud's time server:
ntpdate ntp1.aliyun.com
Perform this on all 3 servers to ensure their times are consistent!
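A quick way to confirm the clocks agree is to print the time on each node; run this from any host once the passwordless SSH below is configured, or run date manually on each server:

for h in jqb-node128 jqb-node129 jqb-node131; do ssh $h date; done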
Ceph provides a RESTful HTTP API for object storage through radosgw; radosgw is essentially a client program that provides a FastCGI service. Install it:
apt-get install -y radosgw --allow-unauthenticated
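radosgw is only installed here and not configured further in this article. For reference, once a gateway is configured, users for its S3/Swift-style API are created like this (hypothetical user):

radosgw-admin user create --uid=testuser --display-name="Test User"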
Install apt-transport-https so apt can fetch packages over HTTPS:
apt-get install -y apt-transport-https --allow-unauthenticated
Install ceph related packages
apt-get install -y ceph-base ceph-common ceph-fs-common ceph-fuse ceph-mds ceph-mon ceph-osd --allow-unauthenticated
All of the above steps must be executed on all 3 servers!
The following operations are all performed on jqb-node128, log in to the jqb-node128 server
Install ceph-deploy
apt-get install -y ceph-deploy --allow-unauthenticated
Change https to http on line 28 of ceph-deploy's Debian installer:
sed -i -e "28s/https/http/g" /usr/lib/python2.7/dist-packages/ceph_deploy/hosts/debian/install.py
Generate an SSH key pair and append the public key to authorized_keys:
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Authentication between nodes uses SSH, so be sure to set up passwordless SSH login!
Execute the following 3 commands
ssh-copy-id jqb-node128
ssh-copy-id jqb-node129
ssh-copy-id jqb-node131
This writes your public key into ~/.ssh/authorized_keys on each remote host. Verify the logins:
ssh jqb-node128 exit
ssh jqb-node129 exit
ssh jqb-node131 exit
Make sure all 3 commands above complete without prompting for a password.
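If the first connection to each host stops to ask about its host key, you can pre-accept keys for these lab machines; this is an optional convenience, assuming a lab-grade trust model is acceptable:

cat >> ~/.ssh/config <<'EOF'
Host jqb-node128 jqb-node129 jqb-node131
    StrictHostKeyChecking no
EOF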
Create a configuration file directory
cd /opt
mkdir -p cephinstall
cd cephinstall
First define a few variables:
MDS="jqb-node128"
MON="jqb-node128 jqb-node129 jqb-node131"
OSDS="jqb-node128 jqb-node129 jqb-node131"
INST="$OSDS $MON"
INST is the full set of nodes where Ceph will be installed.
Use ceph-deploy to create a new cluster, specifying the monitor nodes:
ceph-deploy new $MON
After execution, it will generate the file ceph.conf in the current directory
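The generated file should look roughly like this; the fsid is a random UUID, so yours will differ (this is a sketch of the jewel-era layout, not output copied from this cluster):

[global]
fsid = a7f64266-0894-4f1e-a635-d0aeacd29718
mon_initial_members = jqb-node128, jqb-node129, jqb-node131
mon_host = 192.168.91.128,192.168.91.129,192.168.91.131
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx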
Append the following settings to ceph.conf (osd pool default size = 2 sets the replica count to 2, suitable for our small cluster; the clock-drift settings relax clock-skew warnings):
echo "osd pool default size = 2
osd max object name len = 256
osd max object namespace len = 64
mon_pg_warn_max_per_osd = 2000
mon clock drift allowed = 30
mon clock drift warn backoff = 30
rbd cache writethrough until flush = false" >> ceph.conf
Install Ceph to all nodes
ceph-deploy install $INST
Deploy monitoring nodes
ceph-deploy mon create-initial
Observe the output carefully and make sure that there is no error!
Here the /data/ceph/osd directory is created on each OSD node to store the data:
for i in $OSDS;do
echo $i
ssh $i 'mkdir -p /data/ceph/osd'
ssh $i 'ln -snf /data/ceph/osd /var/lib/ceph/osd'
ceph-deploy osd prepare $i:/data/ceph/osd
# success indicator: Host xxx is now ready for osd use
done
for i in $OSDS;do
echo $i
ssh $i 'chown -R ceph:ceph /var/lib/ceph/'
ssh $i 'chown -R ceph:ceph /data/ceph/'
ceph-deploy osd activate $i:/data/ceph/osd
# fix problem "socket /com/ubuntu/upstart: Connection refused"
ssh $i 'dpkg-divert --local --rename --add /sbin/initctl'
ssh $i 'ln -snf /bin/true /sbin/initctl'
ssh $i 'rm -f /etc/apt/sources.list.d/ceph.list'
ssh $i 'modprobe ceph'
done
Deploy the admin keyring to the management node:
ceph-deploy admin $MDS
View node information
ssh $MDS "ceph osd tree"
View verification key
ssh $MDS "ceph auth get-key client.admin | base64"
ssh $MDS "mkdir -p /var/lib/ceph/mds/ceph-$MDS"
ssh $MDS "chown -R ceph:ceph /var/lib/ceph"
Usually you need to choose pg_num explicitly when creating a pool. The official rule of thumb is roughly: fewer than 5 OSDs, use 128; 5 to 10 OSDs, use 512; 10 to 50 OSDs, use 4096; beyond that, calculate it yourself (about 100 PGs per OSD, divided by the replica count, rounded to a power of two). Create the data and metadata pools:
ssh $MDS ceph osd pool create fs_db_data 512
ssh $MDS ceph osd pool create fs_db_metadata 512
View pool
ceph osd lspools
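You can also inspect an individual pool's settings; for example:

ceph osd pool get fs_db_data pg_num   # should report 512
ceph osd pool get fs_db_data size     # replica count, 2 per our ceph.conf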
Create the CephFS file system (named cephfs) on the two pools:
ssh $MDS ceph fs new cephfs fs_db_metadata fs_db_data
View Cephfs
ssh $MDS ceph fs ls
Deploy the MDS:
ceph-deploy --overwrite-conf mds create $MDS
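Once the MDS daemon starts, the file system should report one active MDS; a quick check (the expected output is a sketch):

ssh $MDS ceph mds stat
# expect something like: e5: 1/1/1 up {0=jqb-node128=up:active}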
# Verification key required for mounting
MOUNTKEY=`ssh $MDS "ceph auth get-key client.admin"`
# Node ip
MONIP=`ssh $MDS cat /etc/ceph/ceph.conf |grep mon_host|cut -d "=" -f2|sed 's?,?:6789,?g'`
# Mount directory
mkdir /mycephfs
# Start mounting
mount -t ceph $MONIP:6789:/ /mycephfs -o name=admin,secret=$MOUNTKEY
Check the mount
root@jqb-node128:~/cephinstall# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
udev           devtmpfs  971M     0  971M   0% /dev
tmpfs          tmpfs     199M  5.8M  193M   3% /run
/dev/sda1      ext4       19G  7.5G   11G  43% /
tmpfs          tmpfs     992M     0  992M   0% /dev/shm
tmpfs          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs          tmpfs     992M     0  992M   0% /sys/fs/cgroup
tmpfs          tmpfs     199M     0  199M   0% /run/user/0
192.168.91.128:6789,192.168.91.129:6789,192.168.91.131:6789:/ ceph 56G 26G 31G 46% /mycephfs
Note: the mount command is very long because it needs the IP and port of every monitor node.
To add files to Ceph, just enter the /mycephfs directory and work with files as on any Linux filesystem. Done!
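One caveat: passing secret= on the command line leaves the key in your shell history. mount.ceph also accepts a secretfile option instead; a hedged alternative (the file path here is my own choice):

ssh $MDS "ceph auth get-key client.admin" > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret
mount -t ceph $MONIP:6789:/ /mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret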
#!/bin/bash
# Prerequisite environment
# The system must be ubuntu-16.04.5-server-amd64, at least 3 nodes
# Make sure that the hostname of each host is consistent with the following variable settings
# Ensure that the hosts can access each other with domain names
# Ensure that root login is enabled on each host
# Be sure to use the root user to run this script
# Please refer to the link: https://www.cnblogs.com/xiao987334176/articles/9909039.html
set -e
## Note that there is only one MDS/admin node; set the value of MDS accordingly
################################################################
MDS="jqb-node128"
MON="jqb-node128 jqb-node129 jqb-node131"
OSDS="jqb-node128 jqb-node129 jqb-node131"
INST="$OSDS $MON"
################################################################
echo "Set ubuntu update source">>/opt/ceph_install.log
# Set ubuntu update source
echo "deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted
deb http://mirrors.aliyun.com/ubuntu/ xenial universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
deb http://mirrors.aliyun.com/ubuntu/ xenial multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu xenial-security main restricted
deb http://mirrors.aliyun.com/ubuntu xenial-security universe
deb http://mirrors.aliyun.com/ubuntu xenial-security multiverse
deb http://download.ceph.com/debian-jewel/ xenial main" >/etc/apt/sources.list
echo "Set time zone and install software">>/opt/ceph_install.log
# Set time zone and install software
ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
bash -c "echo 'Asia/Shanghai' > /etc/timezone"
apt-get clean
apt-get update
apt-get install -y ceph-deploy --allow-unauthenticated
sed -i -e "28s/https/http/g" /usr/lib/python2.7/dist-packages/ceph_deploy/hosts/debian/install.py
echo "Generate ssh key">>/opt/ceph_install.log
# Generate ssh key
if [ ! -f ~/.ssh/id_rsa ];then
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
fi
echo "SSH password-free login">>/opt/ceph_install.log
# SSH password-free login
for i in $INST;do
echo $i
ssh-copy-id $i
done
echo "Remote node environment setting and software installation">>/opt/ceph_install.log
# Remote node environment setting and software installation
for i in $INST;do
# Update source
ssh $i "echo 'deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted
deb http://mirrors.aliyun.com/ubuntu/ xenial universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
deb http://mirrors.aliyun.com/ubuntu/ xenial multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu xenial-security main restricted
deb http://mirrors.aliyun.com/ubuntu xenial-security universe
deb http://mirrors.aliyun.com/ubuntu xenial-security multiverse
deb http://download.ceph.com/debian-jewel/ xenial main' >/etc/apt/sources.list"
# Time zone setting
ssh $i ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
ssh $i "echo 'Asia/Shanghai' > /etc/timezone"
ssh $i apt-get update
ssh $i apt-get install -y ntpdate
ssh $i '/etc/init.d/ntp stop || true'  # stop ntpd if present so ntpdate can set the clock
ssh $i ntpdate ntp1.aliyun.com #Update time
# ceph related software installation
ssh $i apt-get install -y apt-transport-https --allow-unauthenticated
ssh $i apt-get install -y radosgw --allow-unauthenticated
ssh $i apt-get install -y "ceph-base ceph-common ceph-fs-common ceph-fuse ceph-mds ceph-mon ceph-osd"--allow-unauthenticated
done
echo "Create management directory">>/opt/ceph_install.log
# Create management directory
mkdir -p /opt/cephinstall
cd /opt/cephinstall
echo "Over-defined monitoring node,Create a new cluster">>/opt/ceph_install.log
# Over-defined monitoring node,Create a new cluster
ceph-deploy new $MON
echo "Modify the configuration file">>/opt/ceph_install.log
# Modify the configuration file
echo "osd pool default size =2
osd max object name len =256
osd max object namespace len =64
mon_pg_warn_max_per_osd =2000
mon clock drift allowed =30
mon clock drift warn backoff =30
rbd cache writethrough until flush =false" >> ceph.conf
echo "Install Ceph to all nodes">>/opt/ceph_install.log
# Install Ceph to all nodes
ceph-deploy install $INST
echo "Deploy monitoring nodes">>/opt/ceph_install.log
# Deploy monitoring nodes
ceph-deploy mon create-initial
echo "Add OSD to the cluster">>/opt/ceph_install.log
# Add OSD to the cluster
# Create the /data/ceph/osd directory on each OSD node to store the data
for i in $OSDS;do
echo $i
ssh $i 'mkdir -p /data/ceph/osd'
ssh $i 'ln -snf /data/ceph/osd /var/lib/ceph/osd'
ceph-deploy osd prepare $i:/data/ceph/osd
# success indicator: Host xxx is now ready for osd use
done
echo "Activate OSD node">>/opt/ceph_install.log
# Activate OSD node
for i in $OSDS;do
echo $i
ssh $i 'chown -R ceph:ceph /var/lib/ceph/'
ssh $i 'chown -R ceph:ceph /data/ceph/'
ceph-deploy osd activate $i:/data/ceph/osd
# fix problem "socket /com/ubuntu/upstart: Connection refused"
ssh $i 'dpkg-divert --local --rename --add /sbin/initctl'
ssh $i 'ln -snf /bin/true /sbin/initctl'
ssh $i 'rm -f /etc/apt/sources.list.d/ceph.list'
ssh $i 'modprobe ceph'
done
echo "Deploy the management key to all associated nodes">>/opt/ceph_install.log
# Deploy the management key to all associated nodes
ceph-deploy admin $MDS
# View node information
ssh $MDS "ceph osd tree"
echo "View verification key">>/opt/ceph_install.log
# View verification key
ssh $MDS "ceph auth get-key client.admin | base64"
ssh $MDS "mkdir -p /var/lib/ceph/mds/ceph-$MDS"
ssh $MDS "chown -R ceph:ceph /var/lib/ceph"
echo "Create POOL">>/opt/ceph_install.log
# Create POOL
ssh $MDS ceph osd pool create fs_db_data 512
ssh $MDS ceph osd pool create fs_db_metadata 512
# View pool
ssh $MDS ceph osd lspools
echo "Create Cephfs">>/opt/ceph_install.log
# Create Cephfs
ssh $MDS ceph fs new cephfs fs_db_metadata fs_db_data
# View CephFS
ssh $MDS ceph fs ls
echo "Deploy MDS">>/opt/ceph_install.log
# Deploy MDS
ceph-deploy --overwrite-conf mds create $MDS
echo "Verification key required for mounting">>/opt/ceph_install.log
# Verification key required for mounting
MOUNTKEY=`ssh $MDS "ceph auth get-key client.admin"`
# Node ip
MONIP=`ssh $MDS cat /etc/ceph/ceph.conf |grep mon_host|cut -d "=" -f2|sed 's?,?:6789,?g'`
# Mount directory
mkdir /mycephfs
echo "Start mounting">>/opt/ceph_install.log
# Start mounting
mount -t ceph $MONIP:6789:/ /mycephfs -o name=admin,secret=$MOUNTKEY
# View disk mount
df -hT
echo "Mount complete">>/opt/ceph_install.log
**Note: before running the script, carefully verify that the environment meets the prerequisites listed at its top.**
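To run it, save the script on jqb-node128 (the filename below is my own choice) and execute it as root:

chmod +x /opt/ceph_install.sh
/opt/ceph_install.sh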
After execution, check the log file /opt/ceph_install.log output:
Set ubuntu update source
Set time zone and install software
Generate ssh key
SSH password-free login
Remote node environment setting and software installation
Create management directory
Define monitor nodes and create a new cluster
Modify the configuration file
Install Ceph to all nodes
Deploy monitoring nodes
Add OSD to the cluster
Activate OSD node
Deploy the management key to all associated nodes
View verification key
Create POOL
Create Cephfs
Deploy MDS
Verification key required for mounting
Start mounting
Mount complete
If you are interested, you can take a look at the following articles!
How to install Ceph storage cluster in Ubuntu 16.04:
https://linux.cn/article-8182-1.html
Ceph basic knowledge and infrastructure understanding:
https://www.cnblogs.com/luohaixian/p/8087591.html
The most complete introduction to Ceph ever: principles and architecture:
https://blog.csdn.net/uxiAD7442KMy1X86DtM3/article/details/81059215