The Elastic Stack, commonly known as the ELK stack, is a set of open source products comprising Elasticsearch, Logstash, and Kibana, developed and maintained by the Elastic company. Using the Elastic Stack, you can ship system logs to Logstash, a data collection engine that accepts logs or data from any source, normalizes them, and forwards them to Elasticsearch for indexing, storage, search, and analysis; Kibana is then used to visualize the data. With Kibana, we can also build interactive charts based on user queries.
In this article, we will demonstrate how to set up a multi-node Elastic Stack cluster on RHEL 8 / CentOS 8 servers. Here are the details of my Elastic Stack cluster:
Elasticsearch: three servers, minimal installation of RHEL 8 / CentOS 8; IP & hostname – 192.168.56.40 (elasticsearch1.linuxtechi.local), 192.168.56.50 (elasticsearch2.linuxtechi.local), 192.168.56.60 (elasticsearch3.linuxtechi.local)
Logstash: two servers, minimal installation of RHEL 8 / CentOS 8; IP & hostname – 192.168.56.20 (logstash1.linuxtechi.local), 192.168.56.30 (logstash2.linuxtechi.local)
Kibana: one server, minimal installation of RHEL 8 / CentOS 8; IP & hostname – 192.168.56.10 (kibana.linuxtechi.local)
Filebeat: one CentOS 7 client server, hostname – web-server
Let's start by setting up the Elasticsearch cluster.
Set up a 3-node Elasticsearch cluster
As mentioned, to set up the Elasticsearch cluster nodes, log in to each node, set the hostname, and configure the yum/dnf repositories.
Use the hostnamectl command to set the hostname on each node:
[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch1.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#

[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch2.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#

[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch3.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
On a CentOS 8 system, we do not need to configure any operating system package repository. On a RHEL 8 server, if you have a valid subscription, use your Red Hat subscription to get the package repositories. If you want to configure a local yum/dnf repository for operating system packages, refer to the following URL:
How to set up a local Yum / DNF repository on RHEL 8 server using DVD or ISO file
Configure the Elasticsearch package repository on all nodes by creating an elastic.repo file with the following content under the /etc/yum.repos.d/ folder:
~]# vi /etc/yum.repos.d/elastic.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
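Optionally, you can verify that the new repository is picked up by the package manager; the repo id elasticsearch-7.x defined above should appear in the output (this extra check is not part of the original steps):
~]# dnf repolist | grep elasticsearch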
Save the file and exit.
Use the rpm command to import the Elastic public signing key on all three nodes:
~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Add the following lines to the /etc/hosts file on all three nodes:
192.168.56.40 elasticsearch1.linuxtechi.local
192.168.56.50 elasticsearch2.linuxtechi.local
192.168.56.60 elasticsearch3.linuxtechi.local
Use the yum/dnf command to install Java on all three nodes:
[root@linuxtechi ~]# dnf install java-openjdk -y
[root@linuxtechi ~]# dnf install java-openjdk -y
[root@linuxtechi ~]# dnf install java-openjdk -y
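You can optionally confirm the Java installation on each node (the exact OpenJDK version reported will depend on what dnf pulled in):
[root@linuxtechi ~]# java -version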
Use the yum/dnf command to install Elasticsearch on all three nodes:
[root@linuxtechi ~]# dnf install elasticsearch -y
[root@linuxtechi ~]# dnf install elasticsearch -y
[root@linuxtechi ~]# dnf install elasticsearch -y
Note: If the operating system firewall is enabled and running on each Elasticsearch node, use the firewall-cmd command to open the following ports:
~]# firewall-cmd --permanent --add-port=9300/tcp
~]# firewall-cmd --permanent --add-port=9200/tcp
~]# firewall-cmd --reload
To configure Elasticsearch, edit the file /etc/elasticsearch/elasticsearch.yml on all nodes and add the following content:
~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: opn-cluster
node.name: elasticsearch1.linuxtechi.local
network.host: 192.168.56.40
http.port: 9200
discovery.seed_hosts: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"]
cluster.initial_master_nodes: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"]
Note: On each node, set node.name to that node's own hostname and network.host to that node's own IP address; the other parameters remain the same on all nodes.
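For example, on elasticsearch2 the two node-specific lines would read as follows (values taken from the host layout above):
node.name: elasticsearch2.linuxtechi.local
network.host: 192.168.56.50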
Now use the systemctl command to start and enable the Elasticsearch service on all three nodes:
~]# systemctl daemon-reload
~]# systemctl enable elasticsearch.service
~]# systemctl start elasticsearch.service
Use the following ss command to verify whether the Elasticsearch node has started listening on port 9200:
[root@linuxtechi ~]# ss -tunlp | grep 9200
tcp   LISTEN   0   128   [::ffff:192.168.56.40]:9200   *:*   users:(("java",pid=2734,fd=256))
[root@linuxtechi ~]#
Use the following curl commands to verify the Elasticsearch cluster status:
[root@linuxtechi ~]# curl http://elasticsearch1.linuxtechi.local:9200
[root@linuxtechi ~]# curl -X GET http://elasticsearch2.linuxtechi.local:9200/_cluster/health?pretty
The output of the command is as follows:
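For the cluster health call, a healthy three-node cluster returns a response similar to the following (an illustrative sample; exact shard counts will differ in your environment):
{
  "cluster_name" : "opn-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 1,
  "active_shards" : 2,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_a_number" : 100.0
}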
The above output indicates that we have successfully created a 3-node Elasticsearch cluster, and the status of the cluster is also green.
Note: If you want to modify the JVM heap size, you can edit the file /etc/elasticsearch/jvm.options and change the following parameters according to your environment.
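For example, the heap is controlled by these two lines; set -Xms and -Xmx to the same value and keep it at or below half of the available RAM (the 1g values shown here are just the Elasticsearch 7.x defaults, adjust to your environment):
-Xms1g
-Xmx1g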
Now let's move to the Logstash node.
**Install and configure Logstash**
Perform the following steps on both Logstash nodes.
Log in to the two nodes and use the hostnamectl command to set the hostnames:
[ root@linuxtechi ~]# hostnamectl set-hostname "logstash1.linuxtechi.local"[root@linuxtechi ~]# exec bash
[ root@linuxtechi ~]#
[ root@linuxtechi ~]# hostnamectl set-hostname "logstash2.linuxtechi.local"[root@linuxtechi ~]# exec bash
[ root@linuxtechi ~]#
Add the following entries to the /etc/hosts file on both Logstash nodes:
~]# vi /etc/hosts
192.168.56.40 elasticsearch1.linuxtechi.local
192.168.56.50 elasticsearch2.linuxtechi.local
192.168.56.60 elasticsearch3.linuxtechi.local
Save the file and exit.
Configure the Logstash repository on both nodes by creating a logstash.repo file under the /etc/yum.repos.d/ folder with the following content:
~]# vi /etc/yum.repos.d/logstash.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Save and exit the file, then run the rpm command to import the signing key:
~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Use the yum/dnf command to install Java OpenJDK on both nodes:
~]# dnf install java-openjdk -y
Run the yum/dnf command on both nodes to install Logstash:
[root@linuxtechi ~]# dnf install logstash -y
[root@linuxtechi ~]# dnf install logstash -y
Now configure Logstash. Perform the following steps on both Logstash nodes to create the Logstash configuration file. First, copy the Logstash sample file to /etc/logstash/conf.d/:
# cd /etc/logstash/
# cp logstash-sample.conf conf.d/logstash.conf
Edit the configuration file and update the following:
# vi conf.d/logstash.conf
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch1.linuxtechi.local:9200","http://elasticsearch2.linuxtechi.local:9200","http://elasticsearch3.linuxtechi.local:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
In the output section, specify the FQDNs of all three Elasticsearch nodes in the hosts parameter; the other parameters remain unchanged.
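Optionally, you can ask Logstash to validate the pipeline file before starting the service; the --config.test_and_exit flag parses the configuration and exits (an extra check, not part of the original steps):
~]# /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf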
Use the firewall-cmd command to allow the Logstash port 5044 through the operating system firewall:
~]# firewall-cmd --permanent --add-port=5044/tcp
~]# firewall-cmd --reload
Now run the following systemctl commands on each node to start and enable the Logstash service:
~]# systemctl start logstash
~]# systemctl enable logstash
Use the ss command to verify whether the Logstash service has started listening on port 5044:
[root@linuxtechi ~]# ss -tunlp | grep 5044
tcp   LISTEN   0   128   *:5044   *:*   users:(("java",pid=2416,fd=96))
[root@linuxtechi ~]#
The above output indicates that logstash has been successfully installed and configured. Let's move on to Kibana installation.
**Install and configure Kibana**
Log in to the Kibana node and use the hostnamectl command to set the hostname:
[ root@linuxtechi ~]# hostnamectl set-hostname "kibana.linuxtechi.local"[root@linuxtechi ~]# exec bash
[ root@linuxtechi ~]#
Edit the /etc/hosts file and add the following lines:
192.168.56.40 elasticsearch1.linuxtechi.local
192.168.56.50 elasticsearch2.linuxtechi.local
192.168.56.60 elasticsearch3.linuxtechi.local
Use the following command to set up the Kibana repository:
[root@linuxtechi ~]# vi /etc/yum.repos.d/kibana.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@linuxtechi ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Execute the yum/dnf command to install Kibana:
[root@linuxtechi ~]# yum install kibana -y
Configure Kibana by editing the /etc/kibana/kibana.yml file:
[root@linuxtechi ~]# vim /etc/kibana/kibana.yml
…………
server.host:"kibana.linuxtechi.local"
server.name:"kibana.linuxtechi.local"
elasticsearch.hosts:["http://elasticsearch1.linuxtechi.local:9200","http://elasticsearch2.linuxtechi.local:9200","http://elasticsearch3.linuxtechi.local:9200"]
…………
Start and enable the Kibana service:
[root@linuxtechi ~]# systemctl start kibana
[root@linuxtechi ~]# systemctl enable kibana
Allow Kibana port "5601" on the system firewall:
[root@linuxtechi ~]# firewall-cmd --permanent --add-port=5601/tcp
success
[root@linuxtechi ~]# firewall-cmd --reload
success
[root@linuxtechi ~]#
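Before opening the browser, you can optionally confirm that Kibana is listening on port 5601 (Kibana can take a minute or so to start; this check is an extra step, not part of the original article):
[root@linuxtechi ~]# ss -tunlp | grep 5601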
Use the following URL to access the Kibana interface: http://kibana.linuxtechi.local:5601
From the panel, we can check the status of the Elastic Stack cluster.
This proves that we have successfully installed and set up a multi-node Elastic Stack cluster on RHEL 8 /CentOS 8.
Now let us send some logs from other Linux servers to the Logstash nodes through filebeat. In my example, I have a CentOS 7 server, and I will push all of its important logs to Logstash through filebeat.
Log in to the CentOS 7 server and use the yum/rpm command to install the filebeat package:
[root@linuxtechi ~]# rpm -ivh https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
Retrieving https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:filebeat-7.3.1-1                 ################################# [100%]
[root@linuxtechi ~]#
Edit the /etc/hosts file and add the following entries:
192.168.56.20 logstash1.linuxtechi.local
192.168.56.30 logstash2.linuxtechi.local
Now configure filebeat so that it uses load balancing to send logs to both Logstash nodes. Edit the file /etc/filebeat/filebeat.yml and set the following parameters:
In the filebeat.inputs: section, change enabled: false to enabled: true and, under the paths parameter, specify the locations of the log files to send to Logstash; comment out the output.elasticsearch: section and its hosts parameter; uncomment output.logstash: and hosts:, list the two Logstash nodes in the hosts parameter, and set loadbalance: true.
[root@linuxtechi ~]# vi /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
    - /var/log/dmesg
    - /var/log/maillog
    - /var/log/boot.log

#output.elasticsearch:
#  hosts: ["localhost:9200"]

output.logstash:
  hosts: ["logstash1.linuxtechi.local:5044", "logstash2.linuxtechi.local:5044"]
  loadbalance: true
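Optionally, before starting the service, you can ask Filebeat to validate the configuration and test the connection to the Logstash outputs; both subcommands are built into Filebeat (an extra check, not part of the original steps):
[root@linuxtechi ~]# filebeat test config
[root@linuxtechi ~]# filebeat test output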
Use the following two systemctl commands to start and enable the filebeat service:
[root@linuxtechi ~]# systemctl start filebeat
[root@linuxtechi ~]# systemctl enable filebeat
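Once Filebeat is running, you can also confirm from any machine that can reach the Elasticsearch nodes that Filebeat indices are being created, for example with the _cat/indices API (querying elasticsearch1's IP from the layout above; this check is optional):
~]# curl -s http://192.168.56.40:9200/_cat/indices?v | grep filebeat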
Now go to the Kibana user interface and verify that the new index is visible.
Select the management option from the left sidebar, and then click Index Management under Elasticsearch:
As we saw above, the index is now visible; let's create an index pattern for it.
Click "Index Patterns" in the Kibana section; it will prompt us to create a new pattern. Click "Create Index Pattern" and specify the pattern name as "filebeat":
Click Next.
Select "Timestamp" as the time filter field for the index pattern, and then click "Create index pattern":
Now click to view the real-time data for the filebeat index pattern:
This indicates that the Filebeat agent has been successfully configured, and we can see the real-time log on the Kibana dashboard.
That's all for this article. I hope these steps help you set up a multi-node Elastic Stack cluster on RHEL 8 / CentOS 8 systems; please don't hesitate to share your feedback and comments.
via: https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/