Elasticsearch is a search server based on Lucene. It provides a distributed, multi-user full-text search engine with a RESTful web interface. Elasticsearch is developed in Java and released as open source under the terms of the Apache License, and it is a popular enterprise search engine. Designed for use in [cloud computing](https://baike.baidu.com/item/%E4%BA%91%E8%AE%A1%E7%AE%97/9969353), it provides near real-time search and is stable, reliable, fast, and easy to install and use.
This article uses a CentOS 7.5 (CentOS-7-x86_64-Minimal-1804) system.
Make sure the machine has at least 2 GB of memory, because Elasticsearch alone will occupy about 1 GB.
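You can check the available memory first, for example (an optional check, not part of the original steps):
free -m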
Install docker with yum
yum install -y docker-io
You need to add a domestic registry mirror source (for faster image pulls)
vim /etc/docker/daemon.json
The default content is {}; modify it so that it looks like this:
{"registry-mirrors":["https://registry.docker-cn.com"]}
Restart the docker service
systemctl restart docker
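To confirm the mirror configuration took effect, you can inspect the docker info output (an optional check; the exact wording of the output may vary with the docker version):
docker info | grep -A 1 "Registry Mirrors"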
Install docker command completion tool
yum install -y bash-completion
**Note: You must log out of the terminal and log in again for the completion to take effect.**
docker pull centos
This image is CentOS 7.
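You can verify that the image was downloaded, for example:
docker images | grep centos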
Download the elasticsearch rpm package and install it; the download URL is:
https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.rpm
Start a container from the centos image
docker run -it docker.io/centos /bin/bash
After entering the container, first install wget and java, then clean the yum cache
yum install -y wget java-1.8.0-openjdk && yum clean all
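To confirm the JDK was installed correctly inside the container, you can check the version (optional):
java -version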
Download the elasticsearch rpm package, install it, and remove the package file
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.rpm && rpm -ivh elasticsearch-6.2.4.rpm && rm -f elasticsearch-6.2.4.rpm
Modify the configuration file
sed -i '55s/#network.host: 192.168.0.1/network.host: 0.0.0.0/g' /etc/elasticsearch/elasticsearch.yml
sed -i '59s/#http.port: 9200/http.port: 9200/g' /etc/elasticsearch/elasticsearch.yml
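Because these sed commands target fixed line numbers (55 and 59), which only match the stock 6.2.4 config file, it is worth verifying the result, for example:
grep -nE 'network.host|http.port' /etc/elasticsearch/elasticsearch.yml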
Start elasticsearch service
runuser -s /bin/bash -l elasticsearch -c "/usr/share/elasticsearch/bin/elasticsearch"
Note: You cannot use the systemctl command to start the elasticsearch service inside the container unless the container was started in privileged mode, for example:
docker run -it --privileged=true docker.io/centos /bin/bash
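With elasticsearch running in the foreground as above, you can verify it from a second shell on the host by exec-ing into the container (the container ID below is a placeholder; curl is assumed to be present in the centos image, otherwise install it with yum):
docker exec -it <container_id> /bin/bash
curl http://127.0.0.1:9200/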
Create an empty directory and write the Dockerfile
mkdir /opt/elasticsearch
vi /opt/elasticsearch/Dockerfile
The content is as follows:
FROM centos
RUN yum install -y wget java-1.8.0-openjdk && yum clean all && \
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.rpm && \
rpm -ivh elasticsearch-6.2.4.rpm && rm -f elasticsearch-6.2.4.rpm && \
sed -i '55s/#network.host: 192.168.0.1/network.host: 0.0.0.0/g' /etc/elasticsearch/elasticsearch.yml && \
sed -i '59s/#http.port: 9200/http.port: 9200/g' /etc/elasticsearch/elasticsearch.yml
EXPOSE 9200
ENTRYPOINT runuser -s /bin/bash -l elasticsearch -c "/usr/share/elasticsearch/bin/elasticsearch"
**Note: Every RUN instruction adds a layer to the image, and the more layers there are, the larger the image gets.**
To avoid multiple RUN instructions, the related commands are combined into a single RUN.
EXPOSE 9200 declares the port that the container will expose.
ENTRYPOINT specifies the command that runs by default when a container is started from the image (an exec-form alternative is sketched below).
runuser runs that command as the specified user, here elasticsearch.
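As a side note, the ENTRYPOINT above uses shell form; an equivalent exec form would look like this (just a sketch of an alternative, not used in this article):
ENTRYPOINT ["runuser", "-s", "/bin/bash", "-l", "elasticsearch", "-c", "/usr/share/elasticsearch/bin/elasticsearch"]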
Build the image
docker build -t elasticsearch /opt/elasticsearch
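After the build completes, the new image should appear in the local image list (optional check):
docker images | grep elasticsearch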
Run a container from the image
docker run -it elasticsearch
By default this directly invokes runuser -s /bin/bash -l elasticsearch -c "/usr/share/elasticsearch/bin/elasticsearch"
The output is as follows:
runuser: warning: cannot change directory to /home/elasticsearch: No such file or directory
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
[2018-11-07T10:18:54,057][INFO ][o.e.n.Node ] [] initializing ...
[2018-11-07T10:18:54,198][INFO ][o.e.e.NodeEnvironment ] [qDmU4u_] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [15gb], net total_space [16.9gb], types [rootfs]
[2018-11-07T10:18:54,198][INFO ][o.e.e.NodeEnvironment ] [qDmU4u_] heap size [1015.6mb], compressed ordinary object pointers [true]
[2018-11-07T10:18:54,202][INFO ][o.e.n.Node ] node name [qDmU4u_] derived from node ID [qDmU4u_NTNKmpXVV-5vlEQ]; set [node.name] to override
[2018-11-07T10:18:54,202][INFO ][o.e.n.Node ] version[6.2.4], pid[5], build[ccec39f/2018-04-12T20:37:28.497551Z], OS[Linux/3.10.0-862.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_191/25.191-b12]
[2018-11-07T10:18:54,202][INFO ][o.e.n.Node ] JVM arguments [-Xms1g,-Xmx1g,-XX:+UseConcMarkSweepGC,-XX:CMSInitiatingOccupancyFraction=75,-XX:+UseCMSInitiatingOccupancyOnly,-XX:+AlwaysPreTouch,-Xss1m,-Djava.awt.headless=true,-Dfile.encoding=UTF-8,-Djna.nosys=true,-XX:-OmitStackTraceInFastThrow,-Dio.netty.noUnsafe=true,-Dio.netty.noKeySetOptimization=true,-Dio.netty.recycler.maxCapacityPerThread=0,-Dlog4j.shutdownHookEnabled=false,-Dlog4j2.disable.jmx=true,-Djava.io.tmpdir=/tmp/elasticsearch.C8ZXNqCd,-XX:+HeapDumpOnOutOfMemoryError,-XX:HeapDumpPath=/var/lib/elasticsearch,-XX:+PrintGCDetails,-XX:+PrintGCDateStamps,-XX:+PrintTenuringDistribution,-XX:+PrintGCApplicationStoppedTime,-Xloggc:/var/log/elasticsearch/gc.log,-XX:+UseGCLogFileRotation,-XX:NumberOfGCLogFiles=32,-XX:GCLogFileSize=64m,-Des.path.home=/usr/share/elasticsearch,-Des.path.conf=/etc/elasticsearch]
[2018-11-07T10:18:55,827][INFO ][o.e.p.PluginsService ] [qDmU4u_] loaded module [aggs-matrix-stats]
[2018-11-07T10:18:55,827][INFO ][o.e.p.PluginsService ] [qDmU4u_] loaded module [analysis-common]
[2018-11-07T10:18:55,827][INFO ][o.e.p.PluginsService ] [qDmU4u_] loaded module [ingest-common]
[2018-11-07T10:18:55,831][INFO ][o.e.p.PluginsService ] [qDmU4u_] loaded module [lang-expression]
[2018-11-07T10:18:55,831][INFO ][o.e.p.PluginsService ] [qDmU4u_] loaded module [lang-mustache]
[2018-11-07T10:18:55,831][INFO ][o.e.p.PluginsService ] [qDmU4u_] loaded module [lang-painless]
[2018-11-07T10:18:55,832][INFO ][o.e.p.PluginsService ] [qDmU4u_] loaded module [mapper-extras]
[2018-11-07T10:18:55,832][INFO ][o.e.p.PluginsService ] [qDmU4u_] loaded module [parent-join]
[2018-11-07T10:18:55,832][INFO ][o.e.p.PluginsService ] [qDmU4u_] loaded module [percolator]
[2018-11-07T10:18:55,832][INFO ][o.e.p.PluginsService ] [qDmU4u_] loaded module [rank-eval]
[2018-11-07T10:18:55,832][INFO ][o.e.p.PluginsService ] [qDmU4u_] loaded module [reindex]
[2018-11-07T10:18:55,832][INFO ][o.e.p.PluginsService ] [qDmU4u_] loaded module [repository-url]
[2018-11-07T10:18:55,833][INFO ][o.e.p.PluginsService ] [qDmU4u_] loaded module [transport-netty4]
[2018-11-07T10:18:55,833][INFO ][o.e.p.PluginsService ] [qDmU4u_] loaded module [tribe]
[2018-11-07T10:18:55,833][INFO ][o.e.p.PluginsService ] [qDmU4u_] no plugins loaded
[2018-11-07T10:19:00,949][INFO ][o.e.d.DiscoveryModule ] [qDmU4u_] using discovery type [zen]
[2018-11-07T10:19:02,075][INFO ][o.e.n.Node ] initialized
[2018-11-07T10:19:02,075][INFO ][o.e.n.Node ] [qDmU4u_] starting ...
[2018-11-07T10:19:02,531][INFO ][o.e.t.TransportService ] [qDmU4u_] publish_address {172.17.0.2:9300}, bound_addresses {[::]:9300}
[2018-11-07T10:19:02,567][INFO ][o.e.b.BootstrapChecks ] [qDmU4u_] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2018-11-07T10:19:05,811][INFO ][o.e.c.s.MasterService ] [qDmU4u_] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {qDmU4u_}{qDmU4u_NTNKmpXVV-5vlEQ}{Terj8KYoQvWwHYsUYkNNyA}{172.17.0.2}{172.17.0.2:9300}
[2018-11-07T10:19:05,829][INFO ][o.e.c.s.ClusterApplierService] [qDmU4u_] new_master {qDmU4u_}{qDmU4u_NTNKmpXVV-5vlEQ}{Terj8KYoQvWwHYsUYkNNyA}{172.17.0.2}{172.17.0.2:9300}, reason: apply cluster state (from master [master {qDmU4u_}{qDmU4u_NTNKmpXVV-5vlEQ}{Terj8KYoQvWwHYsUYkNNyA}{172.17.0.2}{172.17.0.2:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-11-07T10:19:05,887][INFO ][o.e.h.n.Netty4HttpServerTransport] [qDmU4u_] publish_address {172.17.0.2:9200}, bound_addresses {[::]:9200}
[2018-11-07T10:19:05,887][INFO ][o.e.n.Node ] [qDmU4u_] started
[2018-11-07T10:19:05,897][INFO ][o.e.g.GatewayService ] [qDmU4u_] recovered [0] indices into cluster_state
The process stays in the foreground and listens on port 9200.
But generally we need a port mapping between the host and the container, so start the container like this:
docker run -p 9200:9200 -d -it elasticsearch
-p specifies a port mapping in the form hostPort:containerPort; the left side is the host, the right side is the container.
-d means run the container in the background.
Note: these two options must be placed before the image name, not after it.
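If you also want the transport port 9300 (seen in the startup log above) reachable from the host, you can map it as well, for example:
docker run -p 9200:9200 -p 9300:9300 -d -it elasticsearch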
Wait about 10 seconds, then check the port status
[root@localhost el]# netstat -anpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      813/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1190/master
tcp        0      0 192.168.91.133:22       192.168.91.1:56367      ESTABLISHED 11374/sshd: root@pt
tcp6       0      0 :::9200                 :::*                    LISTEN      17942/docker-proxy
tcp6       0      0 :::22                   :::*                    LISTEN      813/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      1190/master
From the output above, you can see that port 9200 is up.
Access the URL
http://192.168.91.133:9200/
Page output:
{" name":"-sawdKe","cluster_name":"elasticsearch","cluster_uuid":"_7kUiLEyQBSnLQSOGxijtw","version":{"number":"6.2.4","build_hash":"ccec39f","build_date":"2018-04-12T20:37:28.497551Z","build_snapshot":false,"lucene_version":"7.2.1","minimum_wire_compatibility_version":"5.6.0","minimum_index_compatibility_version":"5.0.0"},"tagline":"You Know, for Search"}
Remarks:
If you install elasticsearch directly on the host instead of in a container, just use the following commands to start it
systemctl daemon-reload
systemctl start elasticsearch.service
systemctl enable elasticsearch.service
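You can then confirm that the service is running (optional check):
systemctl status elasticsearch.service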
Do not naively assume that it runs as the root user.
Look at the /usr/lib/systemd/system/elasticsearch.service file; it defines the running user as elasticsearch.
So the actual running user is still elasticsearch.
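You can confirm this yourself, for example (assuming the service is running and the unit file is the stock one installed by the rpm):
grep '^User' /usr/lib/systemd/system/elasticsearch.service
ps -eo user,pid,cmd | grep elasticsearch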