Below is a quick and easy install to get Elasticsearch 5.4.x up and running on CentOS 7. Other sites explain every step in greater detail; this guide is not intended for that purpose. Instead, it is a quick install based on the scripts and instructions used for production Elasticsearch environments.
The install documented here is based on a fresh install of CentOS 7 x86_64 Minimal.
When you build your CentOS server, do not create a swap partition. This is the easiest and best way to keep the JVM from swapping, and swap is unnecessary on a dedicated Elasticsearch server.
First, make sure your CentOS install is up to date
sudo yum update
sudo yum upgrade -y
sudo yum install java -y
Download and install Elasticsearch, then install the X-Pack plugin
sudo wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.4.3.rpm
sudo rpm -ivh elasticsearch-5.4.3.rpm
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install x-pack
Change the JVM heap size for Elasticsearch. The snippet below sets the JVM heap to 30GB, which assumes your Elasticsearch server has 64GB of RAM. Do not set it to more than 30GB. If you have less than 64GB of RAM in your server, provision the JVM heap at 50% of total RAM and replace the "30" in the two lines below with that number. It is very important to leave enough RAM for the OS to use as a filesystem cache rather than dedicating all of the system's RAM to the Elasticsearch JVM. If this node is a master or a client node (no data), the JVM heap can be set much larger: use the available RAM minus 4GB for the OS. On a 32GB server, the JVM can have 28GB assigned pretty safely. Be certain to leave enough RAM for any other applications (Kibana/Logstash) running on the same server as the Elasticsearch client node.
sudo sed -i -e 's#Xms2g#Xms30g#' /etc/elasticsearch/jvm.options
sudo sed -i -e 's#Xmx2g#Xmx30g#' /etc/elasticsearch/jvm.options
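The 50% rule described above can be sketched as a small helper that derives the heap size from total RAM and caps it at 30GB, then does a dry run of the sed substitution on a scratch file before you touch the real /etc/elasticsearch/jvm.options. The 30GB cap and the Xms2g/Xmx2g defaults come from the text; everything else here is illustrative and assumes a Linux host with /proc/meminfo and GNU sed.

```shell
# Sketch: heap = 50% of total RAM, capped at 30g per the guidance above
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
heap_gb=$(( total_kb / 1024 / 1024 / 2 ))
if [ "$heap_gb" -gt 30 ]; then heap_gb=30; fi
if [ "$heap_gb" -lt 1 ]; then heap_gb=1; fi

# Dry run on a scratch copy of the default jvm.options heap lines
tmp=$(mktemp)
printf -- '-Xms2g\n-Xmx2g\n' > "$tmp"
sed -i -e "s#Xms2g#Xms${heap_gb}g#" -e "s#Xmx2g#Xmx${heap_gb}g#" "$tmp"
result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```

Once the output looks right for your hardware, apply the same substitution to the real file as shown above.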
Create a directory for the Elasticsearch datastore and set ownership on it. Ideally you would have a separate drive or partition mounted at this location.
sudo mkdir /mount
sudo mkdir /mount/elasticsearch
sudo chown -R elasticsearch:elasticsearch /mount/elasticsearch
Add the firewall rules to allow Elasticsearch nodes to communicate with each other. The rules below only allow port 9300, which is used for cluster communication. Port 9200 is used for client connections and is not opened. As a best practice, do not make client connections directly to a data or master Elasticsearch node; use a client node instead. The client node can be a dedicated server running Elasticsearch only as a client, or it can be Elasticsearch configured to run as a client installed on the same server as Kibana/Logstash/xyz_application.
sudo firewall-cmd --permanent --add-port=9300/tcp
sudo firewall-cmd --reload
Configure Elasticsearch. Below are sample configs for a master node, a data node, and a client node. The master node is 10.1.1.10, the data node is 10.1.1.20, and the client node is 10.1.1.30. These are the complete configs you need for a functioning Elasticsearch install.
sudo vi /etc/elasticsearch/elasticsearch.yml
# Master node
cluster.name: myelasticsearchcluster
node.name: ES-master-1
node.data: false
node.master: true
path.data: [ "/mount/elasticsearch/" ]
path.logs: "/var/log/elasticsearch/"
network.host: [ "10.1.1.10", "127.0.0.1" ]
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: [ "10.1.1.10" ]
discovery.zen.minimum_master_nodes: 1
# Data node
cluster.name: myelasticsearchcluster
node.name: ES-data-1
node.data: true
node.master: false
path.data: [ "/mount/elasticsearch/" ]
path.logs: "/var/log/elasticsearch/"
network.host: [ "10.1.1.20", "127.0.0.1" ]
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: [ "10.1.1.10" ]
discovery.zen.minimum_master_nodes: 1
# Client node
cluster.name: myelasticsearchcluster
node.name: ES-client-1
node.data: false
node.master: false
path.data: [ "/mount/elasticsearch/" ]
path.logs: "/var/log/elasticsearch/"
network.host: [ "10.1.1.30", "127.0.0.1" ]
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: [ "10.1.1.10" ]
discovery.zen.minimum_master_nodes: 1
Ideally you want more than one master node, and there isn't a reason to have more than three under most circumstances. Changing the two lines below on *all* nodes, in addition to installing two more master nodes, will change the cluster to having three dedicated masters.
discovery.zen.ping.unicast.hosts: [ "10.1.1.10", "10.1.1.11", "10.1.1.12" ]
discovery.zen.minimum_master_nodes: 2
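The value 2 above comes from the standard quorum formula for zen discovery: minimum_master_nodes should be (number of master-eligible nodes / 2) + 1, which prevents a split-brain cluster. A quick sanity check of the arithmetic:

```shell
# Quorum rule for discovery.zen.minimum_master_nodes:
# (number of master-eligible nodes / 2) + 1
master_count=3
quorum=$(( master_count / 2 + 1 ))
echo "discovery.zen.minimum_master_nodes: $quorum"
```

For three master-eligible nodes this yields 2, matching the setting above; recompute it whenever you add or remove master nodes.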
Enable the Elasticsearch service to start on boot and then start it manually. There is no particular order that you must start the nodes up in, but the cluster will not initialize and start until the minimum number of master nodes is online.
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch
Now check your cluster. These commands can be run from any node. The first shows the cluster status and health, which should be "green". The second shows a list of the nodes in the cluster.
curl 'localhost:9200/_cat/health?v'
curl 'localhost:9200/_cat/nodes?v'
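If you want to script this check (for example as a post-install smoke test), a minimal sketch of a helper that fails unless _cat/health reports "green". The check_green function name and the sample input line are hypothetical, shaped like real _cat/health output:

```shell
# Sketch: fail unless the cluster health line contains "green".
# With a live cluster you would feed it:
#   check_green "$(curl -s localhost:9200/_cat/health)"
check_green() {
  case "$1" in
    *green*) echo "cluster healthy" ;;
    *) echo "cluster NOT green" >&2; return 1 ;;
  esac
}

# Hypothetical sample line standing in for real _cat/health output
check_green "1498600000 12:00:00 myelasticsearchcluster green 3 1 0 0 0 0 0 0 - 100.0%"
```

A non-zero exit code from the helper makes it easy to wire into cron jobs or deployment pipelines.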