Elasticsearch
Elasticsearch, according to the official website definition, is a distributed, free and open search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured. Elasticsearch is built on Apache Lucene and was first released in 2010 by Elasticsearch N.V. (now known as Elastic).
Kibana and Logstash soon became part of the tooling that makes Elasticsearch an industry standard for monitoring and logging infrastructure and application data. The three services (more recently four, with the addition of Beats) are christened the Elastic Stack, or simply the ELK Stack (that is, Elasticsearch, Logstash, and Kibana).
The Elasticsearch service has since evolved to include a varied collection of lightweight shipping agents called Beats, which send data to Elasticsearch.
That being said, let us start by installing Elasticsearch.
1. Add Elastic's signing key so that the downloaded package can be verified (skip this step if you've already installed packages from Elastic):
$wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
2. We then need to install the apt-transport-https package:
$sudo apt-get update && sudo apt-get install apt-transport-https
3. Add the repository definition to your system:
$echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
4. Alternatively, to install a version of Elasticsearch that contains only features licensed under Apache 2.0 (aka OSS Elasticsearch), add this repository definition instead:
$echo "deb https://artifacts.elastic.co/packages/oss-7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
5. Update your repositories and install Elasticsearch:
$sudo apt-get update && sudo apt-get install elasticsearch
6. Open the YAML configuration file to bind Elasticsearch to either a private IP or localhost:
$sudo vim /etc/elasticsearch/elasticsearch.yml
In the file, find the block below and uncomment it by removing the # signs:
network.host: "localhost"
http.port: 9200
cluster.initial_master_nodes: ["<PrivateIP>"]
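With the configuration saved, it is worth verifying that the node is up. A minimal sketch, assuming Elasticsearch is bound to localhost:9200 as configured above: start the service, query the root endpoint, and pull the cluster name out of the JSON reply. The sample response below is illustrative of the shape of the reply, not output captured from a real node.

```shell
# On the host itself you would run:
#   sudo service elasticsearch start
#   curl -s http://localhost:9200
# A healthy node answers with JSON; here we parse a sample of that shape:
resp='{"name":"node-1","cluster_name":"elasticsearch","version":{"number":"7.6.2"}}'
cluster=$(echo "$resp" | grep -o '"cluster_name":"[^"]*"' | cut -d'"' -f4)
echo "cluster_name=$cluster"
```

If curl returns "connection refused", recheck the network.host and http.port lines above.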
Install and Configure Logstash
1. First, make sure Java is installed on the machine:
$java -version
Result:
openjdk version "1.8.0_191"
OpenJDK Runtime Environment (build 1.8.0_191-8u191-b12-2ubuntu0.16.04.1-b12)
OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)
Otherwise, install it with:
$sudo apt-get install default-jre
2. Install Logstash:
$sudo apt-get install logstash
3. Configure Logstash with data. Create a Logstash configuration file in the Logstash directory:
$sudo nano /etc/logstash/conf.d/apache-01.conf
Then download this sample data file and import the log data into Logstash. Insert the configuration below, adjusting the path to wherever the file was downloaded:
input {
  file {
    path => "/home/ubuntu/apache-daily-access.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  geoip {
    source => "clientip"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
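To get a feel for what the grok filter's %{COMBINEDAPACHELOG} pattern extracts, here is a sketch in plain shell using a made-up line in the combined log format (the IP, path, and sizes are assumptions, not rows from the downloaded dataset). Logstash does the real parsing; this only illustrates which positional fields become the clientip and response fields:

```shell
# A hypothetical line in combined Apache log format:
line='83.149.9.216 - - [04/Jan/2015:05:13:42 +0000] "GET /index.html HTTP/1.1" 200 3638 "-" "Mozilla/5.0"'
# Field 1 is what grok maps to "clientip", field 9 to the response code:
clientip=$(echo "$line" | awk '{print $1}')
status=$(echo "$line" | awk '{print $9}')
echo "clientip=$clientip status=$status"
```

The geoip filter then looks up that clientip to add location fields to each event.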
4. Create a configuration file called 02-beats-input.conf where you will set up your Filebeat input:
$sudo nano /etc/logstash/conf.d/02-beats-input.conf
input {
  beats {
    port => 5044
  }
}
5. Next, create a configuration file called 30-elasticsearch-output.conf:
$sudo nano /etc/logstash/conf.d/30-elasticsearch-output.conf
Insert the following output configuration. Essentially, this configures Logstash to store the Beats data in Elasticsearch:
output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}
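The index option uses Logstash's sprintf format, so each event is written to a daily index named after the Beat that shipped it. As a sketch, for an assumed Filebeat 7.6.2 event on 1 March 2020 the pattern expands like this:

```shell
# Expand "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}" by hand
# for an assumed event (uses GNU date syntax):
beat="filebeat"
version="7.6.2"
day=$(date -u -d "2020-03-01" +%Y.%m.%d)
echo "${beat}-${version}-${day}"
```

Daily indices like this make it easy to expire old log data by deleting whole indices rather than individual documents.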
6. Test to be sure there are no syntax errors in your Logstash configuration:
$sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
7. Then start the Logstash service:
$sudo service logstash start
In case the above command returns "logstash: unrecognized service", or you get an error like "logstash not installed" after installing and running the Logstash service, do not panic; simply install the system/service package for Logstash:
$sudo /usr/share/logstash/bin/system-install /etc/logstash/startup.options sysv
Then run the start command again.
INSTALL AND CONFIGURE KIBANA
1. Install Kibana from the apt repository:
$sudo apt-get install kibana
$sudo service kibana start
2. Configure a username and password for Kibana login (this assumes Kibana is served behind an Nginx reverse proxy, which reads the htpasswd file):
$echo "kibanaadmin:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users
Visit http://localhost:5601/status to see information about your server/host
INSTALL AND CONFIGURE BEATS
1. Beats are several lightweight data shippers that collect data from various sources and transport it to Logstash or Elasticsearch.
Examples include: Filebeat for log files, Metricbeat for metric data, Packetbeat for network data, Winlogbeat for Windows events, Auditbeat for audit frameworks and file integrity, and Heartbeat for service availability and active probing.
We begin by installing and configuring Filebeat:
$sudo apt install filebeat
2. Open the filebeat.yml file in the /etc/filebeat directory to comment out the Elasticsearch output. This is because Filebeat will not send data directly to Elasticsearch; it will send it to Logstash instead.
$sudo nano /etc/filebeat/filebeat.yml
Scroll to the Elasticsearch Output section and add a # sign to comment out the output.elasticsearch and hosts lines:
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]
Just below, in the Logstash Output section, uncomment the output.logstash and hosts: ["localhost:5044"] lines by removing the # signs:
output.logstash:
# The Logstash hosts
hosts: ["localhost:5044"]
Once that is done, save and close the file editor: Ctrl+X, then Y, then Enter.
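Before starting Filebeat, it is worth checking that the edit took. On a real host you can run sudo filebeat test config; the sketch below does a cruder check with grep against a sample snippet mirroring how the file should now look (in practice, point the greps at /etc/filebeat/filebeat.yml instead of the sample):

```shell
# Sample of how the two output sections should look after editing:
cat > /tmp/filebeat-sample.yml <<'EOF'
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
EOF
# Elasticsearch output commented out, Logstash output active:
grep -q '^#output.elasticsearch' /tmp/filebeat-sample.yml \
  && grep -q '^output.logstash' /tmp/filebeat-sample.yml \
  && echo "outputs look OK"
```

If both outputs were left active, Filebeat would refuse to start, since it only supports one output at a time.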
More Filebeat modules can be found at https://www.elastic.co/guide/en/beats/filebeat/7.6/filebeat-modules.html
Then start Filebeat:
$sudo service filebeat start
Next, enable the Filebeat system module:
$sudo filebeat modules enable system
Check the list of modules in Filebeat:
$sudo filebeat modules list
You can see the parameters of the module in the /etc/filebeat/modules.d/system.yml configuration file.
Now, set up the Filebeat ingest pipeline, which parses log data before it is sent to Elasticsearch through Logstash:
$sudo filebeat setup --pipelines --modules system
Install and configure Metricbeat:
$sudo apt-get install metricbeat
$sudo service metricbeat start