Ask Sawal

Discussion Forum

How to aggregate in Elasticsearch?

1 Answer(s) Available
Answer # 1

The Elastic Stack has four main components: Elasticsearch, Logstash, Kibana, and Beats.

In this guide, you will install the Elastic Stack on an Ubuntu 18.04 server. You will learn how to install all of the components of the Elastic Stack and how to use them to gather and visualize system logs.

We will install all of these components on a single server, which we will refer to as our Elastic Stack server, and use Nginx to proxy Kibana so that it is accessible over the internet.

You will need the following to complete this lesson.

Because the Elastic Stack is used to access valuable information that you would not want unauthorized users to see, it is important that you keep your server secure, for example by installing a TLS/SSL certificate. This is encouraged but not compulsory.

If you plan to use Let's Encrypt, it makes more sense to complete the Let's Encrypt on Ubuntu 18.04 guide at the end of the second step of this lesson, because you will make changes to your Nginx server block over the course of this guide. Before configuring Let's Encrypt on your server, you will need the following in place:

The Elastic Stack components are not available in Ubuntu's default package repositories.

They can, however, be installed with APT after adding Elastic's package source list.

All of the Elastic Stack's packages are signed with the Elasticsearch signing key in order to protect your system from package spoofing. Your package manager will consider packages that have been verified using the key to be trustworthy.

You will need to import the GPG key and add the Elastic package source list to install Elasticsearch.

To import the GPG key, run the following command.
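A sketch of the import step, using the public key URL that Elastic publishes:

```shell
# Download Elastic's public GPG signing key and register it with APT
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
```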

Next, add the Elastic source list to the sources.list.d directory, where APT will look for new sources.
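A sketch of that step, assuming the 7.x package series (adjust the version to match the release you want):

```shell
# Append the Elastic repository to a dedicated source list file
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
```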

Next, update your package lists so that the new Elastic source is read.
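```shell
# Re-read all package lists, including the newly added Elastic source
sudo apt update
```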

This command is used to install Elasticsearch.
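```shell
# Install the Elasticsearch package from the Elastic repository
sudo apt install elasticsearch
```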

To configure Elasticsearch, edit its main configuration file, /etc/elasticsearch/elasticsearch.yml, with your preferred text editor. Here, we'll use nano.

Elasticsearch listens for traffic on port 9200. You should restrict outside access to your Elasticsearch instance to prevent outsiders from reading your data or shutting down your Elasticsearch cluster through the REST API. Find the line that specifies network.host, uncomment it, and replace its value with localhost so it looks like this:
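Assuming the stock configuration file, the edited line in /etc/elasticsearch/elasticsearch.yml would read:

```yaml
network.host: localhost
```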

Save and close elasticsearch.yml (in nano, press CTRL+X, then Y, then ENTER; use the equivalent keys if you chose a different editor). Then, start the Elasticsearch service with systemctl.

Next, run the following command to enable Elasticsearch to start up every time your server boots.

You can test your service by sending it a request.
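The start, enable, and test steps sketched as commands (the curl address assumes the default port 9200 was left in place above):

```shell
sudo systemctl start elasticsearch     # start the service now
sudo systemctl enable elasticsearch    # launch Elasticsearch at every boot
curl -X GET "localhost:9200"           # send a test request; prints node info as JSON
```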

You will see a response with some basic information about your local node.

Now that Elasticsearch is up and running, let's install Kibana, the next component of the Elastic Stack.

According to the official documentation, you should install Kibana only after installing Elasticsearch.

Installing in this order ensures that the components each product depends on are correctly in place.

The remaining components of the Elastic Stack can be installed using apt, because you've already added the Elastic package source.

Then enable and start the Kibana service.
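Assuming the Elastic APT source added earlier, the install and service commands look like:

```shell
sudo apt install kibana
sudo systemctl enable kibana
sudo systemctl start kibana
```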

Because Kibana is configured to listen only on localhost, we need to set up a reverse proxy to allow external access to it. We will use Nginx, which is already installed on the server, for this purpose.

To access the Kibana web interface, you'll need to create an administrative Kibana user. If you want to make your account more secure, you should choose a non-standard name for your user that would be difficult to guess.

The following command will create the administrative Kibana user and password.
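A sketch of that command; kibanaadmin is a placeholder username (substitute your own), and the hashed credentials land in /etc/nginx/htpasswd.users, which Nginx will read in a later step:

```shell
# openssl prompts for a password and prints "user:hash"; tee appends it to the file
echo "kibanaadmin:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users
```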

You will configure Nginx to require this username and password and to read them from the htpasswd.users file.

At the prompt, enter and confirm a password. You will need this login to access the Kibana web interface.

Next, we will create a server block file.

We will refer to this file as example.com, although you may find it helpful to give yours a more descriptive name. You could name this file after your FQDN if you have a FQDN record for this server.

Add the following code block into the file, making sure to update example.com to match your server's FQDN or public IP address. This code configures Nginx to direct your server's HTTP traffic to the Kibana application, which is listening on localhost:5601. It also configures Nginx to read the htpasswd.users file and require basic authentication.

You may have already created this file and populated it with some content if you followed the prerequisite Nginx tutorial. Before adding the following, it is advisable to remove all the existing content from the file.
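A sketch of the server block, assuming example.com as the domain and the htpasswd.users file created in the previous step:

```nginx
server {
    listen 80;

    server_name example.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        # Forward all requests to Kibana listening on localhost:5601
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```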

Save and close the file when you are done.

Next, enable the new configuration by creating a symbolic link to the sites-enabled directory. If you already created a server block file with the same name in the Nginx prerequisite, you do not need to run this command.

Check the configuration for errors.

If any errors are reported in your output, go back and double-check that the content you placed in your configuration file was added correctly. Once you see syntax is ok in the output, go ahead and restart the Nginx service.

If you followed the initial server setup guide, you should have a UFW firewall enabled. To allow connections to Nginx, we can adjust the rules by typing:
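The enable, test, restart, and firewall steps sketched together, assuming the example.com file name from above and UFW's standard Nginx Full profile:

```shell
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com
sudo nginx -t                    # check the configuration for syntax errors
sudo systemctl restart nginx     # apply the new server block
sudo ufw allow 'Nginx Full'      # permit HTTP and HTTPS through the firewall
```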

You can now access Kibana via your Elastic Stack server's FQDN or public IP address. You can check Kibana's status page by navigating to that address followed by /status and entering your login credentials when prompted.

The status page shows information about the server's resources and installed software.

Now that the Kibana dashboard is configured, let's install the next component: Logstash.

Although it is possible for Beats to send data directly to the Elasticsearch database, we recommend using Logstash to process the data first. This will allow you to collect data from different sources, transform it into a common format, and export it to another database.

This command will install Logstash.
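With the Elastic source already in place, the install is a single command:

```shell
sudo apt install logstash
```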

After installing Logstash, you can move on to configuring it.

The configuration files are located in the /etc/logstash/conf.d directory. It's helpful to think of Logstash as a pipeline which takes in data at one end, processes it in one way or another, and sends it out to its destination, in this case, the destination being Elasticsearch. There are two required elements, input and output, and one optional element, filter. The input, filter, and output plugins consume data from a source, process it, and write it to a destination.

You will need to create a configuration file to set up your Filebeat input.

Insert the following input configuration. This specifies a beats input that will listen on TCP port 5044.
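A sketch of that input block, assuming the conventional file name 02-beats-input.conf and the default Beats port:

```
input {
  beats {
    port => 5044
  }
}
```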

Save and close the file.

Next, create a configuration file called 10-syslog-filter.conf, where we will add a filter for system logs.

Insert the following filter configuration, taken from the official Elastic documentation. This filter parses incoming system logs to make them structured and usable by the predefined Kibana dashboards.
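An abridged sketch of the syslog branch of that filter; the full version in the official documentation also handles auth logs:

```
filter {
  if [fileset][module] == "system" {
    if [fileset][name] == "syslog" {
      grok {
        # Split each syslog line into timestamp, hostname, program, pid, and message
        match => { "message" => "%{SYSLOGTIMESTAMP:[system][syslog][timestamp]} %{SYSLOGHOSTNAME:[system][syslog][hostname]} %{DATA:[system][syslog][program]}(?:\[%{POSINT:[system][syslog][pid]}\])?: %{GREEDYMULTILINE:[system][syslog][message]}" }
        pattern_definitions => { "GREEDYMULTILINE" => "(.|\n)*" }
        remove_field => "message"
      }
      date {
        match => [ "[system][syslog][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
      }
    }
  }
}
```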

When finished, save and close the file.

Lastly, create a configuration file called 30-elasticsearch-output.conf.

Insert the following output configuration.
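A sketch of that output block, assuming Elasticsearch is listening on localhost:9200 as configured earlier:

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
```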

This output configures Logstash to store the Beats data in an index named after the Beat used. The Beat used in this tutorial is Filebeat.

Save and close the file.

If you want to add filters for other applications that use the Filebeat input, be sure to name the files so they're sorted between the input and the output configuration, meaning the file names should begin with a two-digit number between 02 and 30.

This command will help you test your Logstash configuration.
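A sketch of that test command, run as the logstash user so file permissions match the service:

```shell
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
```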

After a few seconds, your output will display Configuration OK if there are no syntax errors. If you do see errors in your output, update your configuration to correct them.

If your configuration test is successful, start Logstash to put the configuration changes into effect.
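```shell
sudo systemctl start logstash
sudo systemctl enable logstash   # also start Logstash at every boot
```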

Niki Pitts
Baker