Elastic Stack is a great open-source tool; in fact, it is the world's most popular log management platform. It provides countless features: scalability, speed, high availability, and, most importantly, a vast community of enthusiasts! So an obvious decision is to try it yourself and perhaps install it internally in your company, or on your own laptop, to gain all the functionality it provides. But before you start installing Elasticsearch, Logstash, and Kibana, you should take a step back and consider utilizing Elastic Stack as a service like Logstail.

The most important question when it comes to using open-source software is often the same: install it on my own, or go with a cloud-based solution? Based on our experience, we will explain the benefits of each approach and come to a conclusion about which is more effective.

Installing Elastic Stack

The initial step is straightforward. You can rely on the freely available online documentation, and a great resource is of course our own article on how to install Elastic Stack. Getting up and running with your first instances of Elasticsearch, Logstash, Kibana, and Beats (usually Filebeat or Metricbeat) is easy, and there is plenty of documentation available if you encounter issues during installation (see our Elasticsearch tutorial, Logstash tutorial, and Kibana tutorial for help).

However, things are not always that simple. Depending on where you decide to install the stack (locally, in the cloud, or in a hybrid setup), you may face various issues, from configuration to networking. Elasticsearch, Logstash, and Kibana may not always cooperate as intended, causing data not to be analyzed correctly.

The next step is configuring the pipeline into the stack. This step involves complexity because there are almost always multiple data sources from which you are pulling logs: system logs, web server logs, application logs, or database logs. Configuring all these integrations and pipelines is complicated, and errors can prevent you from getting the results you expect.
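As a rough sketch, a minimal Logstash pipeline pulling from two sources might look like the following (the file paths and index naming scheme here are illustrative, not a recommendation):

```conf
input {
  # Hypothetical system log source
  file {
    path => "/var/log/syslog"
    type => "system"
  }
  # Hypothetical web server log source
  file {
    path => "/var/log/nginx/access.log"
    type => "webserver"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # Route each source to its own daily index
    index => "logs-%{type}-%{+YYYY.MM.dd}"
  }
}
```

Every additional source means another input, usually another filter chain, and often another index pattern to manage, which is where the complexity accumulates.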

Log parsing, mapping, scaling, and tuning

At Logstail we often say that log management that produces useful insights is an art. This is because shipping logs may sometimes be easy, but analyzing them in a structured way that makes sense is much more difficult.

The procedure to actually get the insights you need involves parsing, mapping, scaling, and finally tuning your logs.

Parsing is the process of splitting data into chunks of information that are easier to manipulate and store, and in our case it involves configuring Logstash to use a grok filter. This process can become very time-consuming. And you must not forget that logs are dynamic: their format changes over time, requiring periodic reconfiguration.
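A grok pattern is essentially a named regular expression. As a minimal sketch of the idea (the log line format and field names here are illustrative), the equivalent in plain Python looks like this:

```python
import re

# Rough Python equivalent of a grok pattern such as
# "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes}"
# (the log format here is illustrative, not a real grok filter)
LOG_PATTERN = re.compile(
    r"(?P<client>\d{1,3}(?:\.\d{1,3}){3})\s+"
    r"(?P<method>\w+)\s+"
    r"(?P<request>\S+)\s+"
    r"(?P<bytes>\d+)"
)

def parse_line(line):
    """Split a raw log line into named fields, or return None on mismatch."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

print(parse_line("192.168.1.10 GET /index.html 1024"))
# {'client': '192.168.1.10', 'method': 'GET', 'request': '/index.html', 'bytes': '1024'}
```

The maintenance burden comes from the second case: a line that no longer matches produces no fields at all, so every format change upstream forces you to revisit the pattern.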

Mapping is the next step in successful log management. An Elasticsearch mapping defines the fields for documents in an index and how those fields should be indexed and stored in Elasticsearch. Dynamic mapping is the default, but as with parsing, if your logs change and you index documents whose fields conflict with the existing mapping, Elasticsearch will reject them. If some of these terms are unfamiliar to you, please refer to our article here.
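To make this concrete, here is a sketch of an explicit mapping you might define when creating an index, assuming a recent Elasticsearch version where mapping types have been removed (the index and field names are illustrative):

```conf
PUT /weblogs
{
  "mappings": {
    "properties": {
      "client":     { "type": "ip" },
      "method":     { "type": "keyword" },
      "bytes":      { "type": "integer" },
      "@timestamp": { "type": "date" }
    }
  }
}
```

Once "bytes" is mapped as an integer, a document arriving with a non-numeric value in that field will be rejected, which is exactly the kind of silent data loss you have to watch for when log formats drift.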

As we move closer to a production environment, things get more demanding. Scaling is the next move to ensure high availability. If an event in production causes a sudden spike in traffic, the volume of generated logs will increase as well. To handle these cases we install additional components on top of (or in front of) Elastic Stack. Every production Elastic deployment nowadays includes a queuing system in front of Logstash to ensure that bottlenecks do not form during periods of high traffic. These are usually Redis or Kafka instances, and as you can imagine, they significantly increase the administrative effort.
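The buffering idea can be illustrated in miniature with Python's standard library (in production the buffer would of course be Redis or Kafka, not an in-process queue):

```python
import queue
import threading

buffer = queue.Queue()  # stand-in for Redis/Kafka; absorbs traffic spikes

def producer():
    # Simulate a sudden spike: 100 log lines arrive almost at once.
    for i in range(100):
        buffer.put(f"log line {i}")

def consumer(processed):
    # Logstash-like consumer draining the buffer at its own pace.
    while True:
        try:
            line = buffer.get(timeout=0.5)
        except queue.Empty:
            break
        processed.append(line)

processed = []
t_prod = threading.Thread(target=producer)
t_cons = threading.Thread(target=consumer, args=(processed,))
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
print(len(processed))  # 100: nothing dropped despite the spike
```

The queue decouples the rate at which logs arrive from the rate at which they are processed; the price is that the queue itself becomes one more piece of infrastructure to deploy, monitor, and size.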

Fine-tuning refers to the optimization actions required to achieve the best performance. This is one of the last steps before gaining all the benefits of Elastic Stack, but it also requires a high degree of experience. Every component of your installation (and there are quite a few) must cooperate to achieve scalability, speed, and high availability for the ELK Stack. For example, the number of indices handled by Elasticsearch greatly affects performance, so old or unused indices have to be removed. Fine-tuning the size of shards and managing unused indices are likewise tasks that affect the performance of your Elastic Stack installation.
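As a back-of-the-envelope sketch of one such tuning decision, assuming the common guideline of keeping each primary shard in the tens-of-gigabytes range (the 30 GB target below is an assumption, not an official figure):

```python
import math

def primary_shards(expected_index_size_gb, target_shard_size_gb=30):
    """Rough primary-shard count for an index, assuming a target
    shard size in the tens-of-gigabytes range (illustrative guideline)."""
    return max(1, math.ceil(expected_index_size_gb / target_shard_size_gb))

print(primary_shards(10))   # 1: a small index needs no extra shards
print(primary_shards(250))  # 9: a large index is split to spread the load
```

Oversharding (many tiny shards) wastes heap and file handles, while undersharding limits parallelism, which is why this kind of estimate has to be revisited as data volumes grow.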

Retention, archiving, upgrading, and choice of infrastructure

After your data has been successfully ingested, retention is the next consideration. If not configured correctly, indices pile up and can eventually cause Elasticsearch to crash. In such situations you can either scale up or manually remove old indices. Elasticsearch Curator is the best available tool for handling this, but again the administrative effort involved is significant.
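What Curator automates can be sketched roughly as follows: select time-based indices older than a cutoff and mark them for deletion (the "logs-YYYY.MM.dd" naming scheme below is illustrative):

```python
from datetime import datetime, timedelta

def indices_to_delete(index_names, retention_days, today=None):
    """Pick time-based indices (named like 'logs-YYYY.MM.dd') older than
    the retention window -- a rough sketch of what Curator automates."""
    today = today or datetime.utcnow()
    cutoff = today - timedelta(days=retention_days)
    stale = []
    for name in index_names:
        try:
            day = datetime.strptime(name, "logs-%Y.%m.%d")
        except ValueError:
            continue  # skip indices that don't follow the naming scheme
        if day < cutoff:
            stale.append(name)
    return stale

names = ["logs-2024.01.01", "logs-2024.03.01", "kibana-settings"]
print(indices_to_delete(names, retention_days=30, today=datetime(2024, 3, 10)))
# ['logs-2024.01.01']
```

The real tool adds filters, dry runs, and scheduling on top of this core idea, and each of those is configuration you have to maintain yourself.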

Log archiving is also a consideration. Archiving is nowadays a compliance requirement, and logs must be archived in their original formats. Cloud-hosted Elastic solutions like Logstail also provide this service!

Upgrading Elastic Stack is one of the major issues you must consider if you have deployed it on your own. Complex procedures such as replication or data synchronization, if not performed correctly when upgrading Elasticsearch, can cause data loss. And things become even more puzzling if you are running a multi-node cluster.

Some serious planning is involved in the choice of infrastructure. The best approach is to buy equipment with future business growth in mind, but planning involves many factors and doesn't always reflect real needs. You may, for example, have to buy the same piece of equipment twice just because the storage turned out to be insufficient. Keep in mind that log management systems consume large amounts of CPU, memory, storage, and network bandwidth; the underlying infrastructure can cost you a lot!


Security is always an issue, and for us at Logstail it is even more important! Your log files contain sensitive information about you, your company, and your customers, so you expect your data to be safe. Authorization, authentication, and other features must therefore protect the logs coming into Elastic Stack, and with them the assets of your company.

Elastic Stack does not provide easy ways to implement enterprise-grade data protection strategies. Elastic Stack today is used extensively for PCI compliance and SIEM but does not include proper security functionality out of the box. You can use the provided security features such as LDAP/AD support, Single Sign-On, and encryption at rest, but all of these require experienced Elastic administrators and involve a relatively steep learning curve.


The question we posed at the beginning of the article was, in effect, "Can you deploy and run an Elastic Stack on your own?" The answer is yes, you can. But the deciding factor in whether to run it yourself or go with a cloud-hosted company like Logstail is the amount of resources you can assign to the task. And don't forget that deploying and running Elastic Stack is not a one-off project: it requires constant effort to stay sharp and keep producing the insights you want.

In conclusion, the overall cost of running your own Elastic Stack is considerably higher than that of a cloud-hosted Elastic solution like Logstail.

Logstail, with its advanced features, brings the functionality of centralized monitoring to your hands. You no longer have to be an engineer to set up and use Elasticsearch. Now you can convert your data into actionable insights with just a few tweaks, maximize the performance of your infrastructure, and be notified of potential problems so you can take the appropriate actions. Sign up for a free demo to realize the power of Logstail.

Contact Our Experts or Sign Up for Free
