HAProxy (High Availability Proxy) is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for very high traffic web sites and powers quite a number of the world’s most visited ones. Over the years it has become the de-facto standard open-source load balancer, is now shipped with most mainstream Linux distributions, and is often deployed by default on cloud platforms.

In this guide, we will provide a general overview of the HAProxy logging philosophy, along with examples of how it can be used to improve the performance and reliability of your own server environment from both an operational and a security perspective.

In order to analyze HAProxy logs we will use the ELK Stack along with the Logstail.com cloud service. The ELK Stack (Elasticsearch, Logstash, Kibana and Beats) is the world’s most popular open-source log management and log analysis platform, and it offers engineers an extremely easy and effective way of monitoring HAProxy. To complete the steps shown in this guide you will need HAProxy installed and active, and either of the following:

  1. ELK Stack installed and configured by you
  2. Or just a Logstail.com account!

The basics of HAProxy logging

HAProxy has the ability to provide a fine level of detail, which is very important for troubleshooting complex environments. Standard information provided in the logs includes client ports, TCP/HTTP state timers, the precise session state at termination and the precise termination cause, information about decisions to direct traffic to a server, and of course the ability to capture arbitrary headers.

In order to improve administrators’ reactivity, it offers great transparency about encountered problems, both internal and external, and it can send logs to different targets at the same time with different level filters:

  – global process-level logs (system errors, start/stop, etc.)

  – per-instance system and internal errors (lack of resources, bugs, …)

  – per-instance external troubles (servers going up/down, max connections reached)

  – per-instance activity (client connections), either at establishment or at termination

  – per-request control of the log level, e.g. http-request set-log-level silent if sensitive_request (see the sketch after this list)
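
As a quick sketch of that last point, the per-request control could be combined with an ACL in a frontend section; the ACL name and the path it matches are only illustrative:

    frontend www
        bind :80
        # hypothetical ACL matching requests we do not want logged
        acl sensitive_request path_beg /health
        # drop the log level to silent for matching requests
        http-request set-log-level silent if sensitive_request
        default_backend app_servers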

In the following image we can see the severity levels of the HAProxy logs, which follow the standard syslog severities (emerg, alert, crit, err, warning, notice, info, debug).

We can adjust the severity level of the logs we want to receive in the HAProxy configuration file, haproxy.cfg, which is located in the /etc/haproxy/ directory of our machine (assuming HAProxy was installed from a package). In this example we have configured our load balancer to receive all logs (there is no level specified after local0).
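
For reference, the relevant part of haproxy.cfg typically looks like the snippet below; leaving the level off after local0 means every severity is forwarded:

    global
        # no level after local0, so all severities are sent to the syslog socket
        log /dev/log local0

    defaults
        log global
        option httplog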

The general flow of the logs is depicted in the following image. HAProxy produces logs and writes them to the /dev/log socket (on systemd systems this is a symbolic link to the systemd journal’s socket). Rsyslog reads them from there, and the final step is for rsyslog to write them to a file, /var/log/haproxy.log by default.

 

The rsyslog configuration for HAProxy lives in the /etc/rsyslog.d/ directory; the drop-in file there defines the default file the logs are written to. If we want, we can change this destination.
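
On Debian/Ubuntu the packaged drop-in is usually /etc/rsyslog.d/49-haproxy.conf and looks roughly like this (the file name and socket path may differ on your distribution):

    # extra socket inside HAProxy's chroot so the chrooted process can still log
    $AddUnixListenSocket /var/lib/haproxy/dev/log

    # send HAProxy messages to a dedicated log file and stop further processing
    :programname, startswith, "haproxy" {
      /var/log/haproxy.log
      stop
    }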

 

 

How to ship your logs to ELK Stack (or Logstail.com)

The most user-friendly way to ship your HAProxy logs into the ELK Stack (or Logstail.com) is with Filebeat. Filebeat belongs to the family of Beats, open-source data shippers that you install as agents on your servers to send operational data to Elasticsearch. In this link, you will find an excellent guide on how to ship your logs with Filebeat. In essence, there is no real need to add Logstash to handle the processing, which makes the set-up of the pipeline much simpler. Instead of the ELK Stack you can choose Logstail.com and ship your logs directly, without the need to use the Filebeat HAProxy module (this is even easier!).

1) Installing Filebeat on the HAProxy host

First, add Elastic’s signing key so that the downloaded package can be verified (skip this step if you’ve already installed packages from Elastic):
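
On a Debian/Ubuntu system this typically looks like the following (newer Elastic releases document a keyring-based variant instead of apt-key):

    wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -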

Next, add the repository definition to your system
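
For example, assuming the 7.x repository (adjust the version to match the stack you run):

    echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list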

Update and install Filebeat with:
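
On Debian/Ubuntu:

    sudo apt-get update && sudo apt-get install filebeat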

2) Enable the HAProxy Filebeat module (not necessary in the Logstail.com case)

When shipping your logs to your own ELK Stack, the next step is to enable the HAProxy Filebeat module (not needed in the Logstail.com case). Be aware that this module is not available on Windows. To do this, enter:
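
The module is enabled with Filebeat’s modules command:

    sudo filebeat modules enable haproxy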

Next, use the following setup command to load a recommended index template and deploy sample dashboards for visualizing the data in Kibana:
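
The standard setup command is:

    sudo filebeat setup -e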

You can further refine the behavior of the HAProxy module by specifying variable settings in the modules.d/haproxy.yml file, or by overriding settings at the command line. By default the module is configured to receive logs via syslog on port 9001, but it can also be configured to read from a file path. See the following example:
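
A minimal sketch of modules.d/haproxy.yml that reads from the default rsyslog destination instead (adjust var.paths to wherever rsyslog writes your HAProxy logs):

    - module: haproxy
      log:
        enabled: true
        # read from a log file instead of the default syslog input on port 9001
        var.input: "file"
        var.paths: ["/var/log/haproxy.log"]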

Last but not least, start Filebeat with:
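
On a systemd-based system:

    sudo systemctl start filebeat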

It’s time to verify that your pipeline is working as expected. First, cURL Elasticsearch to verify that a “filebeat-*” index has indeed been created:
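
Assuming Elasticsearch listens on its default port on the same host:

    curl -XGET 'localhost:9200/_cat/indices?v'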

 

Steps to ship HAProxy logs to Logstail.com

To make things even easier, Logstail.com gives you the opportunity to automatically parse the HAProxy logs, without the need for Logstash or Filebeat’s HAProxy module. In order to put your logs to operational use, all you have to do is install Filebeat on the HAProxy host and modify the filebeat.yml configuration file with the settings from your Logstail.com account:

1) Download the SSL certificate

Firstly, to ship your logs to Logstail.com securely (encrypted), you have to download our public SSL certificate:
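
A sketch of what this step usually looks like; the certificate URL and destination path below are placeholders, so copy the exact command from your Logstail.com shippers page:

    # placeholder URL and path, use the exact values from the shippers page
    sudo curl -o /etc/ssl/certs/logstail.crt https://<certificate-url-from-shippers-page>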

2) Editing Filebeat

Back up your filebeat.yml and create a new one:
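
For a default package installation the file lives under /etc/filebeat/:

    sudo cp /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.bak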

Open the filebeat.yml configuration file with your favorite editor (vim or nano):
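
For example, with nano:

    sudo nano /etc/filebeat/filebeat.yml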

Paste the following configuration (for Debian, taken from the shippers page), and replace USER_TOKEN with your account’s token:
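
The authoritative configuration comes from the shippers page; the sketch below only illustrates its general shape, and the endpoint, port, token field and certificate path are placeholders:

    filebeat.inputs:
      - type: log
        paths:
          - /var/log/haproxy.log
        fields:
          token: "USER_TOKEN"        # replace with your account's token
        fields_under_root: true

    output.logstash:
      # placeholder endpoint and certificate path, copy the real values from the shippers page
      hosts: ["<logstail-endpoint>:5044"]
      ssl.certificate_authorities: ["/etc/ssl/certs/logstail.crt"]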

Save the file and restart Filebeat with:
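
On a systemd-based system:

    sudo systemctl restart filebeat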

Finally, check that HAProxy data is being received from your Filebeat by navigating to the Kibana page of your account.

How to analyze HAProxy logs

Now you can query your logs with the help of Kibana. Kibana gives you many query options, and features like auto-suggest and auto-complete make searching much easier. For example, you can search with free text. Just enter your search query in the search field as follows (search word: Seattle):
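
Beyond free text, KQL also lets you filter on parsed fields; the field names below come from the Filebeat HAProxy module’s output and may differ depending on how your logs are parsed:

    http.response.status_code >= 500 and haproxy.backend_name : "app-servers"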

The query options are extremely varied depending on your needs, which can include analytics, troubleshooting, security and more.


How to visualize them

With Kibana you can instantly visualize your data with dashboards in many different ways. The most frequent use cases for visualizing HAProxy logs with Kibana are Backend Breakdown, Frontend Breakdown and Response Codes Over Time.

For the backend and the frontend it is very important to have breakdown visualizations. The frontend and backend are two central points where monitoring is very effective, both because we can immediately spot problems if a breakdown occurs and because these are the points where requests are submitted. Another common visualization for HAProxy logs monitors response codes over time. This gives you a good picture of normal behavior and can help you detect a sudden spike in error response codes, so you can identify suspicious traffic. Finally, Logstail.com automatically geo-enriches the IP fields within the HAProxy logs, so you can use a Coordinate Map visualization to map the requests as shown below:


Based on your needs you can customize the visualizations in Kibana; these were just some simple examples of the tool’s capabilities. The creation of a dashboard is the finishing touch once the visualizations are ready. With a comprehensive dashboard we have an operational overview of HAProxy. To make things easier we provide Apps2Go, a library of ready-made dashboards for HAProxy.

 

Alerting

Alerting is an extremely useful feature provided by Logstail.com. We provide a mechanism to receive notifications when certain indicators exceed the thresholds you have defined. Now you have the ability to immediately spot performance-related or other issues and take the appropriate measures to mitigate the problem. This functionality is a must when you want real-time operational awareness of your systems. Email and Slack are currently supported. In our example, we create an alert that fires if the count of response codes exceeds 10,000.

After defining the trigger, we can configure the actions for that trigger. In this example, we will choose to be notified by a Slack message.

We can configure the message and eventually produce an effective alerting mechanism for us and our team. You can read a thorough article about how our alerting mechanism works here.

Conclusion

Logstail.com, with its advanced features, brings the functionality of the ELK Stack to your hands. You no longer have to be an engineer to set up and use Elasticsearch. Now you can convert your data into actionable insights with just a few tweaks. You can maximize the performance of your infrastructure or be notified of potential problems and take the appropriate actions. Sign up for a free demo to see the power of Logstail.com for yourself.

