2017-11-19

Load Balancer with “LVS + Keepalived + DSR”

> to Japanese Pages

1. Summary

In this post, I will explain why the load balancer solution built with the “LVS + Keepalived + DSR” design is effective and how to build it.

2. Introduction

The load balancer solution built with “LVS + Keepalived + DSR” is a mature technology, but I am posting about it because my friends asked me to. In highly scalable projects, the load balancer comes up at least once as an agenda item in system performance meetings. I have experienced this many times, and we often hear negative opinions about the performance of software load balancers. In such cases, the name of a hardware load balancer such as BIG-IP sometimes enters the discussion. However, we cannot overlook the fact that a load balancer built with the “LVS + Keepalived + DSR” design runs at a 100% SLA and a 10% load factor in one of our projects, which receives one million accesses per day. This demonstrates that the design is an effective load balancer solution on premises or in cloud hosting environments that do not provide a load balancer PaaS. The result comes from the communication method called Direct Server Return (DSR): the dramatic load reduction on the load balancer is achieved because the real servers return their responses directly to the client instead of passing them back through the load balancer. In addition, this solution is not affected by various hardware-related problems (failure, deterioration, support contracts, support quality, end of product support, and so on). In this post, I will explain how to build the “LVS + Keepalived + DSR” design. I will not conduct benchmarks such as “DSR vs. non-DSR.”
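Roughly speaking, the packet flow in DSR (LVS direct routing) mode looks like this:
Client -> VIP on the load balancer (LVS rewrites only the destination MAC address) -> real server
Client <- response returned directly by the real server, with the VIP as the source address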

3. Environment

In this post, I will explain the solution based on the following assumptions.
CentOS 7
Keepalived
ipvsadm
Firewalld
The explanation also follows the system configuration diagram below.
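The diagram itself is not reproduced here; the addressing assumed by the commands and configuration files below is:
Load Balancer 1 (MASTER, priority 101) and Load Balancer 2 (BACKUP, priority 100), both on eth0
Web Server 1: 10.0.0.3, Web Server 2: 10.0.0.4
Virtual IP (VIP): 10.0.0.5/24
Sorry server: 10.0.0.254:80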

4. Install

First, we install “Keepalived” on Load Balancer 1.
$ sudo yum -y install keepalived
Next, we install “Keepalived” on Load Balancer 2.
$ sudo yum -y install keepalived
Next, we install “ipvsadm” on Load Balancer 1.
$ sudo yum -y install ipvsadm
Next, we install “ipvsadm” on Load Balancer 2.
$ sudo yum -y install ipvsadm

5. Configuration

Next, we configure “firewalld” on Web Server 1. We start “firewalld” and enable it.
$ sudo systemctl start firewalld
$ sudo systemctl enable firewalld
$ sudo systemctl status firewalld
We configure “firewalld.” We open ports 22, 80, and 443 in the internal zone, and we add direct rules that redirect packets addressed to the server’s own address (10.0.0.3) and to the virtual IP address (10.0.0.5) to the local host. The rule for the VIP is what allows the web server to accept DSR packets whose destination IP is still the VIP, even though the VIP is not assigned to any of its interfaces.
$ sudo firewall-cmd --set-default-zone=internal
$ sudo firewall-cmd --add-port=22/tcp --zone=internal
$ sudo firewall-cmd --add-port=22/tcp --zone=internal --permanent
$ sudo firewall-cmd --add-port=80/tcp --zone=internal
$ sudo firewall-cmd --add-port=80/tcp --zone=internal --permanent
$ sudo firewall-cmd --add-port=443/tcp --zone=internal
$ sudo firewall-cmd --add-port=443/tcp --zone=internal --permanent
$ sudo firewall-cmd --direct --add-rule ipv4 nat PREROUTING 0 -d 10.0.0.3 -j REDIRECT
$ sudo firewall-cmd --permanent --direct --add-rule ipv4 nat PREROUTING 0 -d 10.0.0.3 -j REDIRECT
$ sudo firewall-cmd --direct --add-rule ipv4 nat PREROUTING 0 -d 10.0.0.5 -j REDIRECT
$ sudo firewall-cmd --permanent --direct --add-rule ipv4 nat PREROUTING 0 -d 10.0.0.5 -j REDIRECT
We reload “firewalld” and confirm the configuration.
$ sudo firewall-cmd --reload
$ sudo firewall-cmd --list-all-zones
$ sudo firewall-cmd --direct --get-rules ipv4 nat PREROUTING
We use the “telnet” command to verify that Web Server 1 responds on port 80.
$ sudo telnet 10.0.0.3 80
Next, we configure “firewalld” on Web Server 2. We start “firewalld” and enable it.
$ sudo systemctl start firewalld
$ sudo systemctl enable firewalld
$ sudo systemctl status firewalld
We configure “firewalld” in the same way, this time redirecting packets addressed to the server’s own address (10.0.0.4) and to the virtual IP address (10.0.0.5) to the local host.
$ sudo firewall-cmd --set-default-zone=internal
$ sudo firewall-cmd --add-port=22/tcp --zone=internal
$ sudo firewall-cmd --add-port=22/tcp --zone=internal --permanent
$ sudo firewall-cmd --add-port=80/tcp --zone=internal
$ sudo firewall-cmd --add-port=80/tcp --zone=internal --permanent
$ sudo firewall-cmd --add-port=443/tcp --zone=internal
$ sudo firewall-cmd --add-port=443/tcp --zone=internal --permanent
$ sudo firewall-cmd --direct --add-rule ipv4 nat PREROUTING 0 -d 10.0.0.4 -j REDIRECT
$ sudo firewall-cmd --permanent --direct --add-rule ipv4 nat PREROUTING 0 -d 10.0.0.4 -j REDIRECT
$ sudo firewall-cmd --direct --add-rule ipv4 nat PREROUTING 0 -d 10.0.0.5 -j REDIRECT
$ sudo firewall-cmd --permanent --direct --add-rule ipv4 nat PREROUTING 0 -d 10.0.0.5 -j REDIRECT
We reload “firewalld” and confirm the configuration.
$ sudo firewall-cmd --reload
$ sudo firewall-cmd --list-all-zones
$ sudo firewall-cmd --direct --get-rules ipv4 nat PREROUTING
We use the “telnet” command to verify that Web Server 2 responds on port 80.
$ sudo telnet 10.0.0.4 80
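Alternatively, if curl is installed, the same check can be made with an HTTP request against each web server:
$ curl -I http://10.0.0.3/
$ curl -I http://10.0.0.4/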
Next, we configure “Keepalived” on Load Balancer 1.
$ sudo cp -a /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.org
$ sudo vim /etc/keepalived/keepalived.conf
# Common Configuration Block
global_defs {
    notification_email {
        alert@example.com
    }
    notification_email_from lb1@example.com
    smtp_server mail.example.com
    smtp_connect_timeout 30
    router_id lb1.example.com
}

# Master Configuration Block
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 1
    priority 101
    nopreempt
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass foo
    }
    virtual_ipaddress {
        10.0.0.5/24 dev eth0
    }
}

# Virtual Server Configuration Block
virtual_server 10.0.0.5 80 {
    delay_loop 6
    lvs_sched rr
    lvs_method DR
    persistence_timeout 50
    protocol TCP
    sorry_server 10.0.0.254 80
    real_server 10.0.0.3 80 {
        weight 1
        inhibit_on_failure
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.0.0.4 80 {
        weight 1
        inhibit_on_failure
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
$ sudo systemctl start keepalived
If you want to prohibit automatic failback, do not enable automatic startup of “Keepalived”; the leading “:” below turns the enable command into a no-op. Note also that, according to the Keepalived documentation, “nopreempt” takes effect only when the initial state of both VRRP instances is BACKUP; with “state MASTER” it has no effect.
$ :sudo systemctl enable keepalived
$ sudo systemctl status keepalived
$ sudo ip addr
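On the MASTER, the output should show the virtual IP on eth0 in addition to the primary address (eth0 and 10.0.0.5 are the interface and VIP assumed in this post), for example:
$ sudo ip addr show dev eth0 | grep 10.0.0.5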
Next, we configure “Keepalived” on Load Balancer 2.
$ sudo cp -a /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.org
$ sudo vim /etc/keepalived/keepalived.conf
# Common Configuration Block
global_defs {
    notification_email {
        admin@example.com
    }
    notification_email_from lb2@example.com
    smtp_server mail.example.com
    smtp_connect_timeout 30
    router_id lb2.example.com
}

# Backup Configuration Block
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 1
    priority 100
    nopreempt
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass foo
    }
    virtual_ipaddress {
        10.0.0.5/24 dev eth0
    }
}

# Virtual Server Configuration Block
virtual_server 10.0.0.5 80 {
    delay_loop 6
    lvs_sched rr
    lvs_method DR
    persistence_timeout 50
    protocol TCP
    sorry_server 10.0.0.254 80
    real_server 10.0.0.3 80 {
        weight 1
        inhibit_on_failure
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.0.0.4 80 {
        weight 1
        inhibit_on_failure
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
$ sudo systemctl start keepalived
If you want to prohibit automatic failback, do not enable automatic startup of “Keepalived”; the leading “:” below turns the enable command into a no-op.
$ :sudo systemctl enable keepalived
$ sudo systemctl status keepalived
$ sudo ip addr
Next, we change the kernel parameters on Load Balancer 1.
$ sudo vim /etc/sysctl.conf
# Enable Packet Transfer between Interfaces
net.ipv4.ip_forward = 1

# Do not discard packets from networks that do not belong to the interface.
net.ipv4.conf.all.rp_filter = 0
We apply the kernel parameter settings.
$ sudo sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
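Note that the kernel uses the maximum of net.ipv4.conf.all.rp_filter and the per-interface value, so if the distribution defaults set the interface value to 1 you may also need to add something like the following (eth0 is the interface assumed in this post; the same applies to Load Balancer 2):
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.eth0.rp_filter = 0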
We start “ipvsadm.”
$ sudo touch /etc/sysconfig/ipvsadm
$ sudo systemctl start ipvsadm
If you want to prohibit automatic failback, do not enable automatic startup of “ipvsadm”; the leading “:” below turns the enable command into a no-op.
$ :sudo systemctl enable ipvsadm
$ sudo systemctl status ipvsadm
Next, we change the kernel parameters on Load Balancer 2.
$ sudo vim /etc/sysctl.conf
# Enable Packet Transfer between Interfaces
net.ipv4.ip_forward = 1

# Do not discard packets from networks that do not belong to the interface.
net.ipv4.conf.all.rp_filter = 0
We apply the kernel parameter settings.
$ sudo sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
We start “ipvsadm.”
$ sudo touch /etc/sysconfig/ipvsadm
$ sudo systemctl start ipvsadm
If you want to prohibit automatic failback, do not enable automatic startup of “ipvsadm”; the leading “:” below turns the enable command into a no-op.
$ :sudo systemctl enable ipvsadm
$ sudo systemctl status ipvsadm
We use the “ipvsadm” command to check the LVS virtual server settings on Load Balancer 1.
$ sudo ipvsadm -Ln
We use the “ipvsadm” command to check the LVS virtual server settings on Load Balancer 2.
$ sudo ipvsadm -Ln
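If Keepalived has registered the virtual server correctly, the output should look roughly like the following (addresses are those assumed in this post; “Route” indicates the DR forwarding method):
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.5:80 rr persistent 50
  -> 10.0.0.3:80                  Route   1      0          0
  -> 10.0.0.4:80                  Route   1      0          0
Finally, from a client machine on the network, we can confirm end-to-end delivery through the VIP:
$ curl -I http://10.0.0.5/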

6. Conclusion

In this way, the DSR technique mitigates the performance degradation under high load that is considered a weak point of software load balancers.

2017-11-04

Surrogate Key VS. Natural Key

The other day, I discussed "Surrogate Key VS. Natural Key" in a development project.

I sometimes come across such discussions.

This is a brush-up of a past post of mine, but I will present the best solution to this problem.

Furthermore, this applies not only to the topic in the title but also to the basic way of thinking about, and resolving, this type of discussion.

If you are struggling with this matter in RDBMS design, I hope it serves as a useful reference.


If we want to resolve this discussion, we must first correct our understanding of the surrogate key as an artificial key before getting into the main theme.

First of all, we have to clear up the misunderstanding in the "Surrogate Key VS. Natural Key" controversy that circulates in the world.

The true subject of this discussion should be "Artificial Key VS. Natural Key".

As you know, a natural key is a primary key designed from a single entity attribute or a combination of multiple entity attributes.

A surrogate key is a primary key designed as a substitute for a natural key when it is difficult to design a natural key.

An artificial key is a primary key that mechanically increments an integer value, irrespective of the natural key design.
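As a minimal illustration (the table and column names are hypothetical, and the identity syntax varies between RDBMS products), the difference looks like this:
-- Natural key: the business attributes themselves form the primary key.
CREATE TABLE order_line_natural (
    order_no   INTEGER     NOT NULL,
    line_no    INTEGER     NOT NULL,
    product_cd VARCHAR(13) NOT NULL,
    quantity   INTEGER     NOT NULL,
    PRIMARY KEY (order_no, line_no)
);

-- Artificial key: a mechanically incremented integer is the primary key,
-- and the natural key survives only as a unique constraint.
CREATE TABLE order_line_artificial (
    id         BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    order_no   INTEGER     NOT NULL,
    line_no    INTEGER     NOT NULL,
    product_cd VARCHAR(13) NOT NULL,
    quantity   INTEGER     NOT NULL,
    UNIQUE (order_no, line_no)
);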

Therefore, even natural key believers use a surrogate key as a matter of course when it is difficult to design a natural key.

However, it can be said that the artificial key faction hardly ever uses natural keys.

With the above, the misunderstanding behind the "Surrogate Key VS. Natural Key" controversy should be resolved.

If you try to advance the discussion while still holding this misunderstanding, the argument may go off track, so it is better to become aware of the misunderstanding first.

Therefore, hereinafter, I will call this topic "Artificial Key VS. Natural Key".


Natural key believers like natural keys for the beauty of the relational model and the pursuit of data design.

This tendency is common among engineers who grew up as DBAs with the good old design methods.

Meanwhile, the artificial key faction tends to favor artificial keys for reasons such as framework conventions, fewer SQL bugs, and simpler relations.

This tendency is common among programmers and engineers who grew up with recent speed-oriented design.

There are reasons why I chose the words "believer" and "faction" above, and I will explain them in detail later.

In RDBMS design, both sides of "Artificial Key VS. Natural Key" have merits and demerits.

If you are a top engineer, you clearly understand that the criteria for choosing a design must be based on the objectives and priorities of the project.

If you are suffering from the problem of this discussion, the solution is simple.

The only thing we should do is investigate the merits and demerits and judge according to the situation of the project.

That's it.

We should hear both opinions and draw on both sides' experience in light of the purpose of the project.

Therefore, it is never the case that either one is absolutely correct in all situations.

If we mistake proving the correctness of one opinion or the other for the purpose, the project's version of this discussion will probably never be settled.

If we debate at any level other than the purpose of the project, this sort of discussion quickly turns into a personal controversy.

Without a shared sense of the project's purpose, we end up judging by subjective impressions.

This is because, under its own premises, each position is correct.

For this reason, I used the words "believer" and "faction" as above.

Therefore, the only solution to this discussion is to align the members' sense of purpose within the project.

In other words, aligning that sense of purpose requires the "ability to see the essence" and the "capability to develop the organization".