2018-11-29

Distributed Fault Tolerant Cache System using GlusterFS & tmpfs

> to Japanese Pages

1. Summary

In this post I would like to introduce a distributed fault tolerant memory cache system using GlusterFS and tmpfs.

2. Introduction

In past posts, I introduced a file system use case with GlusterFS as a theme of distributed fault tolerant systems. In this post I would like to introduce a distributed fault tolerant memory cache system using GlusterFS and tmpfs. For the meaning of each keyword, please refer to the following.

* Distributed Fault Tolerant Computer System
* GlusterFS
* tmpfs
* Fault Tolerant
* Cache Memory

3. Environment

* CentOS-7
* GlusterFS-4.1.5
* tmpfs

4. Architecture

Two cache servers (cache1.example.com and cache2.example.com) each hold a tmpfs-backed brick and replicate it as a two-brick GlusterFS volume, and the web servers mount that volume as GlusterFS clients.

5. Cache Servers Configuration

5-1. Install GlusterFS

# Both Cache Servers 1 and 2
$ sudo yum -y install centos-release-gluster
$ sudo yum -y install glusterfs-server

5-2. Startup GlusterFS

# Both Cache Servers 1 and 2
$ sudo systemctl start glusterd
$ sudo systemctl enable glusterd
$ sudo systemctl status glusterd

5-3. Set GlusterFS server hosts

# Both Cache Servers 1 and 2
$ sudo vim /etc/hosts
10.0.0.1 cache1.example.com
10.0.0.2 cache2.example.com

5-4. Create GlusterFS storage pool

# Only Cache Server 1
$ sudo gluster peer probe cache2.example.com

5-5. Confirm GlusterFS storage pool

# Both Cache Servers 1 and 2
$ sudo gluster peer status

5-6. Set tmpfs

# Both Cache Servers 1 and 2
$ sudo mkdir /cache_server
$ sudo mount -t tmpfs -o size=512m tmpfs /cache_server

5-7. Set fstab for tmpfs

# Both Cache Servers 1 and 2
$ sudo vim /etc/fstab
tmpfs    /cache_server    tmpfs    defaults,size=512m    0 0

5-8. Create GlusterFS volume

# Only Cache Server 1
$ sudo gluster volume create server replica 2 cache1.example.com:/cache_server/ cache2.example.com:/cache_server/ force

5-9. Confirm GlusterFS volume information

# Both Cache Servers 1 and 2
$ sudo gluster volume info

5-10. Start GlusterFS volume

# Only Cache Server 1
$ sudo gluster volume start server

5-11. Confirm GlusterFS volume status

# Both Cache Servers 1 and 2
$ sudo gluster volume status

6. Cache Client Configuration

6-1. Install GlusterFS clients

# Both Web Servers 1 and 2
$ sudo yum -y install glusterfs glusterfs-fuse glusterfs-rdma

6-2. Set GlusterFS server hosts

# Both Web Servers 1 and 2
$ sudo vim /etc/hosts
10.0.0.1 cache1.example.com
10.0.0.2 cache2.example.com

6-3. Mount GlusterFS clients to GlusterFS servers

# Web Server 1
$ sudo mkdir /cache_client
$ sudo mount -t glusterfs cache1.example.com:/cache_server /cache_client
$ sudo df -Th
# Web Server 2
$ sudo mkdir /cache_client
$ sudo mount -t glusterfs cache2.example.com:/cache_server /cache_client
$ sudo df -Th

6-4. Set fstab for GlusterFS auto mount

# Web Server 1
$ sudo vim /etc/fstab
cache1.example.com:/cache_server       /cache_client   glusterfs       defaults,_netdev        0 0
# Web Server 2
$ sudo vim /etc/fstab
cache2.example.com:/cache_server       /cache_client   glusterfs       defaults,_netdev        0 0

6-5. Test GlusterFS replication

# Web Server 1
$ sudo touch /cache_client/test.txt
$ sudo ls /cache_client
# Web Server 2
$ sudo ls /cache_client
$ sudo rm /cache_client/test.txt
# Web Server 1
$ sudo ls /cache_client

7. Benchmark Test

The results of the benchmark test are reference values only. The test program below is written in Go.

7-1. Program Flow

1 MB Text
↓
# Cache System using GlusterFS and tmpfs
Repeat File Creating, Writing, Reading and Removing 1,000 Times
↓
# File System using GlusterFS and xfs
Repeat File Creating, Writing, Reading and Removing 1,000 Times
↓
Average Value of 10 Times Benchmark Test

7-2. Golang Program

# Web Server 1
package main

import (
 "fmt"
 "io/ioutil"
 "os"
 "time"
)

func main() {
 // Configure
 file_paths := []string {"/cache_client/test.txt", "/file_client/test.txt"}
 systems := []string {"Cache System", "File System"}
 results := []float64 {0, 0}
 benchmark_times := 10
 processing_times := 1000

 var content_string string
 for i := 0; i < 1000000; i++ {
  content_string += "a"
 }
 content_byte := []byte(content_string)

 for i := 0; i < benchmark_times; i++ {
   for j := range file_paths {
   // Get processing start datetime
   start_datetime := time.Now()
   for k := 0; k < processing_times; k++ {
    // Write file
    err := ioutil.WriteFile(file_paths[j], content_byte, 0644)
    if err != nil {
      fmt.Printf("File Writing Error: %s\n", err)
     os.Exit(1)
    }

    // Read file
    content_read, err := ioutil.ReadFile(file_paths[j])
    if err != nil {
      fmt.Printf("File Reading Error: %s%s\n", err, content_read)
     os.Exit(1)
    }

    // Remove file
    err = os.Remove(file_paths[j])
    if err != nil {
      fmt.Printf("File Removing Error: %s\n", err)
     os.Exit(1)
    }
   }
   // Get processing end datetime
   end_datetime := time.Now()

   // Get processing total time
   total_time := end_datetime.Sub(start_datetime)
   results[j] += total_time.Seconds()
   fmt.Printf("[%v] %v: %v\n", i, systems[j], total_time)
  }
 }

 for i, v := range results {
  average := v / float64(benchmark_times)
  fmt.Printf("%v Average: %vs\n", systems[i], average)
 }

 os.Exit(0)
}

7-3. Run Golang Program

# Web Server 1
$ go build main.go
$ ./main

7-4. Results

[0] Cache System: 16.180571409s
[0] File System: 16.302403193s
[1] Cache System: 15.93305082s
[1] File System: 16.61177919s
[2] Cache System: 16.311321483s
[2] File System: 16.393385347s
[3] Cache System: 16.036057793s
[3] File System: 16.740742882s
[4] Cache System: 16.139074157s
[4] File System: 16.754381782s
[5] Cache System: 16.151769414s
[5] File System: 16.90680323s
[6] Cache System: 16.340969528s
[6] File System: 16.693090068s
[7] Cache System: 16.177776325s
[7] File System: 16.961861504s
[8] Cache System: 16.226036092s
[8] File System: 16.638383153s
[9] Cache System: 16.622041061s
[9] File System: 16.887159942s
Cache System Average: 16.2618668082s
File System Average: 16.638999029100003s

8. Conclusion

In this way, a distributed fault tolerant cache system can be constructed with GlusterFS + tmpfs. Although the benchmark results are reference values only, we can see that performance improves. The next theme of this distributed fault-tolerant system series is “LizardFS,” a distributed fault-tolerant file system like GlusterFS. I thank Mark Mulrainey of LizardFS Inc., who gave me direct advice.

2018-11-10

Off-JT for Top Engineer's Way - #0. Introduction

> for Japanese Pages

1. About “Off-JT for Top Engineer's Way”

“How can I become a top engineer?” I often have opportunities to receive such questions at IT lectures and seminars. (The definition of “top” is not discussed here.) When I answer such questions for an audience, the time for questions and answers is limited, so I can only tell the audience the simplest things. In fact, however, there are various tricks. For example, when I work as an OJT trainer, I teach various things according to the situation. However, I cannot do OJT with every engineer seeking such answers. So I am making this “Off-JT for Top Engineer's Way” series of articles to tell you those tricks, hoping it helps you a little.

2. Target of “Off-JT for Top Engineer's Way”

* People aspiring to be an engineer
* Beginner and intermediate engineers
* Engineers struggling with growth
* Engineers aspiring to be a technical manager or a director
* Engineers aspiring to be a CTO or a CIO
* etc...

3. The First Theme of “Off-JT for Top Engineer's Way”

The first theme of the “Off-JT for Top Engineer's Way” series is scheduled to be “#1. Technical Memo.”

* Off-JT for Top Engineer's Way #0. Introduction
* Off-JT for Top Engineer's Way #1. Technical Memo

2018-05-20

Story of A NoSQL “IfDB” Project Suspended in 2004

> for Japanese Pages

Story of A NoSQL “IfDB” Project Suspended in 2004

1. A story over 14 years ago

The keyword NoSQL was first proposed in 1998. However, it did not catch on at all. Then, in early 2009, NoSQL gradually began to spread after someone re-proposed it. The other day, while sorting out my own source code, my eyes stopped on source code last updated in November 2004. It is scratch-built NoSQL source code, of the kind that in recent years would be called a “document-oriented DB.” NoSQL was first proposed in 1998; we were developing this from 2003 to 2004; NoSQL began to spread through the world in 2009. The popularization of NoSQL came much later. In short, this project predates the re-proposal of NoSQL by more than 5 years. That is a story of over 14 years ago. What I thought looking at this source code again is that such an approach to technical innovation is not wrong. One technology I developed at that time is also useful for my AI research now. I want to say “Don't give up” to such innovative engineers. And I would like to thank the engineers who developed NoSQL.

2. About NoSQL Project IfDB

Well, let me briefly outline this project. This project name has been changed several times as follows.
FileDB -> IniDB -> IfDB
At that time, we were not conscious of the word NoSQL at all. Rather, we probably did not even know of NoSQL. The CRUD processing of data flows as follows.
Array -> Query Command -> Ini File -> Array of Result Set
We also implemented SQL-like syntax. What we emphasized was enabling direct editing of the Ini File, rather than editing only via commands. This is because at that time JSON was not yet common. These ideas are still useful depending on the application, but I was busy with other projects and forgot about them over the last ten years. However, I remember that development speed and the improvement cycle increased very much using this method.
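
To make this flow concrete, here is a minimal sketch of the idea using Python's configparser. It is only my illustration: the original IfDB was not written in Python, and the file name and keys here are hypothetical.

# A hypothetical sketch of the IfDB idea: one document = one INI section.
import configparser

def put_document(path, doc_id, fields):
    # Array -> Ini File: write one document as one INI section
    config = configparser.ConfigParser()
    config.read(path)
    config[doc_id] = fields
    with open(path, 'w') as f:
        config.write(f)

def find_documents(path, key, value):
    # Query Command -> Array of Result Set: a naive equality match
    config = configparser.ConfigParser()
    config.read(path)
    return [dict(config[s]) for s in config.sections()
            if config[s].get(key) == value]

put_document('users.ini', 'user:1', {'name': 'alice', 'role': 'admin'})
print(find_documents('users.ini', 'role', 'admin'))

Because the data lives in a plain INI file, it can also be edited directly in a text editor, which is the point the project emphasized.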

3. The reason why I developed IfDB

Why did I develop this technology 14 years ago? I saw this source code again after a long time, and the concept is not much different from current document-oriented DBs. 14 years ago, I had doubts about waterfall-like data model design and schema restrictions in relational model design. This is because the scalability of a project is compromised by making the relational model and schema static. At that time, all the DBAs said in unison: “It's because the design of the data model is bad.” “Study data models more.” However, the more I learned about data models, the more my discomfort grew. I think I probably wanted to do agile development at that time, though I probably did not even know about agile development then. The initial proposals for agile development also date to the 1990s.

4. Conclusion

The de facto standard of a technology is a very important factor in business. However, the number of engineers who do not think about the simple question “why is the de facto standard important?” has increased. They say in unison, making excuses like: “not a de facto standard = evil” and “we don't need scratch development.” Do not forget that the products you use were once scratch development by me or by someone else. Both sustaining innovation and disruptive innovation are important. The times have always proved this.

2018-05-11

Operational Notifications Exception Design (ONED) for Web Applications by DevOps

> to Japanese Pages

1. Abstract

With the recent intensification of speed competition in IT service development, in this post I explain ONED, a new exception design approach for web applications based on DevOps, aimed at improving the improvement cycle of development speed, quality, and opportunity loss rate.

2. Representation specific to ONED

# ONED
Define “Operational Notifications Exception Design” as ONED.

# Event Exception Class
Define event-based exception classes such as SyntaxException and ArgumentException as event exception classes.

# Exception Trigger
Define the throw syntax of an exception as an exception trigger.

# Exception Instance
Define a mere instance of an exception class, not yet thrown, as an exception instance.

# Exception Captor
Define the catch, rescue, and except syntax of an exception as an exception captor.

3. Scope of this Article

The scope of this article is listed below.
  • Since this article is intended to explain the concept of ONED, it does not mention the exception mechanism itself.
  • To avoid misunderstanding, this article is limited to web applications.
  • This article excludes general-purpose libraries, general-purpose tools, and local applications from its subject.
  • Because the syntax differs by development language, you should adapt it as appropriate to your development language.
  • This article does not mention checked exceptions or unchecked exceptions.
  • I write only the main points, because I do not intend to write eight pages as in an academic paper.
  • We all occasionally want to think about the meaning of the word “exception,” but let's leave that to each language specification.

4. Introduction

What is the first functional requirement you undertake at the beginning of a web application implementation? I undertake exception design and its implementation. This is because a project whose exception design and implementation are delayed will be forced to refactor all source code in the latter part of the implementation, which has an adverse effect on everything. Once exception design and implementation are complete, I consider 50% of the web application implementation to be complete. The most important reason is to improve the cycle of development speed, quality, and opportunity loss. As the competition in development speed of IT services intensifies in recent years, improving development speed and quality control is indispensable for surviving in the market. In recent years, various service development concepts have spread, from marketing methods to management methods: Scrum development, DevOps, Growth Hack/Marketing, Lean Development, and DCPA (not PDCA). Common to all of these is the objective of promoting service growth and the improvement cycle. If these are the objectives, optimization of exception design is also an important factor that can contribute to these improvements. So, what should we do with exception design for these purposes? In this post, I would like to approach this theme from ONED. By the way, when I lecture on exception design to members, I often receive this question: “I don't know the difference between Exception and Error.” This is a very important question in exception design. For this question, see the previous article: Exception and Error. The most important point in exception design is that you can properly organize errors and exceptions within yourself.

5. Recommendation

5-1. Elements of Exception Class

The elements of the exception class recommended by ONED are as follows. (You should adapt the syntax to your development language.)
# Example.1:
InternalCriticalException("messages: HTTP_STATUS_CODE")

# Example.2:
FooException("InternalCritical: messages: HTTP_STATUS_CODE")
I explain the above points and their usefulness in order. For convenience of explanation, I will use Example 1.

5-2. the Point of Responsibility Division

# Example.1:
InternalCriticalException("messages: HTTP_STATUS_CODE")
The prefix of the exception class name in Example 1 means the point of responsibility division. In ONED, examples of recommended responsibility division points are as follows.

# Internal
Internal System: responsibility of the exception occurrence host. In this case, for example, the web server.

# External
External System: responsibility of an external system. For example, external systems such as Google APIs.

# Medial
Medial System: responsibility of owned resources. For example, DB servers viewed from a web server.

# Terminal
Client System: responsibility of the client system. For example, web clients, API clients, and user devices.

In this way, the responsibility division point clarifies who is responsible for the occurrence of the exception. Why do we need to clarify this? In the first place, clarifying the responsibility for the occurrence of a system failure is the basis of the initial response, and clarifying it during the initial response improves recovery speed. ONED is based on the idea of emphasizing the initial response of operations recovery. If you are a top engineer with rich application development and operations experience, you will understand this usefulness from this explanation alone. In short, it is important for developers to code with operations awareness. However, the above applies mainly to exceptions of high importance; the importance level is explained in the next section. As a rough illustration of the responsibility prefixes themselves, a minimal sketch follows.
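
This is my assumption in Python, not ONED's reference implementation; the class names simply follow the prefixes above.

# A hypothetical sketch of responsibility-prefixed exception classes.
class InternalException(Exception):
    pass  # responsibility of the exception occurrence host (e.g. the web server)

class ExternalException(Exception):
    pass  # responsibility of an external system (e.g. Google APIs)

class MedialException(Exception):
    pass  # responsibility of owned resources (e.g. a DB server seen from the web server)

class TerminalException(Exception):
    pass  # responsibility of the client system (e.g. web clients, user devices)

class InternalCriticalException(InternalException):
    pass  # Example.1: the responsibility prefix combined with an importance level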

5-3. Exception Level (Importance)

# Example.1:
InternalCriticalException("messages: HTTP_STATUS_CODE")
The second word above represents the exception level. Normally, a level such as a logging level indicates importance, and it is the same in ONED. However, I occasionally receive questions like these: “Which level of importance should I use?” “Which exceptions are important and which exceptions are unimportant?” “In order to master these, I need a lot of experience like yours.” It is not like that. Let's think about service operations again. For example, suppose you are implementing exception code like "FileNotFound". In that case, if this exception occurred after the service release, should you, as the development manager, receive an emergency mail? If it is a minor exception from which processing can continue by automatic recovery, such emergency mail notifications are only noise. If you were e-mailed on every exception, you would lose sight of important notifications, or no one would care about the notifications anymore. The importance level in ONED is nothing more than the definition of the automatic processing that must be executed during operations. In other words, when that exception occurs, what do you want the error handler to execute during service operations? Do you want logging? Do you want to display an error message to end users? Do you want to continue processing? Do you want to receive an emergency mail? Do you want to receive an emergency telephone call? Just think about these things. If the notifications turn out to be excessive, you simply adjust them after release. Exception Level = Importance = Processing Content at Operations. See the exception level example in the table below:
Level     Log  Display  Mail  Tel  Abort
Info      o    x        x     x    x
Notice    o    o        x     x    x
Warning   o    o        x     x    o
Partial   o    o        o     x    Partial
Error     o    o        o     x    x
Critical  o    o        o     o    o
Fatal     o    o        o     o    Shutdown
In this way, the importance level and the processing content are simply linked. The exception level should not be superficial; it should be practical for service operations. Therefore, we should be able to change the above settings for each service, for example as sketched below. In short, the operations viewpoint is important for exception coding.
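
The following Python dict is only my illustration of the table above; the key names are hypothetical.

# A hypothetical per-service mapping from exception level to operations actions.
EXCEPTION_LEVELS = {
    'Info':     {'log': True, 'display': False, 'mail': False, 'tel': False, 'abort': False},
    'Notice':   {'log': True, 'display': True,  'mail': False, 'tel': False, 'abort': False},
    'Warning':  {'log': True, 'display': True,  'mail': False, 'tel': False, 'abort': True},
    'Partial':  {'log': True, 'display': True,  'mail': True,  'tel': False, 'abort': 'partial'},
    'Error':    {'log': True, 'display': True,  'mail': True,  'tel': False, 'abort': False},
    'Critical': {'log': True, 'display': True,  'mail': True,  'tel': True,  'abort': True},
    'Fatal':    {'log': True, 'display': True,  'mail': True,  'tel': True,  'abort': 'shutdown'},
}

def actions_for(level):
    # Look up what the error handler should execute during operations.
    return EXCEPTION_LEVELS[level]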

5-4. Pros and Cons of Emergency Level

However, we should be careful: the exception level in ONED must be an importance level and must not be an emergency level. This is because ONED holds that urgency at the time of a web application exception should be an overall judgment: a comprehensive judgment of the service operations status, the resource situation, the extent of the impact on users, the consistency of the data, and so on. Even for the same exception, depending on these circumstances, the judgment may be not to respond urgently. Moreover, the word “emergency” defined at the development phase becomes meaningless at the time of operations recovery; urgency should be judged in the operations phase. Emergency levels appear even in the log levels recommended by web-oriented standards like PSR-3, where descriptions related to urgency are lined up side by side. Defining such urgency only at the development stage is evidence of development-driven thinking, and it is also a cause of siloed (vertical) development.

5-5. HTTP Status Code

In the case of exception handling of a Web application, the trigger should convey the HTTP status code so that the exception captor can properly return the HTTP status code to the HTTP client.
# PHP
throw new Exception('message', 500);

# Ruby
raise Exception.new("messages: code")
This is because the top-level captor cannot by itself accurately determine which status code should be returned to HTTP clients. This is particularly important for Web API servers, which must handle status codes sensitively.

5-6. Continuation and Interruption Processing of Exception

ONED does not hold a concept like “exception = interruption.” I feel that this stereotype is often found among web application engineers. Even for web applications, continuation after an exception is a very important factor. So, in the next section, I explain the way of thinking about exception continuation processing and examples of its implementation.

5-7. Implementation Examples of Continuation Processing

Continuation processing in ONED means executing the necessary recovery and notification processing after an exception occurs and then returning to normal processing; how it is realized does not matter. Examples of implementation are shown below.

# Retry Mechanism
Retry the exception block after the continuation processing of the exception, using a retry mechanism as in Ruby or a Python package.

# Goto Mechanism
Return processing to a label declared near the exception trigger after the continuation processing of the exception, using a goto mechanism as in the C family, PHP, Go, or a Python package.

# Exception Block
Make use of exception blocks in various ways for recovery/continuation processing. (In the case of Java, checked exceptions.)

# Wrapping Recovery/Continuation in an Exception Class
Wrap recovery/continuation processing within an exception class. The misunderstanding about “instance and throw” held by some engineers is described later.
# Exceptions are captured at the top level.
if (! is_file("/dir/file.txt")) {
    // Instance
    $FileNotFound = new FileNotFoundException("Could not find the file.", 500);
    // Recovery Processing
    if (! $FileNotFound->makeFile("/dir/file.txt")) {
        // Interruption Processing
        throw $FileNotFound;
    }
    // Continuation Processing
    $FileNotFound->log("Notice");
    $FileNotFound->mail("Notice");
}
// Standard Processing
# Wrapping Continuation/Interruption in an Exception Class
Wrap continuation/interruption processing within the exception class itself.
# Exceptions are captured at the top level.
if (! is_file("/dir/file.txt")) {
    // try Recovery Processing
    if (! touch("/dir/file.txt")) {
        // Interruption Processing
        new InternalCriticalException("Could not create the file.", 500);
    }
    // Continuation Processing
    new InternalNoticeException("Could not find the file.", 500);
}
// Standard Processing
class InternalCriticalException extends X
{
    const LOG = true;
    const DISPLAY = true;
    const MAIL = true;
    const ABORT = true;
    const TEL = true;
    const SHUTDOWN = false;
}
class X extends Exception
{
    public function __construct()
    {
        parent::__construct();
        //...
    }

    protected function _log()
    {
        //...
    }

    protected function _display()
    {
        //...
    }

    protected function _mail()
    {
        //...
    }

    protected function _tel()
    {
        //...
    }

    protected function _abort()
    {
        throw $this;
    }

    protected function _shutdown()
    {
        //...
    }
}

5-8. Last Captor

In web application development with ONED, we think that you should control all exceptions, both expected and unexpected, with a top-level captor.
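
As a minimal self-contained illustration, here is my sketch of such a last captor in Python; it is an assumption, not ONED reference code, and the KnownServiceException class and handler are hypothetical.

# A hypothetical top-level ("last") captor.
class KnownServiceException(Exception):
    # Hypothetical base class for all expected service exceptions.
    def __init__(self, message, http_code):
        super().__init__(message)
        self.http_code = http_code

def handle_request():
    # Application logic; expected exceptions carry an HTTP status code.
    raise KnownServiceException('Too Many Requests', 429)

def application():
    # The top-level captor: every exception, expected or not, is decided here.
    try:
        return 200, handle_request()
    except KnownServiceException as e:
        return e.http_code, str(e)           # expected: use the code from the trigger
    except Exception:
        return 500, 'Internal Server Error'  # unexpected: a safe default

print(application())  # (429, 'Too Many Requests')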

6. Pros and Cons of Exception Handling

6-1. Misunderstanding of The Throw Keyword

throw ≠ instance
Very rarely, there are programmers who have a strange misunderstanding about the “throw” keyword of exceptions. For example, the belief that a subclass inheriting a throwable class must be thrown at the same moment it is instantiated is a very strange misunderstanding. This is typical of catalog engineers who do not understand exceptions at all. Do you know the meaning of “catalog engineer”? It is a negative evaluation that was frequently used in the automobile manufacturing industry, for example at Toyota. If such a constraint existed, would we not be allowed to re-throw an already-instanced object? The “throw” keyword is simply syntax that throws the object; its timing is arbitrary. Whether to throw the object at all is also arbitrary.
throw new FooException("messages");
Such syntax merely throws an object at the same time as instantiating it. There is no such restriction in the C family, Python, Java, Ruby, Perl, PHP, JavaScript, or any other language. Such stubborn justification is nonsense.
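
In Python terms, for example, the same point looks like this (FileNotFoundError is a built-in exception class; the recovery flag is hypothetical):

# Instantiating and throwing are independent steps.
e = FileNotFoundError('could not find the file')  # instance only; nothing is thrown yet
recovered = True  # hypothetical result of a recovery attempt
if not recovered:
    raise e       # the throw is a separate, optional, later decision
print('continued without throwing the instance')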

6-2. Non-Local Exits

Non-Local Exits ≠ Error Destruction
I sometimes see negative articles claiming “code like the following is bad”:
// Java Example:
try {
    // processing
    // ...
} catch (FooException e) {}
Certainly, this can be the worst kind of code. If its purpose is to destroy exceptions and errors, it is guilty. If the author simply forgot to write the handler, we should give feedback. If the author is inexperienced in coding non-local exits, we should show the best way in that development language. What should really be improved is the unfounded stereotype that “empty catch block = evil.”
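
For example, a deliberate non-local exit can carry its justification in a comment. A minimal Python sketch follows; the exception class and scenario are hypothetical.

class BestEffortCleanupError(Exception):
    pass

def cleanup():
    # A best-effort step whose failure is harmless by design.
    raise BestEffortCleanupError('temp file already gone')

try:
    cleanup()
except BestEffortCleanupError:
    pass  # intentionally ignored: cleanup is best-effort and failure is harmless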

6-3. Event Exception Class

FileNotFoundException
ONED defines exception classes such as FileNotFoundException and ArgumentException as event exception classes. What we have to note is that there are exceptions to be continued and exceptions to be interrupted even for the same FileNotFound event. We also have to be careful that such class names take a development-driven perspective. Personally, I recommend naming from the operations viewpoint.

7. Conclusion

In this way, ONED is an exception design based on DevOps. In fact, for all web services that introduced ONED, development speed, quality, and the service growth cycle improved dramatically. This article is a version of my research paper adapted for practical use.

2018-04-30

Exception and Error

> to Japanese Pages

Exception and Error

1. Summary

One day, I was explaining the error architecture of an application in one project. However, before explaining the design, I had to explain the definitions of “exception” and “error” to the project members. In this post, I give an outline of that explanation. FYI.

2. Introduction

Can you easily explain the difference between Exception and Error? Just the other day in a project, I received the question “I don't know the difference between Exception and Error.” I receive this question very often. In the past, there were also programmers complaining about non-interrupting exception handling. In addition, there were some engineers who thought “unexpected error = Error” and “expected error = Exception.” Furthermore, there were some engineers who thought “purpose achieved = Exception” and “purpose not achieved = Error.” The answer to the question “What is the difference between Exception and Error?” depends on which field you are talking about. I will explain it now.

3. Exception and Error

First of all, you have to know that there is no definition of the difference between Exception and Error that holds universally. There are many engineers who do not understand this; therefore, misunderstandings like those in the introduction occur. In other words, it is impossible to define the difference between an error and an exception clearly as a universal concept, because the definition of “exception” and “error” differs by field due to historical background. Conversely, if it is not universal, field-specific definitions may exist. For example, Java is much clearer than other languages, but the difference from C++ is large. If you are discussing the exception and error specifications of one specific field, you should follow the specifications of that field. What if it is not defined? The fact that it is not defined means either that it was decided not to define it or that there is room for interpretation. Regarding the pros and cons of non-interrupting exception handling mentioned above: in Ruby, there is a retry mechanism in the exception mechanism, which means returning into the failed processing from within exception handling. In Python, a retry package is distributed. In Java and C++, a retry mechanism is not officially supported, but I know of a Java project that implements a retry mechanism on its own. Even in PHP, if you follow the constraints of the goto mechanism, returning from exception handling can be implemented. Thus, the specification of “exception and error” in a particular language does not necessarily apply to other fields, and in an undefined field, enforcing the concept of one specific field is nonsense. Then what should we think in a wider range of system designs? It is fine to define it as necessary. The same is true for the error design of the application mentioned in the summary.
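
For illustration, the retry idea can be written in plain Python without any package. This is only my sketch; flaky_call is a hypothetical function standing in for unstable processing.

# Returning into the failed processing, like Ruby's retry keyword.
import random

def flaky_call():
    if random.random() < 0.5:
        raise ConnectionError('transient failure')
    return 'ok'

max_attempts = 3
for attempt in range(1, max_attempts + 1):
    try:
        result = flaky_call()
        break                  # success: leave the retry loop
    except ConnectionError:
        if attempt == max_attempts:
            raise              # give up: let the exception propagate
        # otherwise fall through and retry
print(result)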

4. Conclusion

There is no universally established definition of the difference between exceptions and errors; the definition depends on each field. If it is defined, it suffices to follow that definition. If it is not defined, it is best to interpret or define it yourself.

2018-03-11

Space Camel Comment Style & Layer Comment Style

> for Japanese Pages

Space Camel Comment Style & Layer Comment Style

1. Summary

In this post I propose Space Camel Comment Style and Layer Comment Style as ways of thinking about programming comment style. FYI.

2. Introduction

Comment statements in programming source code are one of the important elements of a scalable system. If a project has no comment standards, the comment style varies from programmer to programmer. I also often hear the following from many programmers: “I do not know what to write in comments.” “I do not know the granularity of comments.” In this post I propose Space Camel Comment Style and Layer Comment Style as ways of thinking about programming comment style. I will not mention FooDoc, annotations, or comment styles other than those for the logic part. I will also not discuss the presence or absence of comments in elegant code.

3. Comment Style

3-1. Space Camel Comment Style
Because I could not find a general definition, I define this style here as “Space Camel Comment Style.” It is a camel-case-like comment style, modeled on an object-oriented method name, with spaces put between the words of the method name. For example:
# set Controllers and View Resources
Here is why this is useful. In programming and the IT industry, it is not unusual to omit English articles in naming. (I do not discuss countable nouns, uncountable nouns, or plurals here.) Although memory constraints on identifier length are a thing of the past, applying strict English grammar to naming greatly degrades readability, and names often have no relation to the surrounding context. Top programmers accustomed to simple naming for readability should feel no discomfort with this format. The trick is to think about the process flow in terms of behavior, similar to the method approach. Just as when you extract logic into a method, the scope of a comment is determined by the range of processing that makes up one behavior. You write the comment as if you were naming a method for that behavior. This clarifies the granularity and style of your comments. Also, since comments are not affected by reusability or name conflicts like method names are, you can write them much more flexibly than method names. However, this does not apply when you have to explain a lot in a comment. In that case, for example, I might write as follows:
# Notice: description...
3-2. Layer Comment Style
Because I could not find a general definition, I define this style here as “Layer Comment Style.” In source code, even at the same indentation level, the layer of processing may differ. In such a case, a comment style conscious of the processing layer, like markdown headings, is useful. For example:
# set Controllers and View Resources
processing...
## for Controllers
processing...
## for View
processing...
Thus, by making comments multi-layered, you may be able to increase the readability of the layers of behavior.
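
Putting the two styles together, a short hypothetical Python example might look like this (the resource names are invented):

# set Controllers and View Resources
## for Controllers
controllers = {'home': 'HomeController', 'user': 'UserController'}
default_controller = controllers['home']

## for View
view_resources = {'layout': 'default.html', 'charset': 'utf-8'}

print(default_controller, view_resources['layout'])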

4. Conclusion

In this post, I proposed two programming comment styles: Space Camel Comment Style and Layer Comment Style. I think Space Camel Comment Style is also applicable to commit messages. Whether your programming orientation is object-oriented or procedural, low-layer processing has to be written somewhere. These comment styles would be particularly useful for describing such low-layer processing.

2017-12-17

Distributed Parallel Fault Tolerant File System with GlusterFS

> to Japanese Pages

1. Summary

In this post, I explain “GlusterFS” as one of the source code synchronization solutions between web servers in a clustered environment. With this solution, differences in source code due to deployment time lag do not occur between web servers. In addition, since “GlusterFS” is a distributed parallel fault tolerant file system, the effective range of this solution is not limited to web servers. Depending on how you use it, you can build a fault tolerant file system on any large system.

2. GlusterFS Introduction

Are you using “rsync” or “lsyncd” to synchronize the file system between the nodes of a business cluster environment? To make the story clearer, I will use web servers as an example, but this issue is not limited to web servers. There are several ways to synchronize project source code between web servers in a cluster environment. First, some bad know-how. For example, I often hear of synchronizing to each node with a shell script that runs “rsync.” Even with manual deployment to each node, the problems will be few if the system is small. However, even if synchronization is automated with “cron” at the shortest interval, source code differences will exist for up to one minute. I also sometimes hear of automatically detecting source code changes with “lsyncd” and synchronizing incrementally to each node; however, this method may take several tens of seconds at the shortest before synchronization completes. Furthermore, these methods are unidirectional synchronization, so there is no guarantee of data consistency. I also hear quite often of automatic deployment to each node using CI tools. However, that only closes the time difference between manual and automatic, which is not a fundamental solution. If these synchronization processes are performed on each node serially, there will be a time difference of “number of nodes x time difference” until synchronization completes; it would be better to do it at least in parallel. If this status is not a problem for UX, data management, or other aspects, this post will be useless to you. If it is a problem, there are a number of solutions, and one of them is “GlusterFS.” GlusterFS is a distributed parallel fault tolerant file system. One advantage of GlusterFS is that fault-tolerant designs, such as file system distribution, synchronization, and capacity increase/decrease, can be realized with no system stop. Naturally, synchronization is bidirectional, and there is no concept of master and slave. However, you should not place files in this volume that will be kept locked by a daemon. If you do not mistake its usage, GlusterFS exerts great power. In this post, I explain how to implement GlusterFS. I will not present actual measurements of sync speed, so you should implement it and judge for yourself.

3. GlusterFS Architecture

The following figure is a rough concept of GlusterFS.
In addition, the following figure is the structure example used in this post.
No separate volume server cluster is prepared; it is a simple self-contained structure. Each web server is both a volume server and a client, and each client mounts and connects to its own volume. Naturally, it is possible to change the system configuration by increasing or decreasing bricks.

4. GlusterFS Environment

* CentOS-7
* GlusterFS-3.12

5. GlusterFS Servers Configuration

5-1. Install GlusterFS servers

# Both Web Server 1 and 2
$ sudo yum -y install centos-release-gluster
$ sudo yum -y install glusterfs-server

5-2. Startup GlusterFS servers

# Both Web Server 1 and 2
$ sudo systemctl start glusterd
$ sudo systemctl enable glusterd
$ sudo systemctl status glusterd

5-3. Set GlusterFS server hosts

# Both Web Server 1 and 2
$ sudo vim /etc/hosts
10.0.0.1 web1.example.com
10.0.0.2 web2.example.com

5-4. Create GlusterFS storage pool

# Only Web Server 1
$ sudo gluster peer probe web2.example.com

5-5. Confirm GlusterFS storage pool

# Both Web Server 1 and 2
$ sudo gluster peer status

5-6. Create GlusterFS volume

# Only Web Server 1
$ sudo gluster volume create server replica 2 web1.example.com:/server/ web2.example.com:/server/ force

5-7. Confirm GlusterFS volume information

# Both Web Server 1 and 2
$ sudo gluster volume info

5-8. Start GlusterFS volume

# Only Web Server 1
$ sudo gluster volume start server

5-9. Confirm GlusterFS volume status

# Both Web Server 1 and 2
$ sudo gluster volume status

6. GlusterFS Clients Configuration

6-1. Install GlusterFS clients

# Both Web Server 1 and 2
$ sudo yum -y install glusterfs glusterfs-fuse glusterfs-rdma

6-2. Mount Client to Server

# Web Server 1
$ sudo mkdir /client
$ sudo mount -t glusterfs web1.example.com:/server /client
$ sudo df -Th
# Web Server 2
$ sudo mkdir /client
$ sudo mount -t glusterfs web2.example.com:/server /client
$ sudo df -Th

6-3. Auto mount GlusterFS server

# Web Server 1
$ sudo vim /etc/fstab
web1.example.com:/server       /client   glusterfs       defaults,_netdev        0 0
# Web Server 2
$ sudo vim /etc/fstab
web2.example.com:/server       /client   glusterfs       defaults,_netdev        0 0

6-4. Test GlusterFS replication

# Web Server 1
$ cd /client
$ sudo touch test.txt
$ sudo ls
# Web Server 2
$ cd /client
$ sudo ls
$ sudo rm test.txt
# Web Server 1
$ sudo ls

7. GlusterFS Conclusion

In this post, I explained “GlusterFS” as one of the source code synchronization solutions between web servers in a clustered environment. If you use this solution, differences in source code due to deployment time lag will not occur between web servers. Once this foundation is in place, we no longer have to rely desperately on CI tools. In addition, since GlusterFS is a distributed parallel fault tolerant file system, the effective range of this solution is not limited to web servers. Depending on how you use it, you can build a fault tolerant file system on any large system.

2017-12-13

Seconds Access Limiter for Web API with Python, PHP, Ruby, and Perl

> to Japanese Pages

1. Summary

In this article, I describe an access limitation solution that is often required for Web APIs. I illustrate the “One-Second Access Limiter,” one of the access limit solutions, using sample code in the Python, PHP, Ruby, and Perl interpreter languages.

2. Introduction

In a Web API service development project, we may be presented with requirements such as “access limitation within a certain period”; for example, a requirement that the Web API return the HTTP status code “429 Too Many Requests” when the number of accesses is exceeded. The designers and developers will be forced to make this processing fast and light, because if reducing resource load is the purpose of access limitation, it is meaningless if the limiting logic itself increases the load. In addition, when the reference time window is short and accurate results are required, the algorithm must be accurate. If you have experience developing a Web Application Firewall (WAF), you already know these things. There are many access limitation solutions in the world, but in this post I provide a sample “One-Second Access Limiter” as one of them.

3. Requirements

“Access limitation up to N times per second”
1. If access exceeds N times per second, return the HTTP status code “429 Too Many Requests” and block the access.
2. The numerical value assigned to “N” depends on the specification of the project.
3. Because this is access control over a one-second window, this processing must not become a bottleneck of access processing capability.

4. Key Points of Architectures

Even from the above requirements alone, it is clear that this processing must be as fast and as light as possible.

# Prohibition of Use of Web Application Framework

Even if you are using a lightweight framework, loading the framework incurs a large load. Therefore, this process should be implemented “before processing enters the framework.”

# Libraries Loading

In order to minimize the load due to library loading, the implementation should focus on built-in processing.

# Exception/Error Handling

Increasing the load by relying on the framework for exception and error handling makes no sense. These should be implemented simply, in low-level code.

# Data Resource Selection

It is better to avoid heavyweight data resources like an RDBMS, but for this requirement “eventual consistency” is not a good idea either. Realizing it with a load balancer or reverse proxy is also one solution, but the deeper into the application layer the handling goes, the more processing cost the whole communication incurs. Semi-synchronized resources such as a memory cache or lightweight NoSQL are options, but in this post I use the file system as the data resource. To avoid wait processing such as file locking, access is controlled by file names and the number of files. However, in the case of a cluster environment, a data synchronization solution is necessary.

5. Environments

The OS for the sample code is Linux. I prepared Python, PHP, Ruby, and Perl as the sample code languages.

# "Python-3" Sample Code
# "PHP-5" Sample Code
# "Ruby-2" Sample Code
# "Perl-5" Sample Code

6. "Python" Sample Code

Seconds Access Limiter with Python. Version: Python-3
#!/usr/bin/python
# coding:utf-8

import time
import datetime
import cgi
import os
from pathlib import Path
import re
import sys
import inspect
import traceback
import json

# Definition
def limitSecondsAccess():
    try:
        # Init
        ## Access Timestamp Build
        sec_usec_timestamp = time.time()
        sec_timestamp = int(sec_usec_timestamp)

        ## Access Limit Default Value
        ### Depends on Specifications: For Example 10
        access_limit = 10

        ## Roots Build
        ### Depends on Environment: For Example '/tmp'
        tmp_root = '/tmp'
        access_root = os.path.join(tmp_root, 'access')

        ## Auth Key
        ### Depends on Specifications: For Example 'app_id'
        auth_key = 'app_id'

        ## Response Content-Type
        ### Depends on Specifications: For Example JSON and UTF-8
        response_content_type = 'Content-Type: application/json; charset=utf-8'

        ### Response Bodies Build
        ### Depends on Design
        response_bodies = {}

        # Authorized Key Check
        query = cgi.FieldStorage()
        auth_id = query.getvalue(auth_key)
        if not auth_id:
            raise Exception('Unauthorized', 401)
    
        # The Auth Root Build
        auth_root = os.path.join(access_root, auth_id)

        # The Auth Root Check
        if not os.path.isdir(auth_root):
            # The Auth Root Creation
            os.makedirs(auth_root, exist_ok=True)

        # An Access File Creation Using a Micro Timestamp
        ## For example, other data resources such as memory cache or RDB transaction.
        ## In the case of this sample code, it is lightweight because it does not require file locking and transaction processing.
        ## However, in the case of a cluster configuration, file system synchronization is required.
        access_file_path = os.path.join(auth_root, str(sec_usec_timestamp))
        path = Path(access_file_path)
        path.touch()

        # The Access Counts Check
        access_counts = 0
        for base_name in os.listdir(auth_root):
            ## An Access File Path Build
            file_path = os.path.join(auth_root, base_name)

            ## Not File Type
            if not os.path.isfile(file_path):
                continue

            ## The Base Name Data Type Casting
            base_name_sec_usec_timestamp = float(base_name)
            base_name_sec_timestamp = int(base_name_sec_usec_timestamp)

            ## Same Seconds Timestamp
            if sec_timestamp == base_name_sec_timestamp:

                ### An Overtaken Processing
                if sec_usec_timestamp < base_name_sec_usec_timestamp:
                    continue

                ### Access Counts Increment
                access_counts += 1

                ### Too Many Requests
                if access_counts > access_limit:
                    raise Exception('Too Many Requests', 429)

                continue

            ## Past Access Files Garbage Collection
            if sec_timestamp > base_name_sec_timestamp:
                os.remove(file_path)

    except Exception as e:
        # Exception Args to HTTP Status Code
        # (guard against built-in exceptions that carry only one arg)
        http_status = e.args[0]
        http_code = e.args[1] if len(e.args) > 1 else 0

        # 4xx
        if http_code >= 400 and http_code <= 499:
            # logging
            ## snip...
            pass
        # 5xx
        elif http_code >= 500:
            # logging
            # snip...

            ## The Exception Message to HTTP Status
            http_status = 'foo'
        else:
            # Logging
            ## snip...

            # HTTP Status Code for The Response
            http_status = 'Internal Server Error'
            http_code = 500

        # Response Headers Feed
        print('Status: ' + str(http_code) + ' ' + http_status)
        print(response_content_type + "\n\n")

        # A Response Body Build
        response_bodies['message'] = http_status
        response_body = json.dumps(response_bodies)

        # The Response Body Feed
        print(response_body)

# Excecution
limitSecondsAccess()

7. "PHP" Sample Code

Seconds Access Limiter with PHP. Version: PHP-5
<?php
# Definition
function limitSecondsAccess()
{
    try {
        # Init
        ## Access Timestamp Build
        $sec_usec_timestamp = microtime(true);
        list($sec_timestamp, $usec_timestamp) = explode('.', $sec_usec_timestamp);

        ## Access Limit Default Value
        ### Depends on Specifications: For Example 10
        $access_limit = 10;

        ## Roots Build
        ### Depends on Environment: For Example '/tmp'
        $tmp_root = '/tmp';
        $access_root = $tmp_root . '/access';

        ## Auth Key
        ### Depends on Specifications: For Example 'app_id'
        $auth_key = 'app_id';

        ## Response Content-Type
        ## Depends on Specifications: For Example JSON and UTF-8
        $response_content_type = 'Content-Type: application/json; charset=utf-8';

        ## Response Bodies Build
        ### Depends on Design
        $response_bodies = array();

        # Authorized Key Check
        if (empty($_REQUEST[$auth_key])) {
            throw new Exception('Unauthorized', 401);
        }
        $auth_id = $_REQUEST[$auth_key];

        # The Auth Root Build
        $auth_root = $access_root . '/' . $auth_id;

        # The Auth Root Check
        if (! is_dir($auth_root)) {
            ## The Auth Root Creation
            if (! mkdir($auth_root, 0775, true)) {
                throw new Exception('Could not create the auth root. ' . $auth_root, 500);
            }
        }

        # An Access File Creation Using a Micro Timestamp
        /* For example, other data resources such as memory cache or RDB transaction.
         * In the case of this sample code, it is lightweight because it does not require file locking and transaction processing.
         * However, in the case of a cluster configuration, file system synchronization is required.
         */
        $access_file_path = $auth_root . '/' . strval($sec_usec_timestamp);
        if (! touch($access_file_path)) {
            throw new Exception('Could not create the access file. ' . $access_file_path, 500);
        }

        # The Auth Root Scanning
        if (! $base_names = scandir($auth_root)) {
            throw new Exception('Could not scan the auth root. ' . $auth_root, 500);
        }

        # The Access Counts Check
        $access_counts = 0;
        foreach ($base_names as $base_name) {
            ## A current or parent dir
            if ($base_name === '.' || $base_name === '..') {
                continue;
            }

            ## An Access File Path Build
            $file_path = $auth_root . '/' . $base_name;

            ## Not File Type
            if (! is_file($file_path)) {
                continue;
            }

            ## The Base Name to Integer Data Type
            $base_name_sec_timestamp = intval($base_name);

            ## Same Seconds Timestamp
            if ($sec_timestamp === $base_name_sec_timestamp) {
            
                ## The Base Name to Float Data Type
                $base_name_sec_usec_timestamp = floatval($base_name);

                ### An Overtaken Processing
                if ($sec_usec_timestamp < $base_name_sec_usec_timestamp) {
                    continue;
                }

                ### Access Counts Increment
                $access_counts++;

                ### Too Many Requests
                if ($access_counts > $access_limit) {
                    throw new Exception('Too Many Requests', 429);
                }

                continue;
            }

            ## Past Access Files Garbage Collection
            if ($sec_timestamp > $base_name_sec_timestamp) {
                @unlink($file_path);
            }
        }
    } catch (Exception $e) {
        # The Exception to HTTP Status Code
        $http_code = $e->getCode();
        $http_status = $e->getMessage();

        # 4xx
        if ($http_code >= 400 && $http_code <= 499) {
            # logging
            ## snip...
        # 5xx
        } else if ($http_code >= 500) {
            # logging
            ## snip...

            # The Exception Message to HTTP Status
            $http_status = 'foo';
        # Others
        } else {
            # Logging
            ## snip...

            # HTTP Status Code for The Response
            $http_status = 'Internal Server Error';
            $http_code = 500;
        }

        # Response Headers Feed
        header('HTTP/1.1 ' . $http_code . ' ' . $http_status);
        header($response_content_type);

        # A Response Body Build
        $response_bodies['message'] = $http_status;
        $response_body = json_encode($response_bodies);
        
        # The Response Body Feed
        exit($response_body);
    }
}

# Execution
limitSecondsAccess();
?>

8. "Ruby" Sample Code

Seconds Access Limiter with Ruby. Version: Ruby-2
#!/usr/bin/ruby
# -*- coding: utf-8 -*-

require 'time'
require 'fileutils'
require 'cgi'
require 'json'

# Definition
def limitSecondsAccess

    begin
        # Init
        ## Access Timestamp Build
        time = Time.now
        sec_timestamp = time.to_i
        sec_usec_timestamp_string = "%10.6f" % time.to_f
        sec_usec_timestamp = sec_usec_timestamp_string.to_f

        ## Access Limit Default Value
        ### Depends on Specifications: For Example 10
        access_limit = 10

        ## Roots Build
        ### Depends on Environment: For Example '/tmp'
        tmp_root = '/tmp'
        access_root = tmp_root + '/access'

        ## Auth Key
        ### Depends on Specifications: For Example 'app_id'
        auth_key = 'app_id'

        ## Response Content-Type
        ### Depends on Specifications: For Example JSON and UTF-8
        response_content_type = 'application/json'
        response_charset = 'utf-8'

        ## Response Bodies Build
        ### Depends on Design
        response_bodies = {}

        # Authorized Key Check
        cgi = CGI.new
        if ! cgi.has_key?(auth_key) then
            raise 'Unauthorized:401'
        end
        auth_id = cgi[auth_key]

        # The Auth Root Build
        auth_root = access_root + '/' + auth_id

        # The Auth Root Check
        if ! FileTest::directory?(auth_root) then
            # The Auth Root Creation (FileUtils.mkdir_p raises on failure
            # rather than returning false)
            FileUtils.mkdir_p(auth_root, :mode => 0775)
        end

        # An Access File Creation Using a Micro Timestamp
        ## For example, other data resources such as memory cache or RDB transaction.
        ## In the case of this sample code, it is lightweight because it does not require file locking and transaction processing.
        ## However, in the case of a cluster configuration, file system synchronization is required.
        access_file_path = auth_root + '/' + sec_usec_timestamp.to_s
        ## FileUtils.touch raises on failure rather than returning false
        FileUtils.touch(access_file_path)

        # The Access Counts Check
        access_counts = 0
        Dir.glob(auth_root + '/*') do |access_file_path|

            # Not File Type
            if ! FileTest::file?(access_file_path) then
                next
            end

            # The File Path to The Base Name
            base_name = File.basename(access_file_path)

            # The Base Name to Integer Data Type
            base_name_sec_timestamp = base_name.to_i

            # Same Seconds Timestamp
            if sec_timestamp == base_name_sec_timestamp then

                ### The Base Name to Float Data Type
                base_name_sec_usec_timestamp = base_name.to_f

                ### An Overtaken Processing
                if sec_usec_timestamp < base_name_sec_usec_timestamp then
                    next
                end

                ### Access Counts Increment
                access_counts += 1

                ### Too Many Requests
                if access_counts > access_limit then
                    raise 'Too Many Requests:429'
                end

                next
            end

            # Past Access Files Garbage Collection
            if sec_timestamp > base_name_sec_timestamp then
                File.unlink access_file_path
            end
        end

        # The Response Feed
        cgi.out({
            ## Response Headers Feed
            'type' => 'text/html',
            'charset' => response_charset,
        }) {
            ## The Response Body Feed
            ''
        }

    rescue => e
        # Exception to HTTP Status Code
        messages = e.message.split(':')
        http_status = messages[0]
        http_code = messages[1] || '0'

        # 4xx
        if http_code >= '400' && http_code <= '499' then
            # logging
            ## snip...
        # 5xx
        elsif http_code >= '500' then
            # logging
            ## snip...

            # The Exception Message to HTTP Status
            http_status = 'foo'
        else
            # Logging
            ## snip...

            # HTTP Status Code for The Response
            http_status = 'Internal Server Error'
            http_code = '500'
        end

        # The Response Body Build
        response_bodies['message'] = http_status
        response_body = JSON.generate(response_bodies)

        # The Response Feed
        cgi.out({
            ## Response Headers Feed
            'status' => http_code + ' ' + http_status,
            'type' => response_content_type,
            'charset' => response_charset,
        }) {
            ## The Response Body Feed
            response_body
        }
    end
end

limitSecondsAccess

9. "Perl" Sample Code

Seconds Access Limiter with Perl. Version: Perl-5
#!/usr/bin/perl

use strict;
use warnings;
use utf8;
use Time::HiRes qw(gettimeofday);
use CGI;
use File::Basename;
use JSON;

# Definition
sub limitSecondsAccess {

    eval {
        # Init
        ## Access Timestamp Build
        my ($sec_timestamp, $usec_timestamp) = gettimeofday();
        ## Zero-pad the microseconds so the decimal part has a fixed width
        my $sec_usec_timestamp = sprintf('%d.%06d', $sec_timestamp, $usec_timestamp) + 0;

        ## Access Limit Default Value
        ### Depends on Specifications: For Example 10
        my $access_limit = 10;

        ## Roots Build
        ### Depends on Environment: For Example '/tmp'
        my $tmp_root = '/tmp';
        my $access_root = $tmp_root . '/access';

        ## Auth Key
        ### Depends on Specifications: For Example 'app_id'
        my $auth_key = 'app_id';

        ## Response Content-Type
        ### Depends on Specifications: For Example JSON and UTF-8

        ## Response Bodies Build
        ### Depends on Design
        my %response_bodies;

        # Authorized Key Check
        my $CGI = CGI->new;
        if (! defined($CGI->param($auth_key))) {
            die('Unauthorized`401`');
        }
        my $auth_id = $CGI->param($auth_key);

        # The Auth Root Build
        my $auth_root = $access_root . '/' . $auth_id;

        # The Access Root Check
        if (! -d $access_root) {
            ## The Access Root Creation
            if (! mkdir($access_root)) {
                die('Could not create the access root. ' . $access_root . '`500`');
            }
        }

        # The Auth Root Check
        if (! -d $auth_root) {
            ## The Auth Root Creation
            if (! mkdir($auth_root)) {
                die('Could not create the auth root. ' . $auth_root . '`500`');
            }
        }

        # An Access File Creation Using a Microsecond Timestamp
        ## Other data resources, such as a memory cache or RDB transactions, could be used instead.
        ## This sample code is lightweight because it requires neither file locking nor transaction processing.
        ## However, in the case of a cluster configuration, file system synchronization is required.
        my $access_file_path = $auth_root . '/' . $sec_usec_timestamp;
        open(my $fh, '>', $access_file_path)
            or die('Could not create the access file. ' . $access_file_path . '`500`');
        close($fh);

        # The Auth Root Scanning
        my @file_paths = glob($auth_root . "/*");
        if (! @file_paths) {
            die('Could not scan the auth root. ' . $auth_root . '`500`');
        }

        # The Access Counts Check
        my $access_counts = 0;
        foreach my $file_path (@file_paths) {

            ## Not File Type
            if (! -f $file_path) {
                next;
            }

            ## The Base Name Extract
            my $base_name = basename($file_path);

            ## The Base Name to Integer Data Type
            my $base_name_sec_timestamp = int($base_name);

            ## Same Seconds Timestamp
            if ($sec_timestamp == $base_name_sec_timestamp) {

                ### The Base Name to Float Data Type
                my $base_name_sec_usec_timestamp = $base_name + 0;

                ### An Overtaken Request Check
                if ($sec_usec_timestamp < $base_name_sec_usec_timestamp) {
                    next;
                }

                ### Access Counts Increment
                $access_counts++;

                ### Too Many Requests
                if ($access_counts > $access_limit) {
                    die("Too Many Requests`429`");
                }

                next;
            }

            ## Past Access Files Garbage Collection
            if ($sec_timestamp > $base_name_sec_timestamp) {
                unlink($file_path);
            }
        }
    };

    # The Response Feed on Success
    if (! $@) {
        print("Content-Type: text/html; charset=utf-8\n\n");
    }

    if ($@) {
        # Error Elements Extract
        my @e = split(/`/, $@);

        # Exception to HTTP Status Code
        my $http_status = $e[0];
        my $http_code = '0';
        if (defined($e[1])) {
            $http_code = $e[1];
        }

        # 4xx
        if ($http_code ge '400' && $http_code le '499') {
            # logging
            ## snip...
        # 5xx
        } elsif ($http_code ge '500') {
            # logging
            ## snip...

            ## Mask The Exception Message in The HTTP Status
            $http_status = 'Internal Server Error';
        # Others
        } else {
            # logging
            ## snip...

            $http_status = 'Internal Server Error';
            $http_code = '500';
        }

        # Response Headers Feed
        print("Status: " . $http_code . " " . $http_status . "\n");
        print('Content-Type: application/json; charset=utf-8' . "\n\n");

        # A Response Body Build
        my %response_bodies;
        $response_bodies{'message'} = $http_status;
        my $response_body = encode_json(\%response_bodies);

        # The Response Body Feed
        print($response_body);
    }

}

# Execution
&limitSecondsAccess();
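As a simple behavior check, you can hit the CGI repeatedly within one second and confirm that the status code switches to 429 once the limit is exceeded. The host name and script path below are hypothetical; adjust them to wherever you have deployed the sample.
$ for i in $(seq 1 12); do curl -s -o /dev/null -w '%{http_code}\n' 'http://web1.example.com/cgi-bin/limit_seconds_access.cgi?app_id=foo'; done
With an access limit of 10, the first requests should print 200 and the excess ones 429, assuming they all land within the same second.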

10. Conclusion

In this post, I presented sample implementations of a “one-second access limiter” in the interpreted languages Python, PHP, Ruby and Perl. By its nature, access control within a single second demands low overhead, high-speed processing and data consistency, and the key points for achieving these are as described in the architecture section. The solution shown here relies on file names and file counts in the file system. In a clustered environment, however, this architecture becomes unsuitable if the chosen data synchronization mechanism is slow. In such cases, an asynchronous data architecture, with control applied per node, may be the better option. The load balancing thresholds then become more important, and some precision in the access limits and consistency of the results must be given up. Still, when strict precision and consistency are not required, that too is a viable choice.

2017-11-19

Load Balancer with “LVS + Keepalived + DSR”

> to Japanese Pages

1. Summary

In this post, I will explain the effectiveness of the load balancer solution by “LVS + Keepalived + DSR” design technology and explain how to build it.

2. Introduction

The load balancer solution built with “LVS + Keepalived + DSR” is a mature technology, but I am posting about it because my friends asked me to. For highly scalable projects, the load balancer comes up as an agenda item in system performance meetings at least once; I have been through many such discussions, and we often hear negative opinions about the performance of software load balancers. In such cases, the name of a hardware load balancer like BIG-IP sometimes comes up. However, we cannot ignore the fact that a load balancer built with the “LVS + Keepalived + DSR” design runs at 100% SLA and 10% load factor in one of our projects receiving 1 million accesses per day. This demonstrates that the design is an effective load balancer solution for on-premises environments and for cloud hosting without a load balancer PaaS. The result is brought about by the communication method called Direct Server Return (DSR): the dramatic load reduction on the load balancer comes from the real servers returning responses directly to the client, without passing back through the load balancer. In addition, this solution is not affected by the various problems that accompany hardware (failure, deterioration, support contracts, support quality, end of product support and so on). In this post, I will explain how to build the “LVS + Keepalived + DSR” design. I will not conduct benchmarks such as “DSR vs. non-DSR” here.
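As a rough sketch, the DSR packet flow looks like this; the point is that the response path bypasses the load balancer entirely:
Request:  Client → Load Balancer (VIP) → Web Server
Response: Client ← Web Server (returned directly, not via the Load Balancer)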

3. Environment

In this post, I will explain the solution based on the following assumptions.
* CentOS 7
* Keepalived
* ipvsadm
* Firewalld
In this post, I will explain the solution based on the following system configuration diagram.

4. Install

First, we install the “Keepalived” on the Load Balancer 1.
$ sudo yum -y install keepalived
Next, we install the “Keepalived” on the Load Balancer 2.
$ sudo yum -y install keepalived
Next, we install the “ipvsadm” on the Load Balancer 1.
$ sudo yum -y install ipvsadm
Next, we install the “ipvsadm” on the Load Balancer 2.
$ sudo yum -y install ipvsadm

5. Configuration

Next, we configure the “firewalld” on the Web Server 1. We start up the “firewalld” and enable it.
$ sudo systemctl start firewalld
$ sudo systemctl enable firewalld
$ sudo systemctl status firewalld
We configure the “firewalld.”
$ sudo firewall-cmd --set-default-zone=internal
$ sudo firewall-cmd --add-port=22/tcp --zone=internal
$ sudo firewall-cmd --add-port=22/tcp --zone=internal --permanent
$ sudo firewall-cmd --add-port=80/tcp --zone=internal
$ sudo firewall-cmd --add-port=80/tcp --zone=internal --permanent
$ sudo firewall-cmd --add-port=443/tcp --zone=internal
$ sudo firewall-cmd --add-port=443/tcp --zone=internal --permanent
$ sudo firewall-cmd --direct --add-rule ipv4 nat PREROUTING 0 -d 10.0.0.3 -j REDIRECT
$ sudo firewall-cmd --permanent --direct --add-rule ipv4 nat PREROUTING 0 -d 10.0.0.3 -j REDIRECT
$ sudo firewall-cmd --direct --add-rule ipv4 nat PREROUTING 0 -d 10.0.0.5 -j REDIRECT
$ sudo firewall-cmd --permanent --direct --add-rule ipv4 nat PREROUTING 0 -d 10.0.0.5 -j REDIRECT
We reload the “firewalld” and confirm the configuration.
$ sudo firewall-cmd --reload
$ sudo firewall-cmd --list-all-zone
$ sudo firewall-cmd --direct --get-rule ipv4 nat PREROUTING
We use the “telnet” command to verify the communication of the Web Server 1.
$ sudo telnet 10.0.0.3 80
Next, we configure the “firewalld” on the Web Server 2. We start up the “firewalld” and enable it.
$ sudo systemctl start firewalld
$ sudo systemctl enable firewalld
$ sudo systemctl status firewalld
We configure the “firewalld.”
$ sudo firewall-cmd --set-default-zone=internal
$ sudo firewall-cmd --add-port=22/tcp --zone=internal
$ sudo firewall-cmd --add-port=22/tcp --zone=internal --permanent
$ sudo firewall-cmd --add-port=80/tcp --zone=internal
$ sudo firewall-cmd --add-port=80/tcp --zone=internal --permanent
$ sudo firewall-cmd --add-port=443/tcp --zone=internal
$ sudo firewall-cmd --add-port=443/tcp --zone=internal --permanent
$ sudo firewall-cmd --direct --add-rule ipv4 nat PREROUTING 0 -d 10.0.0.4 -j REDIRECT
$ sudo firewall-cmd --permanent --direct --add-rule ipv4 nat PREROUTING 0 -d 10.0.0.4 -j REDIRECT
$ sudo firewall-cmd --direct --add-rule ipv4 nat PREROUTING 0 -d 10.0.0.5 -j REDIRECT
$ sudo firewall-cmd --permanent --direct --add-rule ipv4 nat PREROUTING 0 -d 10.0.0.5 -j REDIRECT
We reload the “firewalld” and confirm the configuration.
$ sudo firewall-cmd --reload
$ sudo firewall-cmd --list-all-zone
$ sudo firewall-cmd --direct --get-rule ipv4 nat PREROUTING
We use the “telnet” command to verify the communication of the Web Server 2.
$ sudo telnet 10.0.0.4 80
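For reference, the PREROUTING REDIRECT rules above are what allow each web server to accept packets that still carry the virtual IP (10.0.0.5) as their destination address, which is a precondition of DSR; binding the VIP to the loopback interface with ARP suppressed is a common alternative technique. If you want to confirm at the iptables layer that the rules are in place (assuming firewalld is backed by iptables, as on stock CentOS 7), you can list them as follows:
$ sudo iptables -t nat -nL PREROUTING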
Next, we configure the “Keepalived” on the Load Balancer 1.
$ sudo cp -a /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.org
$ sudo vim /etc/keepalived/keepalived.conf
# Common Configuration Block
global_defs {
    notification_email {
        alert@example.com
    }
    notification_email_from lb1@example.com
    smtp_server mail.example.com
    smtp_connect_timeout 30
    router_id lb1.example.com
}

# Master Configuration Block
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 1
    priority 101
    nopreempt
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass foo
    }
    virtual_ipaddress {
        10.0.0.5/24 dev eth0
    }
}

# Virtual Server Configuration Block
virtual_server 10.0.0.5 80 {
    delay_loop 6
    lvs_sched rr
    lvs_method DR
    persistence_timeout 50
    protocol TCP
    sorry_server 10.0.0.254 80
    real_server 10.0.0.3 80 {
        weight 1
        inhibit_on_failure
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.0.0.4 80 {
        weight 1
        inhibit_on_failure
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
$ sudo systemctl start keepalived
If you want to prohibit automatic failback, do not enable automatic startup of “Keepalived”. (The leading “:” below turns the command into a no-op; remove it if you do want automatic startup.)
$ :sudo systemctl enable keepalived
$ sudo systemctl status keepalived
$ sudo ip addr
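On the MASTER side, the virtual IP should now be attached to eth0; in the output of “ip addr” you should find a line roughly like the following (the exact prefix and flags depend on your environment):
    inet 10.0.0.5/24 scope global secondary eth0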
Next, we configure the “Keepalived” on the Load Balancer 2.
$ sudo cp -a /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.org
$ sudo vim /etc/keepalived/keepalived.conf
# Common Configuration Block
global_defs {
    notification_email {
        admin@example.com
    }
    notification_email_from lb2@example.com
    smtp_server mail.example.com
    smtp_connect_timeout 30
    router_id lb2.example.com
}

# Backup Configuration Block
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 1
    priority 100
    nopreempt
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass foo
    }
    virtual_ipaddress {
        10.0.0.5/24 dev eth0
    }
}

# Virtual Server Configuration Block
virtual_server 10.0.0.5 80 {
    delay_loop 6
    lvs_sched rr
    lvs_method DR
    persistence_timeout 50
    protocol TCP
    sorry_server 10.0.0.254 80
    real_server 10.0.0.3 80 {
        weight 1
        inhibit_on_failure
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.0.0.4 80 {
        weight 1
        inhibit_on_failure
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
$ sudo systemctl start keepalived
If you want to prohibit automatic failback, do not enable automatic startup of “Keepalived”. (The leading “:” below turns the command into a no-op; remove it if you do want automatic startup.)
$ :sudo systemctl enable keepalived
$ sudo systemctl status keepalived
$ sudo ip addr
Next, we change the kernel parameters on the Load Balancer 1.
$ sudo vim /etc/sysctl.conf
# Enable Packet Transfer between Interfaces
net.ipv4.ip_forward = 1

# Do not discard packets from networks that do not belong to the interface.
net.ipv4.conf.all.rp_filter = 0
We apply the kernel parameter settings.
$ sudo sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
We start up the “ipvsadm.”
$ sudo touch /etc/sysconfig/ipvsadm
$ sudo systemctl start ipvsadm
If you want to prohibit automatic failback, do not enable automatic startup of “ipvsadm”. (The leading “:” below turns the command into a no-op.)
$ :sudo systemctl enable ipvsadm
$ sudo systemctl status ipvsadm
Next, we change the kernel parameters on the Load Balancer 2.
$ sudo vim /etc/sysctl.conf
# Enable Packet Transfer between Interfaces
net.ipv4.ip_forward = 1

# Do not discard packets from networks that do not belong to the interface.
net.ipv4.conf.all.rp_filter = 0
We apply the kernel parameter settings.
$ sudo sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
We start up the “ipvsadm.”
$ sudo touch /etc/sysconfig/ipvsadm
$ sudo systemctl start ipvsadm
If you want to prohibit automatic failback, do not enable automatic startup of “ipvsadm”. (The leading “:” below turns the command into a no-op.)
$ :sudo systemctl enable ipvsadm
$ sudo systemctl status ipvsadm
We will use the “ipvsadm” command to check the LVS communication settings on the Load Balancer 1.
$ sudo ipvsadm -Ln
We will use the “ipvsadm” command to check the LVS communication settings on the Load Balancer 2.
$ sudo ipvsadm -Ln
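If everything is set up correctly, both load balancers should show the virtual server with the two real servers in “Route” (DR) forwarding mode. The output should look roughly like this (the version header and counters are illustrative):
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.5:80 rr persistent 50
  -> 10.0.0.3:80                  Route   1      0          0
  -> 10.0.0.4:80                  Route   1      0          0
As a simple failover test, stop “Keepalived” on the Load Balancer 1 and confirm with “ip addr” on the Load Balancer 2 that the virtual IP 10.0.0.5 has moved over:
$ sudo systemctl stop keepalived
$ sudo ip addr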

6. Conclusion

In this way, DSR technology lets us overcome the performance degradation under high load that has been the traditional weak point of software load balancers.

2017-11-04

Surrogate Key VS. Natural Key

The other day, I discussed "Surrogate Key VS. Natural Key" in a development project.

I come across this discussion from time to time.

This post is a brush-up of an old post of mine, presenting my best solution to the problem.

Furthermore, it covers not only the title subject but also the basic way of thinking about, and resolving, this type of discussion.

If you are struggling with this matter in RDBMS design, I hope it is useful for your information.


If we want to settle this discussion, we must first, before getting into the main theme, correct our recognition of the surrogate key as really being an artificial key.

First of all, we have to clear up the misunderstanding in the "Surrogate Key VS. Natural Key" controversy that has spread through the world.

The true subject of this discussion should be "Artificial Key VS. Natural Key".

A natural key, as you know, is a primary key designed from a single entity attribute or a combination of multiple entity attributes.

A surrogate key is a primary key designed as a substitute for a natural key when a natural key is difficult to design.

An artificial key is a primary key whose value is a mechanically incremented integer, designed irrespective of the natural key design.

Therefore, even natural key believers use a surrogate key as a matter of course when a natural key is difficult to design.

The artificial key faction, on the other hand, hardly uses natural keys at all.

With the above, the misunderstanding in the "Surrogate Key VS. Natural Key" controversy should be cleared up.

If you try to advance the discussion while still holding this misunderstanding, the argument is likely to go astray, so it is better to become aware of the misunderstanding first.

Therefore, hereinafter, I will refer to the subject as "Artificial Key VS. Natural Key".


Natural key believers like natural keys for the beauty of the relational model and the pursuit of data design.

This tendency is common among engineers who grew up with DBA work and the good old design methods.

Meanwhile, the artificial key faction tends to favor artificial keys for aspects such as framework conventions, the reduction of SQL bugs, and the simplicity of relations.

This tendency is common among programmers and engineers who grew up with recent speed-oriented design.

There are reasons why I chose the words "believer" and "faction" above, which I will explain in detail later.

In RDBMS design, both sides of "Artificial Key VS. Natural Key" have their merits and demerits.

A top engineer clearly understands that the criteria for choosing a design must be based on the objectives and priorities of the project.

If you are struggling with this discussion, the solution is simple.

The only thing we should do is investigate the merits and demerits and judge according to the situation of the project.

That's it.

We should seek out both opinions and both kinds of experience in service of the project's purpose.

Therefore, there is no situation in which either one is absolutely correct.

If we mistake proving one side correct for the purpose itself, the project's version of this discussion will probably never be settled.

If we debate on any level other than the project's purpose, this sort of discussion quickly turns into a personal controversy.

Without a shared consciousness of the project's purpose, we end up judging by subjective impressions.

This is because, on its own premises, each side is correct.

For this reason, I used the words "believer" and "faction" above.

Therefore, the only solution to this discussion is to align the members' sense of purpose within the project.

In other words, aligning a sense of purpose requires both the "ability to see the essence" and "organization development capability".

2017-10-07

Integration of PHP Error Handling

Last week, I was asked at multiple development projects to explain how to integrate PHP 5 error handling.

As used herein, "Integration of PHP Error Handling" means, for example, the following development requirement.
# Normal Error
# Exception
# PHP Core Error
↓
We want to integrate and manage the error handling (such as logs and mails) that runs when any of these occurs.
To realize this, you need to understand the quirks and mechanics of PHP errors.
Here, I describe how to realize it.


Notice

# In this article, we focus on the implementation methods.
# In this article, we do not cover general concepts such as the "difference between error and exception".
# In this article, we describe abort processing only.
# In this article, we do not cover recovery processing, triggers, processing levels, etc.
# This article applies only to PHP 5. It does not apply to PHP 7.


Flow

One way to realize this "integration of PHP error handling" is to arrange the error processing flow as follows.
Normal Error Processing  ←  Normal Error
↓
Exception Processing  ←  Exception
↓
Shutdown Processing  ←  PHP Core Error


Normal Error Handling

First, take over the handling authority for PHP normal errors and user-defined errors from PHP.
PHP provides the following function for this.
mixed set_error_handler(callable $error_handler [, int $error_types = E_ALL | E_STRICT])
To take the processing authority from PHP, create a callback function and register it with this function.
(Be sure to register it as early as possible in a request's processing.)
Within the callback function, catch the normal error and rethrow it as an exception.
In short, the goal is to take normal error handling away from PHP and route it into exceptions.
Note, however, that this callback cannot capture PHP core errors.
public function handleError()
{
    //* Error Handler Definition
    function handleError($_number, $_message, $_file, $_line, $_contexts)
    {
        //* Not Included in Error Reporting
        if (! (error_reporting() & $_number)) {
            return;
        }

        //* to ErrorException
        throw new ErrorException($_message, 500, $_number, $_file, $_line);
    }

    //* Error Handler Set
    set_error_handler('handleError');
}


Exception Handling

Next, take over from PHP the handling of exceptions that were not caught.
PHP provides the following function for this.
callable set_exception_handler(callable $exception_handler)
To take the processing authority from PHP, create a callback function and register it with this function.
(Be sure to register it as early as possible in a request's processing.)
As a result, all normal errors and all uncaught exceptions are aggregated in one place.
But this is not enough: we have not yet captured PHP core errors.
Therefore, no processing logic is placed here.
public function handleException()
{
    //* Exception Handler Definition
    function handleException($_e)
    {
        //* Exception Context
        $_SERVER['X_EXCEPTION_HANDLER_CONTEXT'] = $_e;

        //* Error Processing to Shutdown Logic
        exit;
    }

    //* Exception Handler Set
    set_exception_handler('handleException');
}


PHP Core Error Handling

In PHP 5, set_error_handler() cannot take over the handling authority of core errors issued by PHP.
PHP 5 does not throw core errors as exceptions.
Therefore, to capture PHP core errors, the following function is used.
void register_shutdown_function(callable $callback [, mixed $parameter [, mixed $... ]])
This function registers a callback to be executed when script processing completes or when exit() is called.
By using this property, all of the processing for errors, exceptions, PHP core errors and so on can, as a result, be integrated in one place.
public function handleShutdown($_error_mails = array())
{
    //* Shutdown Function Definition
    function handleShutdown($_error_numbers = array(), $_error_mails = array(), $_http_status_codes = array())
    {
        //* Exception or Error
        if (! empty($_SERVER['X_EXCEPTION_HANDLER_CONTEXT'])) {
            $e = $_SERVER['X_EXCEPTION_HANDLER_CONTEXT'];
            unset($_SERVER['X_EXCEPTION_HANDLER_CONTEXT']);
            $message = $e->__toString();
            $code = $e->getCode();
        } else {
            $e = error_get_last();
            //* Normal Exit
            if (empty($e)) {
                return;
            }

            //* Core Error
            $message = $_error_numbers[$e['type']] . ': ' . $e['message'] . ' in ' . $e['file'] . ' on line ' . $e['line'];
            $code = 500;
        }

        //* Error Logging
        error_log($message, 4);

        //* Error Mail
        $cmd = 'echo "' . $message . '" | mail -S "smtp=smtp://' . $_error_mails['host'] . '" -r "' . $_error_mails['from'] . '" -s "' . $_error_mails['subject'] . '" ' . $_error_mails['to'];
        $outputs = array();
        $status = null;
        $last_line = exec($cmd, $outputs, $status);

        //* HTTP Status Code
        header('HTTP/1.1 ' . $code . ' ' . $_http_status_codes[$code]);

        //* Shutdown
        exit($code . ' ' . $_http_status_codes[$code]);
    }

    //* Shutdown Function Registration
    $error_numbers = self::$error_numbers;
    $http_status_codes = self::$http_status_codes;
    register_shutdown_function('handleShutdown', $error_numbers, $_error_mails, $http_status_codes);
}


to Class Library

When these are assembled into a general-purpose class library, it looks as follows.
Logging, e-mail, exception context delivery and so on should be adapted to your circumstances.
class AppE
{

    public static $error_numbers = array(
        1 => 'Fatal',
        2 => 'Warning',
        4 => 'Parse Error',
        8 => 'Notice',
        16 => 'Core Fatal',
        32 => 'Core Warning',
        64 => 'Compile Error',
        128 => 'Compile Warning',
        256 => 'Ex Error',
        512 => 'Ex Warning',
        1024 => 'Ex Notice',
        2048 => 'Strict Error',
        4096 => 'Recoverable Error',
        8192 => 'Deprecated',
        16384 => 'Ex Deprecated',
        32767 => 'All',
    );

    //* HTTP Status Code
    public static $http_status_codes = array(
        'default' => 200,
        100 => 'Continue',
        101 => 'Switching Protocols',
        102 => 'Processing',
        200 => 'OK',
        201 => 'Created',
        202 => 'Accepted',
        203 => 'Non-Authoritative Information',
        204 => 'No Content',
        205 => 'Reset Content',
        206 => 'Partial Content',
        207 => 'Multi-Status',
        226 => 'IM Used',
        300 => 'Multiple Choices',
        301 => 'Moved Permanently',
        302 => 'Found',
        303 => 'See Other',
        304 => 'Not Modified',
        305 => 'Use Proxy',
        307 => 'Temporary Redirect',
        400 => 'Bad Request',
        401 => 'Unauthorized',
        402 => 'Payment Required',
        403 => 'Forbidden',
        404 => 'Not Found',
        405 => 'Method Not Allowed',
        406 => 'Not Acceptable',
        407 => 'Proxy Authentication Required',
        408 => 'Request Timeout',
        409 => 'Conflict',
        410 => 'Gone',
        411 => 'Length Required',
        412 => 'Precondition Failed',
        413 => 'Request Entity Too Large',
        414 => 'Request-URI Too Long',
        415 => 'Unsupported Media Type',
        416 => 'Requested Range Not Satisfiable',
        417 => 'Expectation Failed',
        418 => "I'm a teapot",
        422 => 'Unprocessable Entity',
        423 => 'Locked',
        424 => 'Failed Dependency',
        426 => 'Upgrade Required',
        500 => 'Internal Server Error',
        501 => 'Not Implemented',
        502 => 'Bad Gateway',
        503 => 'Service Unavailable',
        504 => 'Gateway Timeout',
        505 => 'HTTP Version Not Supported',
        506 => 'Variant Also Negotiates',
        507 => 'Insufficient Storage',
        509 => 'Bandwidth Limit Exceeded',
        510 => 'Not Extended',
    );


    public function __construct()
    {}


    public function handleError()
    {
        //* Error Handler Definition
        function handleError($_number, $_message, $_file, $_line, $_contexts)
        {
            //* Not Included in Error Reporting
            if (! (error_reporting() & $_number)) {
                return;
            }

            //* to ErrorException
            throw new ErrorException($_message, 500, $_number, $_file, $_line);
        }

        //* Error Handler Set
        set_error_handler('handleError');
    }


    public function handleException()
    {
        //* Exception Handler Definition
        function handleException($_e)
        {
            //* Exception Context
            $_SERVER['X_EXCEPTION_HANDLER_CONTEXT'] = $_e;

            //* Error Processing to Shutdown Logic
            exit;
        }

        //* Exception Handler Set
        set_exception_handler('handleException');
    }


    public function handleShutdown($_error_mails = array())
    {
        //* Shutdown Function Definition
        function handleShutdown($_error_numbers = array(), $_error_mails = array(), $_http_status_codes = array())
        {
            //* Exception or Error
            if (! empty($_SERVER['X_EXCEPTION_HANDLER_CONTEXT'])) {
                $e = $_SERVER['X_EXCEPTION_HANDLER_CONTEXT'];
                unset($_SERVER['X_EXCEPTION_HANDLER_CONTEXT']);
                $message = $e->__toString();
                $code = $e->getCode();
            } else {
                $e = error_get_last();
                //* Normal Exit
                if (empty($e)) {
                    return;
                }

                //* Core Error
                $message = $_error_numbers[$e['type']] . ': ' . $e['message'] . ' in ' . $e['file'] . ' on line ' . $e['line'];
                $code = 500;
            }

            //* Error Logging
            error_log($message, 4);

            //* Error Mail
            $cmd = 'echo "' . $message . '" | mail -S "smtp=smtp://' . $_error_mails['host'] . '" -r "' . $_error_mails['from'] . '" -s "' . $_error_mails['subject'] . '" ' . $_error_mails['to'];
            $outputs = array();
            $status = null;
            $last_line = exec($cmd, $outputs, $status);

            //* HTTP Status Code
            header('HTTP/1.1 ' . $code . ' ' . $_http_status_codes[$code]);

            //* Shutdown
            exit($code . ' ' . $_http_status_codes[$code]);
        }

        //* Shutdown Function Registration
        $error_numbers = self::$error_numbers;
        $http_status_codes = self::$http_status_codes;
        register_shutdown_function('handleShutdown', $error_numbers, $_error_mails, $http_status_codes);
    }

}
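As a minimal usage sketch, the three handlers are registered at the beginning of request processing. The file name and the mail settings below are hypothetical; the mail keys match what handleShutdown() expects.
require_once 'AppE.php';

$app_e = new AppE();

//* Normal Errors to ErrorException
$app_e->handleError();

//* Uncaught Exceptions to the Shutdown Logic
$app_e->handleException();

//* Errors, Exceptions and PHP Core Errors Integrated at Shutdown
$app_e->handleShutdown(array(
    'host' => 'mail.example.com',
    'from' => 'error@example.com',
    'subject' => 'Application Error',
    'to' => 'admin@example.com',
));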


Afterword

Many PHP frameworks provide interfaces for extended error handlers and extended exception handlers, but internally they merely implement something similar to the above.
Recently, when this layer comes up in conversation, really few engineers turn out to understand these techniques.
This has been said since around the time the non-Java web application frameworks appeared.
I realize that it has become a reality at many development sites.
For a lot of recent engineers, the framework is no longer a means; it has become a purpose in itself.
This can be seen from the increase in the number of engineers who do not know the fundamentals of web application development.
One factor behind this increase in tool-oriented engineers is the rise of Silicon Valley style speed-development methods.
This is by no means negative.
In short, if we subdivide the categories of development engineers further, such mutual misrecognition will probably decrease.