1. Summary
In this post, I explain “GlusterFS” as one solution for synchronizing source code between web servers in a clustered environment. With this solution, the source code on the web servers no longer diverges because of deployment time lag. In addition, since GlusterFS is a distributed, parallel, fault-tolerant file system, its usefulness is not limited to web servers: depending on how you use it, you can build a fault-tolerant file system for a large-scale system.
2. GlusterFS Introduction
Are you using “rsync” or “lsyncd” to synchronize the file system between nodes in a production cluster? To keep the story concrete I will use web servers as the example, but the issue is not limited to web servers. There are several ways to synchronize project source code between web servers in a cluster environment, and some of them are bad know-how. For example, I often hear of synchronizing to each node with a shell script that wraps “rsync”. If the system is small, even deploying to each node manually causes few problems. However, even if the synchronization is automated with “cron” at the shortest interval, the source code can differ between nodes for up to one minute. I also sometimes hear of detecting source code changes automatically with “lsyncd” and synchronizing incrementally to each node, but even this method can take tens of seconds before synchronization completes. Furthermore, these methods are unidirectional, so there is no guarantee of data consistency. I also often hear of deploying automatically to each node with CI tools, but that only narrows the gap between manual and automatic deployment; it is not a fundamental solution. If these synchronization processes run against each node serially, the time until synchronization completes grows to “number of nodes x time difference”, so at the very least they should run in parallel. If none of this is a problem for your UX, data management, or other requirements, this post will not be useful to you. If it is a problem, there are several solutions, and one of them is “GlusterFS”. GlusterFS is a distributed, parallel, fault-tolerant file system. One of its advantages is that fault-tolerant designs, such as distributing and synchronizing the file system and growing or shrinking its capacity, can be realized without stopping the system. Naturally, synchronization is bidirectional and there is no concept of master and slave. However, you should not place files in a GlusterFS volume that a daemon keeps locked. Used correctly, GlusterFS is very powerful. In this post, I will explain how to set up GlusterFS. I will not present measurements of synchronization speed, so please implement it and judge for yourself.
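For reference, the cron-plus-rsync pattern criticized above looks roughly like the following minimal sketch, assuming hypothetical hosts, a hypothetical document root, and passwordless SSH between the nodes; it pushes code once per minute, which is exactly the lag described.
# Hypothetical crontab entry on Web Server 1 (illustration only)
* * * * * rsync -az --delete /var/www/html/ web2.example.com:/var/www/html/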
3. GlusterFS Architecture
The following figure shows a rough concept of GlusterFS, and it is also the structure used as the example in this post. It does not prepare a separate volume server cluster; it is a simple self-contained structure in which each web server is both a volume server and a client, and each client mounts and connects to its own volume. Naturally, you can change the system configuration later by adding or removing bricks, as shown below.
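For example, here is a minimal sketch of growing the replica set to a hypothetical third web server, reusing the volume name “server” and the brick path configured later in this post; the replica count and the force option depend on your design and environment.
# Hypothetical expansion to a third node (illustration only)
$ sudo gluster peer probe web3.example.com
$ sudo gluster volume add-brick server replica 3 web3.example.com:/server/ force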
4. GlusterFS Environment
CentOS-7
GlusterFS 3.12
5. GlusterFS Servers Configuration
5-1. Install GlusterFS servers
# Both Web Server 1 and 2
$ sudo yum -y install centos-release-gluster
$ sudo yum -y install glusterfs-server
5-2. Start GlusterFS servers
# Both Web Server 1 and 2
$ sudo systemctl start glusterd
$ sudo systemctl enable glusterd
$ sudo systemctl status glusterd
5-3. Set GlusterFS host names
# Both Web Server 1 and 2
$ sudo vim /etc/hosts
- 10.0.0.1 web1.example.com
- 10.0.0.2 web2.example.com
5-4. Create GlusterFS storage pool
# Only Web Server 1
$ sudo gluster peer probe web2.example.com
5-5. Confirm GlusterFS storage pool
# Both Web Server 1 and 2
$ gluster peer status
5-6. Create GlusterFS volume
# Only Web Server 1
$ sudo gluster volume create server replica 2 web1.example.com:/server/ web2.example.com:/server/ force
5-7. Confirm GlusterFS volume information
# Both Web Server 1 and 2
$ sudo gluster volume info
5-8. Start GlusterFS volume
# Only Web Server 1
$ sudo gluster volume start server
5-9. Confirm GlusterFS volume status
# Both Web Server 1 and 2
$ sudo gluster volume status
6. GlusterFS Clients Configuration
6-1. Install GlusterFS Clients
# Both Web Server 1 and 2
$ sudo yum -y install glusterfs glusterfs-fuse glusterfs-rdma
6-2. Mount Client to Server
# Web Server 1
$ sudo mkdir /client
$ sudo mount -t glusterfs web1.example.com:/server /client
$ sudo df -Th
# Web Server 2
$ sudo mkdir /client
$ sudo mount -t glusterfs web2.example.com:/server /client
$ sudo df -Th
6-3. Auto mount GlusterFS Server
# Web Server 1
$ sudo vim /etc/fstab
- web1.example.com:/server /client glusterfs defaults,_netdev 0 0
# Web Server 2
$ sudo vim /etc/fstab
- web2.example.com:/server /client glusterfs defaults,_netdev 0 0
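To confirm that the fstab entries are valid without waiting for a reboot, one option (assuming the volume is not in use at that moment) is to unmount and remount from fstab on each node:
# Both Web Server 1 and 2
$ sudo umount /client
$ sudo mount -a
$ df -Th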
6-4. Test GlusterFS replication
# Web Server 1
$ cd /client
$ sudo touch test.txt
$ sudo ls
# Web Server 2
$ cd /client
$ sudo ls
$ sudo rm test.txt
# Web Server 1
$ sudo ls
7. GlusterFS Conclusion
In this post, I explained “GlusterFS” as one solution for synchronizing source code between web servers in a clustered environment. With this solution, the source code on the web servers no longer diverges because of deployment time lag, and once this foundation is in place you no longer have to lean on CI tools to paper over the lag. In addition, since GlusterFS is a distributed, parallel, fault-tolerant file system, its usefulness is not limited to web servers: depending on how you use it, you can build a fault-tolerant file system for a large-scale system.
2017-12-17
Distributed Parallel Fault Tolerant File System with GlusterFS
2017-12-13
Seconds Access Limiter for Web API with Python, PHP, Ruby, and Perl
1. Summary
In this article, I describe an access-limitation solution that is often required for Web APIs. As one such solution, I present a “One-Second Access Limiter” with sample code in the interpreted languages Python, PHP, Ruby, and Perl.
2. Introduction
In Web API development projects, we are sometimes given requirements such as “limit access within a certain period.” For example, the requirement may be that the Web API returns the HTTP status code “429 Too Many Requests” when the permitted number of accesses is exceeded. Designers and developers are then forced to make this check fast and light, because if the purpose of the access limit is to reduce resource load, logic that itself increases the load is meaningless. In addition, when the reference period is short and accurate results are required, the accuracy of the algorithm matters. If you have experience developing a Web Application Firewall (WAF), you already know these things. There are many access-limitation solutions in the world, but in this post I provide a sample of a “One-Second Access Limiter” as one of them.
3. Requirements
"Access limitation up to N times per second" 1. If the access exceeds N times per second, return the HTTP status code of "429 Too Many Requests" and block accesses. 2. However, the numerical value assigned to “N” depends on the specification of the project. 3. Because of the nature of access control for 1 second, this processing should not be a bottleneck of access processing capability.4. Key Points of Architectures
Even from the above requirements, the check must be processed as fast and as light as possible.
# Prohibition of Use of Web Application Framework
Even if you use a lightweight framework, loading the framework itself costs a lot. Therefore, this check should be implemented before the request enters the framework.
# Libraries Loading
To minimize the load of loading libraries, the implementation should rely on built-in functionality as much as possible.
# Exception/Error Handling
Relying on the framework for exception and error handling only increases the load, which defeats the purpose. These should be implemented simply, in low-level code.
# Data Resource Selection
It is better to avoid heavyweight data resources such as an RDBMS, but eventual consistency is not acceptable for this requirement either. Implementing the limit in a load balancer or reverse proxy is also one solution, but the more the application layer is involved, the more processing cost the communication as a whole incurs. Semi-synchronous stores such as a memory cache or a lightweight NoSQL database are options as well, but in this post I use the file system as the data resource. To avoid wait states such as file locking, the count is derived from the file names and the number of files, as illustrated below. In a cluster environment, however, a data synchronization solution is also necessary.
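As an illustration only, with a hypothetical auth ID “demo” and hypothetical timestamps: each request touches an empty file named with its microsecond timestamp under the auth ID's directory, so counting the files whose integer part equals the current second yields the number of requests in that second, without any locking.
# Illustration only: hypothetical auth ID and timestamps
$ ls /tmp/access/demo/
1513468799.991204  1513468800.103245  1513468800.287513  1513468800.734982
$ ls /tmp/access/demo/ | grep -c '^1513468800\.'
3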
5. Environments
The OS for the sample code is Linux. The sample languages are Python, PHP, Ruby, and Perl.
# "Python-3" Sample Code
# "PHP-5" Sample Code
# "Ruby-2" Sample Code
# "Perl-5" Sample Code
6. "Python" Sample Code
Seconds Access Limiter with Python. Version: Python-3
- #!/usr/bin/python
- # coding:utf-8
- import time
- import datetime
- import cgi
- import os
- from pathlib import Path
- import re
- import sys
- import inspect
- import traceback
- import json
- # Definition
- def limitSecondsAccess():
- try:
- # Init
- ## Access Timestamp Build
- sec_usec_timestamp = time.time()
- sec_timestamp = int(sec_usec_timestamp)
- ## Access Limit Default Value
- ### Depends on Specifications: For Example 10
- access_limit = 10
- ## Roots Build
- ### Depends on Environment: For Example '/tmp'
- tmp_root = '/tmp'
- access_root = os.path.join(tmp_root, 'access')
- ## Auth Key
- ### Depends on Specifications: For Example 'app_id'
- auth_key = 'app_id'
- ## Response Content-Type
- ### Depends on Specifications: For Example JSON and UTF-8
- response_content_type = 'Content-Type: application/json; charset=utf-8'
- ### Response Bodies Build
- ### Depends on Design
- response_bodies = {}
- # Authorized Key Check
- query = cgi.FieldStorage()
- auth_id = query.getvalue(auth_key)
- if not auth_id:
- raise Exception('Unauthorized', 401)
- # The Auth Root Build
- auth_root = os.path.join(access_root, auth_id)
- # The Auth Root Check
- if not os.path.isdir(auth_root):
- # The Auth Root Creation
- os.makedirs(auth_root, exist_ok=True)
- # A Access File Creation Using Micro Timestamp
- ## For example, other data resources such as memory cache or RDB transaction.
- ## In the case of this sample code, it is lightweight because it does not require file locking and transaction processing.
- ## However, in the case of a cluster configuration, file system synchronization is required.
- access_file_path = os.path.join(auth_root, str(sec_usec_timestamp))
- path = Path(access_file_path)
- path.touch()
- # The Access Counts Check
- access_counts = 0
- for base_name in os.listdir(auth_root):
- ## A Access File Path Build
- file_path = os.path.join(auth_root, base_name)
- ## Not File Type
- if not os.path.isfile(file_path):
- continue
- ## The Base Name Data Type Casting
- base_name_sec_usec_timestamp = float(base_name)
- base_name_sec_timestamp = int(base_name_sec_usec_timestamp)
- ## Same Seconds Timestamp
- if sec_timestamp == base_name_sec_timestamp:
- ### A Overtaken Processing
- if sec_usec_timestamp < base_name_sec_usec_timestamp:
- continue
- ### Access Counts Increment
- access_counts += 1
- ### Too Many Requests
- if access_counts > access_limit:
- raise Exception('Too Many Requests', 429)
- continue
- ## Past Access Files Garbage Collection
- if sec_timestamp > base_name_sec_timestamp:
- os.remove(file_path)
- except Exception as e:
- # Exception Tuple to HTTP Status Code
- http_status = e.args[0]
- http_code = e.args[1]
- # 4xx
- if http_code >= 400 and http_code <= 499:
- # logging
- ## snip...
- # 5xx
- elif http_code >= 500:
- # logging
- # snip...
- ## The Exception Message to HTTP Status
- http_status = 'foo'
- else:
- # Logging
- ## snip...
- # HTTP Status Code for The Response
- http_status = 'Internal Server Error'
- http_code = 500
- # Response Headers Feed
- print('Status: ' + str(http_code) + ' ' + http_status)
- print(response_content_type + "\n\n")
- # A Response Body Build
- response_bodies['message'] = http_status
- response_body = json.dumps(response_bodies)
- # The Response Body Feed
- print(response_body)
- # Execution
- limitSecondsAccess()
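As a usage sketch that applies to any of the four samples, assuming the script above is deployed as a CGI endpoint at a hypothetical URL and the default limit of 10 is kept: firing a dozen requests quickly enough that they land in the same second should make the requests beyond the tenth return 429.
# Hypothetical endpoint and auth key value; prints one HTTP status code per request
$ for i in $(seq 1 12); do curl -s -o /dev/null -w "%{http_code}\n" "https://api.example.com/limiter?app_id=demo"; done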
7. "PHP" Sample Code
Seconds Access Limiter with PHP Version: PHP-5
- <?php
- # Definition
- function limitSecondsAccess()
- {
- try {
- # Init
- ## Access Timestamp Build
- $sec_usec_timestamp = microtime(true);
- list($sec_timestamp, $usec_timestamp) = explode('.', $sec_usec_timestamp);
- ## Access Limit Default Value
- ### Depends on Specifications: For Example 10
- $access_limit = 10;
- ## Roots Build
- ### Depends on Environment: For Example '/tmp'
- $tmp_root = '/tmp';
- $access_root = $tmp_root . '/access';
- ## Auth Key
- ### Depends on Specifications: For Example 'app_id'
- $auth_key = 'app_id';
- ## Response Content-Type
- ## Depends on Specifications: For Example JSON and UTF-8
- $response_content_type = 'Content-Type: application/json; charset=utf-8';
- ## Response Bodies Build
- ### Depends on Design
- $response_bodies = array();
- # Authorized Key Check
- if (empty($_REQUEST[$auth_key])) {
- throw new Exception('Unauthorized', 401);
- }
- $auth_id = $_REQUEST[$auth_key];
- # The Auth Root Build
- $auth_root = $access_root . '/' . $auth_id;
- # The Auth Root Check
- if (! is_dir($auth_root)) {
- ## The Auth Root Creation
- if (! mkdir($auth_root, 0775, true)) {
- throw new Exception('Could not create the auth root. ' . $auth_root, 500);
- }
- }
- # A Access File Creation Using Micro Timestamp
- /* For example, other data resources such as memory cache or RDB transaction.
- * In the case of this sample code, it is lightweight because it does not require file locking and transaction processing.
- * However, in the case of a cluster configuration, file system synchronization is required.
- */
- $access_file_path = $auth_root . '/' . strval($sec_usec_timestamp);
- if (! touch($access_file_path)) {
- throw new Exception('Could not create the access file. ' . $access_file_path, 500);
- }
- # The Auth Root Scanning
- if (! $base_names = scandir($auth_root)) {
- throw new Exception('Could not scan the auth root. ' . $auth_root, 500);
- }
- # The Access Counts Check
- $access_counts = 0;
- foreach ($base_names as $base_name) {
- ## A current or parent dir
- if ($base_name === '.' || $base_name === '..') {
- continue;
- }
- ## A Access File Path Build
- $file_path = $auth_root . '/' . $base_name;
- ## Not File Type
- if (! is_file($file_path)) {
- continue;
- }
- ## The Base Name to Integer Data Type
- $base_name_sec_timestamp = intval($base_name);
- ## Same Seconds Timestamp
- if ($sec_timestamp === $base_name_sec_timestamp) {
- ## The Base Name to Float Data Type
- $base_name_sec_usec_timestamp = floatval($base_name);
- ### A Overtaken Processing
- if ($sec_usec_timestamp < $base_name_sec_usec_timestamp) {
- continue;
- }
- ### Access Counts Increment
- $access_counts++;
- ### Too Many Requests
- if ($access_counts > $access_limit) {
- throw new Exception('Too Many Requests', 429);
- }
- continue;
- }
- ## Past Access Files Garbage Collection
- if ($sec_timestamp > $base_name_sec_timestamp) {
- @unlink($file_path);
- }
- }
- } catch (Exception $e) {
- # The Exception to HTTP Status Code
- $http_code = $e->getCode();
- $http_status = $e->getMessage();
- # 4xx
- if ($http_code >= 400 && $http_code <= 499) {
- # logging
- ## snip...
- # 5xx
- } else if ($http_code >= 500) {
- # logging
- ## snip...
- # The Exception Message to HTTP Status
- $http_status = 'foo';
- # Others
- } else {
- # Logging
- ## snip...
- # HTTP Status Code for The Response
- $http_status = 'Internal Server Error';
- $http_code = 500;
- }
- # Response Headers Feed
- header('HTTP/1.1 ' . $http_code . ' ' . $http_status);
- header($response_content_type);
- # A Response Body Build
- $response_bodies['message'] = $http_status;
- $response_body = json_encode($response_bodies);
- # The Response Body Feed
- exit($response_body);
- }
- }
- # Execution
- limitSecondsAccess();
- ?>
8. "Ruby" Sample Code
Seconds Access Limiter with Ruby Version: Ruby-2
- #!/usr/bin/ruby
- # -*- coding: utf-8 -*-
- require 'time'
- require 'fileutils'
- require 'cgi'
- require 'json'
- # Definition
- def limitSecondsAccess
- begin
- # Init
- ## Access Timestamp Build
- time = Time.now
- sec_timestamp = time.to_i
- sec_usec_timestamp_string = "%10.6f" % time.to_f
- sec_usec_timestamp = sec_usec_timestamp_string.to_f
- ## Access Limit Default Value
- ### Depends on Specifications: For Example 10
- access_limit = 10
- ## Roots Build
- ### Depends on Environment: For Example '/tmp'
- tmp_root = '/tmp'
- access_root = tmp_root + '/access'
- ## Auth Key
- ### Depends on Specifications: For Example 'app_id'
- auth_key = 'app_id'
- ## Response Content-Type
- ### Depends on Specifications: For Example JSON and UTF-8
- response_content_type = 'application/json'
- response_charset = 'utf-8'
- ## Response Bodies Build
- ### Depends on Design
- response_bodies = {}
- # Authorized Key Check
- cgi = CGI.new
- if ! cgi.has_key?(auth_key) then
- raise 'Unauthorized:401'
- end
- auth_id = cgi[auth_key]
- # The Auth Root Build
- auth_root = access_root + '/' + auth_id
- # The Auth Root Check
- if ! FileTest::directory?(auth_root) then
- # The Auth Root Creation
- if ! FileUtils.mkdir_p(auth_root, :mode => 0775) then
- raise 'Could not create the auth root. ' + auth_root + ':500'
- end
- end
- # A Access File Creation Using Micro Timestamp
- ## For example, other data resources such as memory cache or RDB transaction.
- ## In the case of this sample code, it is lightweight because it does not require file locking and transaction processing.
- ## However, in the case of a cluster configuration, file system synchronization is required.
- access_file_path = auth_root + '/' + sec_usec_timestamp.to_s
- if ! FileUtils::touch(access_file_path) then
- raise 'Could not create the access file. ' + access_file_path + ':500'
- end
- # The Access Counts Check
- access_counts = 0
- Dir.glob(auth_root + '/*') do |access_file_path|
- # Not File Type
- if ! FileTest::file?(access_file_path) then
- next
- end
- # The File Path to The Base Name
- base_name = File.basename(access_file_path)
- # The Base Name to Integer Data Type
- base_name_sec_timestamp = base_name.to_i
- # Same Seconds Timestamp
- if sec_timestamp == base_name_sec_timestamp then
- ### The Base Name to Float Data Type
- base_name_sec_usec_timestamp = base_name.to_f
- ### A Overtaken Processing
- if sec_usec_timestamp < base_name_sec_usec_timestamp then
- next
- end
- ### Access Counts Increment
- access_counts += 1
- ### Too Many Requests
- if access_counts > access_limit then
- raise 'Too Many Requests:429'
- end
- next
- end
- # Past Access Files Garbage Collection
- if sec_timestamp > base_name_sec_timestamp then
- File.unlink access_file_path
- end
- end
- # The Response Feed
- cgi.out({
- ## Response Headers Feed
- 'type' => 'text/html',
- 'charset' => response_charset,
- }) {
- ## The Response Body Feed
- ''
- }
- rescue => e
- # Exception to HTTP Status Code
- messages = e.message.split(':')
- http_status = messages[0]
- http_code = messages[1]
- # 4xx
- if http_code >= '400' && http_code <= '499' then
- # logging
- ## snip...
- # 5xx
- elsif http_code >= '500' then
- # logging
- ## snip...
- # The Exception Message to HTTP Status
- http_status = 'foo'
- else
- # Logging
- ## snip...
- # HTTP Status Code for The Response
- http_status = 'Internal Server Error'
- http_code = '500'
- end
- # The Response Body Build
- response_bodies['message'] = http_status
- response_body = JSON.generate(response_bodies)
- # The Response Feed
- cgi.out({
- ## Response Headers Feed
- 'status' => http_code + ' ' + http_status,
- 'type' => response_content_type,
- 'charset' => response_charset,
- }) {
- ## The Response Body Feed
- response_body
- }
- end
- end
- # Execution
- limitSecondsAccess
9. "Perl" Sample Code
Seconds Access Limiter with Perl Version: Perl-5
- #!/usr/bin/perl
- use strict;
- use warnings;
- use utf8;
- use Time::HiRes qw(gettimeofday);
- use CGI;
- use File::Basename;
- use JSON;
- # Definition
- sub limitSecondsAccess {
- eval {
- # Init
- ## Access Timestamp Build
- my ($sec_timestamp, $usec_timestamp) = gettimeofday();
- ## Zero-pad the microseconds so same-second file names compare correctly as strings
- my $sec_usec_timestamp = sprintf('%s.%06d', $sec_timestamp, $usec_timestamp);
- ## Access Limit Default Value
- ### Depends on Specifications: For Example 10
- my $access_limit = 10;
- ## Roots Build
- ### Depends on Environment: For Example '/tmp'
- my $tmp_root = '/tmp';
- my $access_root = $tmp_root . '/access';
- ## Auth Key
- ### Depends on Specifications: For Example 'app_id'
- my $auth_key = 'app_id';
- ## Response Content-Type
- ## Depends on Specifications: For Example JSON and UTF-8
- ## Response Bodies Build
- ### Depends on Design
- my %response_bodies;
- # Authorized Key Check
- my $CGI = new CGI;
- if (! defined($CGI->param($auth_key))) {
- die('Unauthorized`401`');
- }
- my $auth_id = $CGI->param($auth_key);
- # The Auth Root Build
- my $auth_root = $access_root . '/' . $auth_id;
- # The Access Root Check
- if (! -d $access_root) {
- ## The Access Root Creation
- if (! mkdir($access_root)) {
- die('Could not create the access root. ' . $access_root . '`500`');
- }
- }
- # The Auth Root Check
- if (! -d $auth_root) {
- ## The Auth Root Creation
- if (! mkdir($auth_root)) {
- die('Could not create the auth root. ' . $auth_root . '`500`');
- }
- }
- # A Access File Creation Using Micro Timestamp
- ## For example, other data resources such as memory cache or RDB transaction.
- ## In the case of this sample code, it is lightweight because it does not require file locking and transaction processing.
- ## However, in the case of a cluster configuration, file system synchronization is required.
- my $access_file_path = $auth_root . '/' . $sec_usec_timestamp;
- if (! open(FH, '>', $access_file_path)) {
- close FH;
- die('Could not create the access file. ' . $access_file_path . '`500`');
- }
- close FH;
- # The Auth Root Scanning
- my @file_pathes = glob($auth_root . "/*");
- if (! @file_pathes) {
- die('Could not scan the auth root. ' . $auth_root . '`500`');
- }
- # The Access Counts Check
- my $access_counts = 0;
- foreach my $file_path (@file_pathes) {
- ## Not File Type
- if (! -f $file_path) {
- next;
- }
- ## The Base Name Extract
- my $base_name = basename($file_path);
- ## The Base Name to Integer Data Type
- my $base_name_sec_timestamp = int($base_name);
- ## Same Seconds Timestamp
- if ($sec_timestamp eq $base_name_sec_timestamp) {
- ## The Base Name to Float Data Type
- my $base_name_sec_usec_timestamp = $base_name;
- ### A Overtaken Processing
- if ($sec_usec_timestamp lt $base_name_sec_usec_timestamp) {
- next;
- }
- ### Access Counts Increment
- $access_counts++;
- ### Too Many Requests
- if ($access_counts > $access_limit) {
- die("Too Many Requests`429`");
- }
- next;
- }
- ## Past Access Files Garbage Collection
- if ($sec_timestamp gt $base_name_sec_timestamp) {
- unlink($file_path);
- }
- }
- };
- if ($@) {
- # Error Elements Extract
- my @e = split(/`/, $@);
- # Exception to HTTP Status Code
- my $http_status = $e[0];
- my $http_code = '0';
- if (defined($e[1])) {
- $http_code = $e[1];
- }
- # 4xx
- if ($http_code ge '400' && $http_code le '499') {
- # logging
- ## snip...
- # 5xx
- } elsif ($http_code ge '500') {
- # logging
- ## snip...
- ## The Exception Message to HTTP Status
- $http_status = 'foo';
- # Others
- } else {
- # logging
- ## snip...
- $http_status = 'Internal Server Error';
- $http_code = '500';
- }
- # Response Headers Feed
- print("Status: " . $http_code . " " . $http_status . "\n");
- print('Content-Type: application/json; charset=utf-8' . "\n\n");
- # A Response Body Build
- my %response_bodies;
- $response_bodies{'message'} = $http_status;
- $a = \%response_bodies;
- my $response_body = encode_json($a);
- # The Response Body Feed
- print($response_body);
- }
- }
- # Execution
- &limitSecondsAccess();
10. Conclusion
In this post, I exemplified a “One-Second Access Limiter” solution using the interpreted languages Python, PHP, Ruby, and Perl. From the nature of access control over a one-second window, it should be clear that low load, high-speed processing, and data consistency are required; the important points are as described in the architecture section. Here I showed a solution based on file names and file counts on the file system. In a clustered environment, however, this architecture becomes unsuitable if the chosen data synchronization solution is slow. In that case, an asynchronous data architecture may be the better option, with control performed per node; the load-balancing thresholds then become more important, and some precision of the access limit and consistency of the results must be given up. Unless strict precision and consistency are required, that is also a viable option.