Hi all, in this post I will be discussing two web server packages. One is Apache, which has long proven its ability to do many things in a single package with the help of modules; millions of websites on the internet are powered by Apache. The other is a relatively new web server called Nginx, written by the Russian programmer Igor Sysoev.

Many people in the industry will be aware of the “speed” for which Nginx is famous. But there are other important differences between Apache’s and Nginx’s working models, and we will be discussing those differences in detail.


Let’s first discuss the two working models used by the Apache web server; we will get to Nginx later. Most people who work with Apache will know these two models, through which Apache serves its requests. They are:

1. Apache MPM Prefork
2. Apache MPM Worker

Note: there are many different MPM modules available for different platforms and functionalities, but we will be discussing only the two above here.
Let’s have a look at the main differences between MPM Prefork and MPM Worker. MPM stands for “Multi Processing Module”.

MPM Prefork :

Most of the functionality in Apache comes from modules; even MPM Prefork comes as a module and can be enabled or disabled. The prefork model is non-threaded, which is a good property, as it keeps each and every connection isolated from the others.
So if one connection is having some issue, the others are not at all affected. By default, if no MPM module is specified, Apache uses MPM Prefork. But this model is very resource intensive.

Why is the prefork model resource intensive?

Because in this model a single parent process creates many child processes, which wait for requests and serve them as they arrive. This means each and every request is served by a separate process; in other words, it is “process per request”. Apache also maintains several idle processes before the requests arrive, so that requests can be served quickly without waiting for a new process to be forked.

But each and every process utilizes system resources like RAM and CPU, and roughly the same amount of RAM is consumed by every process.

[Image: the prefork model]

If you get a large number of requests at one time, Apache will spawn a large number of child processes, which results in heavy resource utilization, as each process consumes a certain amount of system memory and CPU.
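These process counts are tuned with a handful of directives in the MPM section of httpd.conf. Here is a minimal sketch with illustrative values only, not recommendations (note that on Apache 2.4 the MaxClients directive was renamed MaxRequestWorkers):

```apache
<IfModule prefork.c>
    StartServers          5     # child processes forked at startup
    MinSpareServers       5     # keep at least this many idle children waiting
    MaxSpareServers      10     # kill idle children above this count
    MaxClients          150     # hard cap on simultaneous child processes
    MaxRequestsPerChild   0     # 0 = children are never recycled
</IfModule>
```

With these values, 150 simultaneous requests mean 150 full processes, each with its own copy of every loaded module in memory, which is exactly why this model gets expensive under load.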

MPM Worker :

This model of Apache can serve a large number of requests with fewer system resources than the prefork model, because here a limited number of processes serve many requests.
This is the multi-threaded architecture of Apache: it uses threads rather than processes to serve requests. Now, what is a thread?
In operating systems, a thread is a small unit of execution inside a process which does some job and exits. A thread is sometimes called a process inside a process.

[Image: the worker model]

In this model, too, there is a single parent process which spawns some child processes. But there is no “process per request”; instead it is “thread per request”. Each child process contains a certain number of threads: some “server threads” busy serving requests and some “idle threads”. The idle threads wait for new requests, so no time is wasted creating threads when the requests arrive.
There is a directive in the Apache config file (/etc/httpd/conf/httpd.conf) called “StartServers” which sets how many child processes are created when Apache starts.
Each child process handles requests with a fixed number of threads inside it, specified by the “ThreadsPerChild” directive in the same config file.
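For comparison, here is a worker sketch, again with illustrative values only:

```apache
<IfModule worker.c>
    StartServers          2     # child processes forked at startup
    MaxClients          150     # cap on simultaneously served requests
    MinSpareThreads      25     # keep at least this many idle threads
    MaxSpareThreads      75     # kill idle threads above this count
    ThreadsPerChild      25     # fixed number of threads in each child
</IfModule>
```

Here 150 concurrent requests need only 150 / 25 = 6 child processes, instead of 150 separate processes under prefork.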

Note: some PHP module issues have been reported when working with the Apache MPM Worker model, because not all PHP extensions are thread-safe.

Now let’s discuss Nginx.


Nginx was created to solve the C10K problem that process- and thread-based servers like Apache struggle with.

C10K: the name given to the problem of optimizing web server software to handle a large number of requests at one time, on the order of 10,000 concurrent connections, hence the name.
Nginx is known for its speed in serving static pages, much faster than Apache, while keeping machine resource usage very low.
Fundamentally, Apache and Nginx differ a lot.
Apache works in a multi-process/multi-threaded architecture, while Nginx uses an event-driven architecture with a small number of single-threaded worker processes. [I will come back to “event driven” later.] The main difference this event-driven architecture makes is that a very small number of Nginx worker processes can serve a very, very large number of requests.
Sometimes Nginx is also deployed as a front-end server, serving static content to clients quickly, with Apache behind it.
Each worker process handles requests with the help of the event-driven model. Nginx does this using kernel event-notification facilities such as epoll on Linux (falling back to select/poll on other platforms). Even when run with its threaded model, Apache utilizes considerably more system resources than Nginx.
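As a sketch of that front-end deployment (the domain, paths, and backend port here are all hypothetical), an Nginx server block can serve static files itself and proxy everything else to Apache:

```nginx
server {
    listen 80;
    server_name example.com;                # hypothetical domain

    # Nginx answers static requests directly from disk.
    location /static/ {
        root /var/www;                      # hypothetical document root
    }

    # Everything else is handed off to Apache listening on port 8080.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```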

Why does Nginx run more efficiently than Apache?

In Apache, when a request is served, either a thread or a process is dedicated to it. Now if that request needs some data from the database, files from disk, etc., the process waits for them.
So some processes in Apache just sit and wait for certain tasks to complete (eating system resources).
Suppose a client with a slow internet connection connects to a web server running Apache. The server retrieves the data from disk to serve the client, and even while sending the response, that process stays occupied until the client has received it (wasting that much process resource).
Nginx avoids dedicating a process or thread to each connection. Within each worker, all requests are handled by a single thread, and this single thread handles everything with the help of something called an event loop. The thread wakes up only when a new connection arrives or some I/O becomes ready, so no resources are wasted on waiting.

Step 1: Get the request.
Step 2: The request triggers events inside the worker process.
Step 3: The process handles these events and returns the output (while simultaneously handling events for other requests).
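The steps above can be sketched with Python’s standard selectors module, which wraps epoll on Linux (and kqueue, etc. elsewhere). This is only a toy illustration of the readiness-notification idea, not Nginx’s actual implementation:

```python
import selectors
import socket

sel = selectors.DefaultSelector()            # epoll under the hood on Linux

# A connected socket pair stands in for a client talking to the server.
server_side, client_side = socket.socketpair()
server_side.setblocking(False)

# Register interest: a readable socket will show up as an "event".
sel.register(server_side, selectors.EVENT_READ)

client_side.send(b"GET / HTTP/1.0\r\n")      # Step 1: a request arrives

# Steps 2 and 3: one turn of the event loop. The single thread sleeps
# until some registered socket is ready, then handles whatever is ready.
served = []
for key, _mask in sel.select(timeout=1):
    data = key.fileobj.recv(1024)            # guaranteed not to block here
    served.append(data.decode().split("\r\n")[0])

print(served)                                # → ['GET / HTTP/1.0']

sel.unregister(server_side)
server_side.close()
client_side.close()
sel.close()
```

With many real sockets registered, the same single thread multiplexes all of them; nothing sits blocked waiting on one slow client.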

Nginx also supports most of the major functionality that Apache supports, such as the following:

  • Virtual Hosts
  • Reverse Proxy
  • Load Balancing
  • Compression
  • URL rewrite


OK folks, that’s it for this post. Have a nice day guys… Stay tuned!

Don’t forget to like & share this post on social networks!!! I will keep on updating this blog. Please do follow!!!


Amazon Web Services vs. Microsoft Azure vs. Google Compute Platform

The rivalry is warming up in the cloud space as vendors offer innovative features and frequently reduce prices. In this blog, we will try to highlight the competition among the three titans of the cloud: Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure. Which of these three will thrive and win the battle, time will tell. We also have IBM SoftLayer and Alibaba’s AliCloud joining the bandwagon.

Although AWS (Amazon Web Services) has a noteworthy head start, Microsoft and Google are not out of the race. Google is building 12 new cloud data centers over the next 18 months. Both of these cloud vendors have the money, power, marketing bling, and technology to draw enterprise and individual customers.

To help with decision making, this article provides a brief breakdown of these three market giants. It will also try to explain the advantages of adopting a multi-cloud strategy.

[Figure: Gartner Magic Quadrant for Infrastructure as a Service, 2016]

Source: Gartner (August 2016)

Amazon Web Services

AWS has well-organized data centers distributed across the globe. Availability Zones are placed at quite a distance from each other, so that the failure of one Availability Zone does not impact the others.


Microsoft Azure

Microsoft has been quickly building more and more data centers all over the world to catch up with Amazon’s vast geographical presence. From six regions in 2011, they currently have 22 regions, each of which contains one or more data centers, with five additional regions planned to open in 2016. While Amazon was the first to open a region in China, Microsoft was the first to open a region in India, at the end of 2015.


Google Cloud Platform

Google has the smallest geographical footprint of the three cloud providers. It makes up for this limitation with its worldwide network infrastructure, providing low-latency, high-speed connectivity between its data centers, both at a regional and an interregional level.


Compute

Amazon’s Elastic Compute Cloud (EC2) is its core compute service, enabling users to create virtual machines from pre-configured or custom AMIs. You can choose the power, size, number of VMs, and memory capacity, and select the availability zone from which to launch. EC2 also provides auto-scaling and Elastic Load Balancing (ELB): ELB distributes incoming traffic across instances for improved performance, and auto-scaling lets users automatically scale EC2 capacity up or down.

In 2012, Google launched its cloud compute service, known as GCE (Google Compute Engine). GCE allows its users to start VMs, much like AWS, in regions and availability groups. However, Google Compute Engine was not generally available until 2013. Subsequently, Google added improvements such as broader operating system support, load balancing, faster persistent disks, live migration of virtual machines, and instances with more cores.

Also in 2012, Microsoft launched its cloud compute service, but it was not generally available until May 2013. Users select a Virtual Hard Disk (VHD), which is similar to Amazon’s AMI, for VM creation. A Virtual Hard Disk can be predefined by third parties, by Microsoft, or by the user. With every virtual machine, you specify the amount of memory and the number of cores.


Storage

Storage is one of the primary elements of IT. This section compares the storage services of the three large cloud providers across the two primary storage types: block storage and object storage.


Amazon’s block storage service is known as EBS (Elastic Block Store), and it supports three different types of persistent disks, viz. magnetic, SSD, and SSD with provisioned Input/Output Operations Per Second (IOPS). Volume sizes range up to a maximum of 1TB for magnetic disks and 16TB for SSD.

Amazon’s world-leading object storage service, known as S3 (Simple Storage Service), has four different SLAs, viz. standard, reduced redundancy, standard – infrequent access, and Glacier. Standard data is stored redundantly across multiple facilities within a region; copies in other regions must be set up explicitly via replication.


Microsoft refers to its storage services as Blobs. Disks and Page Blobs are its block storage service, sourced as Premium or Standard, with volume sizes up to 1TB. Block Blobs is its object storage service. Like Amazon, it offers four different SLAs, viz. LRS (locally redundant storage), where redundant data copies are kept inside the same data center; ZRS (zone redundant storage), where copies of redundant data are maintained in different data centers in the same region; GRS (geographically redundant storage), which performs LRS on two separate data centers for a higher level of availability and durability; and RA-GRS (read-access geo-redundant storage), which adds read access to the secondary copy.


In Google’s cloud, storage is structured a bit differently compared with its two competitors. Block storage is not a separate product category but an add-on to instances within Google Compute Engine (GCE). Google provides two choices, magnetic or SSD volumes, though the IOPS count is fixed. Ephemeral disk is fully configurable and is part of the storage offering. The object storage service, known as Google Cloud Storage, is divided into three classes, viz. Standard; Durable Reduced Availability, for less critical data; and Nearline, for archives.


Networking

Amazon’s VPCs (Virtual Private Clouds) and Azure’s VNETs (Virtual Networks) enable their users to group virtual machines into isolated networks in the cloud. Using VNETs and VPCs, users can define a network topology and create route tables, subnets, network gateways, and private IP address ranges. There is little to choose between them here, as both offer ways to extend your on-premise data center into the public cloud. In contrast, every GCE instance belongs to a single network, which defines the gateway address and the address range for all instances attached to it. You can apply firewall rules to an instance, and it can receive a public IP address.

Billing Structure

Amazon Web Services 

AWS organizes resources under accounts. Each account comprises a single billing unit within which cloud resources are provisioned. Organizations with numerous AWS accounts, though, may wish to receive one single combined bill instead of several separate bills. AWS permits this through consolidated billing: one of the accounts is designated the paying account and the other accounts are linked to it (henceforth, linked accounts). The bills are then combined to cover the paying account and all linked accounts, together referred to as the consolidated billing account family.


Microsoft Azure

Microsoft employs a tiered approach to account management. The subscription is the lowest level in the hierarchy, and it is the entity that actually consumes and provisions resources. An account manages several subscriptions. This might sound like the AWS account structure, but Microsoft’s Azure accounts are management units: they do not consume resources themselves. For organizations without a Microsoft Enterprise Agreement, this is where the hierarchy ends. Those with Enterprise Agreements can register them in Azure and manage accounts under them, with departmental administration and optional cost-center hierarchies.


Google Cloud Platform

Google uses a flat structure for its billing. Resources are clustered under groups known as Projects. There is no entity higher than a project; nevertheless, several projects can be gathered under a consolidated billing account. This billing account is similar to Azure’s accounts in the sense that it is not a consuming entity and cannot provision services.


Pricing and Discounts

Cloud vendors offer different pricing and discount models for their services. Most of the complex pricing and discount models revolve around compute services, whereas bulk discounts are typically used for the remaining services. This is primarily for two reasons. First, the vendors are in a very competitive market and want to lock in their users with long-term commitments. Second, they have an interest in maximizing the utilization of their infrastructure, where every unsold VM-hour represents a real loss.

Amazon Web Services

AWS has the most diversified and complex pricing models for its Elastic Compute Cloud (EC2) services:

On-demand: customers pay for what they use, without any upfront cost.

Reserved Instances: customers reserve instances for one or three years, with an upfront cost based on the chosen payment option. Payment options include:

  • All-upfront: the customer pays the total commitment upfront and receives the highest discount rate.
  • Partial-upfront: the customer pays 50-70 percent of the commitment upfront, and the remainder is paid in monthly installments. The customer receives a somewhat lower discount compared to all-upfront.
  • No-upfront: the customer pays nothing upfront, and the sum is paid in monthly installments over the term of the reservation. A considerably lower discount is received under this payment option.
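To make the trade-off concrete, here is a toy comparison with entirely made-up numbers; real EC2 rates and discount percentages vary by instance type, region, and term:

```python
HOURS_PER_YEAR = 24 * 365                 # 8760 hours

on_demand_rate = 0.10                     # hypothetical $/hour

# Hypothetical effective discounts for a 1-year reservation.
ri_discounts = {
    "all-upfront":     0.40,              # deepest discount
    "partial-upfront": 0.38,
    "no-upfront":      0.32,              # shallowest discount
}

on_demand_total = on_demand_rate * HOURS_PER_YEAR
print(f"on-demand:       ${on_demand_total:.2f}/year")

for option, discount in ri_discounts.items():
    total = on_demand_total * (1 - discount)
    print(f"{option}: ${total:.2f}/year ({discount:.0%} off)")
```

Whatever the actual percentages, the pattern holds: the more you commit upfront, the larger the discount on the same usage.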


Microsoft Azure

Microsoft bills its customers for on-demand usage by the minute, rounding up. Azure also provides discounts for financial commitments: pre-paid subscriptions give a 5 percent discount on the bill, and Microsoft Enterprise Agreements can apply higher discounts to an upfront financial commitment by the client.


Google Cloud Platform

GCP bills for instances by rounding up the number of utilized minutes, with 10 minutes as a minimum. It recently introduced sustained-use pricing for compute services, a simpler and more flexible approach: the on-demand baseline hourly rate is automatically discounted as a particular instance is used for a larger percentage of the month.
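As an illustration, the sustained-use scheme as Google announced it in 2014 (details may have changed since) billed each quarter of the month at a progressively lower fraction of the base rate: 100%, 80%, 60%, and 40%, for a maximum effective discount of 30%. A quick sketch with a hypothetical $100/month base rate:

```python
def sustained_use_cost(base_monthly_rate, fraction_of_month_used):
    """Cost under quartile-tiered sustained-use pricing."""
    tiers = [1.00, 0.80, 0.60, 0.40]       # incremental rate per quartile
    cost = 0.0
    remaining = fraction_of_month_used
    for rate in tiers:
        used_in_tier = min(remaining, 0.25)
        cost += base_monthly_rate * used_in_tier * rate
        remaining -= used_in_tier
        if remaining <= 0:
            break
    return round(cost, 2)

print(sustained_use_cost(100.0, 1.00))     # → 70.0 (a 30% effective discount)
print(sustained_use_cost(100.0, 0.50))     # → 45.0 (a 10% effective discount)
print(sustained_use_cost(100.0, 0.25))     # → 25.0 (no discount yet)
```

The discount is automatic: the customer commits to nothing, and the cheaper tiers simply kick in the longer the instance runs within the month.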

The Bottom Line

The public cloud war slogs on. With cloud computing still in an early maturing stage, it is tough to foresee exactly how things might change in the future. However, it is reasonable to expect that prices will continue to drop and that attractive, innovative features will continue to appear. Cloud computing is here to stay, and with the growing maturity of private and public cloud platforms and massive IaaS adoption, enterprises now understand that depending on a single cloud vendor is not a long-term option. Issues such as vendor lock-in, higher availability, and leveraging competitive pricing may push enterprises to look for an optimal mix of clouds for their requirements, rather than a sole provider.

OK, folks.. !!!
