Microsoft Azure Online Resources

Azure

Here I will list useful links related to Microsoft Azure: management portals, documentation, training, and tools.

Management portals:

Azure Portal (the new/modern "Ibiza" portal)

Azure Classic Portal (legacy portal, but still needed in some cases)

Azure subscription management (check your subscription and billing)

Social:

Microsoft Tech Community Azure (official forum)

Microsoft Azure on Facebook

Microsoft Azure on Twitter

Support:

Azure Support on Twitter (free support if you don’t have a support plan)

Azure status (health status of all Azure services across all regions)

Tooling:

Azure Resource Explorer (explore Azure resources as “code”)

armviz.io (graphic representation for ARM templates)

Azure Quickstart Templates (ARM templates on GitHub)

PowerShell Gallery Azure RM module

Visual Studio Code (free code editor)

Visual Studio IDE (free community edition)

Learning and training:

Azure Documentation (official documentation portal for all Azure products)

Microsoft Virtual Academy Azure Courses

Microsoft Learning OpenEdx (new learning platform for Azure)

Microsoft Mechanics Azure playlist (YouTube)

Microsoft Azure YouTube Channel

Microsoft Ignite YouTube Channel (not only Azure)

Channel 9 Azure Friday (also published on YouTube)

Offers:

Azure free trial (currently $200 for one month)

Microsoft IT Pro Cloud Essentials (training and offers for IT Pro)

Visual Studio Dev Essentials (training and offers for Developers)

I will keep posting new articles about Microsoft Azure so that you can get more insight into the platform.


 


How to record your Putty Terminal session in RHEL?

In this post, we’re going to look at how to record your PuTTY terminal session on RHEL machines.

It’s quite simple.

Open a new PuTTY terminal and run the command below to start recording:

# script -t 2> timing.log -a output.session

-t – dumps timing data to STDERR

2> timing.log – redirects that timing data (STDERR) to the timing.log file

-a output.session – appends the terminal output to the output.session file

To stop the recording, press Ctrl+D.

To replay the recorded session, run the command below:

# scriptreplay timing.log output.session

This is an easy way to save time when reviewing the tasks that were performed on the server.

Hope you find this feature useful. If you have any queries, leave a comment below.


Storage Terminology Concepts in Linux

Introduction

Linux has robust systems and tooling to manage hardware devices, including storage drives. In this post I’ll cover, at a high level, how Linux represents these devices and how raw storage is made into usable space on the server.

What is Block Storage?

Block storage is another name for what the Linux kernel calls a block device. A block device is a piece of hardware that can be used to store data, like a traditional spinning hard disk drive (HDD), solid state drive (SSD), flash memory stick, etc. It is called a block device because the kernel interfaces with the hardware by referencing fixed-size blocks, or chunks of space.

So block storage is what you think of as regular disk storage on a computer. Once it is set up, it acts as an extension of the current filesystem tree, and you can write information to or read it from the drive seamlessly.

What are Disk Partitions?

Disk partitions are a way of breaking up a storage drive into smaller usable units. A partition is a section of a storage drive that can be treated in much the same way as a drive itself.

Partitioning allows you to segment the available space and use each partition for a different purpose. This gives the user a lot of flexibility, allowing them to segment their installation for easy upgrading, multiple operating systems, swap space, or specialized filesystems.

While disks can be formatted and used without partitioning, some operating systems expect to find a partition table, even if there is only a single partition written to the disk. It is generally recommended to partition new drives for greater flexibility down the road.

MBR vs GPT

When partitioning a disk, it is important to know what partitioning format will be used. This generally comes down to a choice between MBR (Master Boot Record) and GPT (GUID Partition Table).

MBR is the traditional partitioning system, which has been in use for over 30 years. Because of its age, it has some serious limitations. For instance, it cannot be used for disks over 2TB in size, and can only have a maximum of four primary partitions. Because of this, the fourth partition is typically set up as an “extended partition”, in which “logical partitions” can be created. This allows you to subdivide the last partition to effectively allow additional partitions.

GPT is a more modern partitioning scheme that attempts to resolve some of the issues inherent with MBR. Systems running GPT can have many more partitions per disk. This is usually only limited by the restrictions imposed by the operating system itself. Additionally, the disk size limitation does not exist with GPT and the partition table information is available in multiple locations to guard against corruption. GPT can also write a “protective MBR” which tells MBR-only tools that the disk is being used.

In most cases, GPT is the better choice unless your operating system or tooling prevents you from using it.
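If you want to check which scheme an existing disk uses, here is a minimal sketch (assuming parted is installed and you have root privileges):

# Print each disk and its partition table type (msdos = MBR, gpt = GPT)
sudo parted -l | grep -E 'Disk /|Partition Table'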

Formatting and Filesystems

While the Linux kernel can recognize a raw disk, the drive cannot be used as-is. To use it, it must be formatted. Formatting is the process of writing a filesystem to the disk and preparing it for file operations. A filesystem is the system that structures data and controls how information is written to and retrieved from the underlying disk. Without a filesystem, you could not use the storage device for any file-related operations.

There are many different filesystem formats, each with trade-offs across a number of different dimensions, including operating system support. On a basic level, they all present the user with a similar representation of the disk, but the features that each supports and the mechanisms used to enable user and maintenance operations can be very different.

Some of the more popular filesystems for Linux are:

  • Ext4: The most popular default filesystem is Ext4, or the fourth version of the extended filesystem. The Ext4 filesystem is journaled, backwards compatible with legacy systems, incredibly stable, and has mature support and tooling. It is a good choice if you have no specialized needs.
  • XFS: XFS specializes in performance and large data files. It formats quickly and has good throughput characteristics when handling large files and when working with large disks. It also has live snapshotting features. XFS uses metadata journaling as opposed to journaling both the metadata and data. This leads to fast performance, but can potentially lead to data corruption in the event of an abrupt power loss.
  • Btrfs: Btrfs is a modern, feature-rich, copy-on-write filesystem. This architecture allows for some volume management functionality to be integrated within the filesystem layer, including snapshots, cloning, volumes, etc. Btrfs still runs into some problems when dealing with full disks. There is some debate over its readiness for production workloads, and many system administrators are waiting for the filesystem to reach greater maturity.
  • ZFS: ZFS is a copy-on-write filesystem and volume manager with a robust and mature feature set. It has great data integrity features, can handle large filesystem sizes, has typical volume features like snapshotting and cloning, and can organize volumes into RAID and RAID-like arrays for redundancy and performance purposes. In terms of use on Linux, ZFS has a controversial history due to licensing concerns. Ubuntu is now shipping a binary kernel module for it however, and Debian includes the source code in its repositories. Support across other distributions is yet to be determined.
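As a rough sketch of what formatting looks like in practice, assuming /dev/sdb1 is a hypothetical empty partition you are happy to erase:

# Create an Ext4 filesystem labeled "data" on the empty partition
sudo mkfs.ext4 -L data /dev/sdb1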

How Does Linux Manage Storage Devices?

Device Files in /dev

In Linux, almost everything is represented by a file. This includes hardware like storage drives, which are represented on the system as files in the /dev directory. Typically, files representing storage devices start with sd or hd followed by a letter. For instance, the first drive on a server is usually something like /dev/sda.

Partitions on these drives also have files within /dev, represented by appending the partition number to the end of the drive name. For example, the first partition on the drive from the previous example would be /dev/sda1.

While the /dev/sd* and /dev/hd* device files represent the traditional way to refer to drives and partitions, there is a significant disadvantage to using these values by themselves. The Linux kernel decides which device gets which name on each boot, which can lead to confusing scenarios where a device changes its device node.

To work around this issue, the /dev/disk directory contains subdirectories corresponding with different, more persistent ways to identify disks and partitions on the system. These contain symbolic links that are created at boot back to the correct /dev/sd* or /dev/hd* files. The links are named according to the directory’s identifying trait (for example, by partition label in the /dev/disk/by-partlabel directory). These links will always point to the correct devices, so they can be used as static identifiers for storage spaces.

Some or all of the following subdirectories may exist under /dev/disk:

  • by-label: Most filesystems have a labeling mechanism that allows the assignment of arbitrary user-specified names to a disk or partition. This directory consists of links that are named after these user-supplied labels.
  • by-uuid: UUIDs, or universally unique identifiers, are long, unique strings of letters and numbers that can be used as an ID for a storage resource. These are generally not very human-readable, but are pretty much guaranteed to be unique, even across systems. As such, it can be a good idea to use UUIDs to reference storage that may migrate between systems, since naming collisions are less likely.
  • by-partlabel and by-partuuid: GPT tables offer their own set of labels and UUIDs, which can also be used for identification. This functions in much the same way as the previous two directories, but uses GPT-specific identifiers.
  • by-id: This directory contains links generated from the hardware’s own serial numbers and the interface it is attached to. This is not entirely persistent, because the way the device is connected to the system may change its by-id name.
  • by-path: Like by-id, this directory relies on the storage device’s connection to the system itself. The links here are constructed using the system’s interpretation of the hardware used to access the device. This has the same drawback as by-id: connecting a device to a different port can alter the value.

Usually, by-label or by-uuid are the best options for persistent identification of specific devices.
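A quick way to see these identifiers on your own system (the device names, UUIDs and labels shown will of course differ):

# Show each block device with its size, UUID and filesystem label
lsblk -o NAME,SIZE,UUID,LABEL

# List the persistent by-uuid names and the device nodes they currently point to
ls -l /dev/disk/by-uuid/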

Mounting Block Devices

The device files within /dev are used to communicate with the kernel driver for the device in question. However, a more helpful abstraction is needed in order to treat the device as a segment of available space.

In Linux and other Unix-like operating systems, the entire system, regardless of how many physical devices are involved, is represented by a single unified file tree. As such, when a filesystem on a drive or partition is to be used, it must be hooked into the existing tree. Mounting is the process of attaching a formatted partition or drive to a directory within the Linux filesystem. The drive’s contents can then be accessed from that directory.

Drives are almost always mounted on dedicated empty directories (mounting on a non-empty directory means that the directory’s usual contents will be inaccessible until the drive is unmounted). There are many different mounting options that can be set to alter the behavior of the mounted device. For example, the drive can be mounted in read-only mode to ensure that its contents won’t be altered.

The Filesystem Hierarchy Standard recommends using /mnt or a subdirectory under it for temporarily mounted filesystems. If this matches your use case, this is probably the best place to mount it. It makes no recommendations on where to mount more permanent storage, so you can choose whichever scheme you’d like. In many cases, /mnt or /mnt subdirectories are used for more permanent storage as well.
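A minimal sketch of mounting a formatted partition, assuming a hypothetical /dev/sdb1 and a /mnt/data mount point:

# Create a dedicated, empty mount point
sudo mkdir -p /mnt/data

# Attach the filesystem read-only so its contents cannot be altered
sudo mount -o ro /dev/sdb1 /mnt/data

# Detach it again when finished
sudo umount /mnt/data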

Making Mounts Permanent with /etc/fstab

Linux systems look at a file called /etc/fstab (filesystem table) to determine which filesystems to mount during the boot process. Filesystems that do not have an entry in this file will not be automatically mounted (the exception being those defined by systemd .mount unit files, although these are not common at the moment).

The /etc/fstab file is fairly simple. Each line represents a different filesystem that should be mounted. This line specifies the block device, the mount point to attach it to, the format of the drive, and the mount options, as well as a few other pieces of information.
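For example, a single /etc/fstab line for the hypothetical partition mounted above might look something like this (the UUID placeholder would be replaced with the real value reported by lsblk or blkid):

# <device>                  <mount point>  <type>  <options>  <dump>  <pass>
UUID=your-filesystem-uuid   /mnt/data      ext4    defaults   0       2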

More Complex Storage Management

While most simple use cases do not need additional management structures, more performance, redundancy, or flexibility can be obtained by more complex management paradigms.

What is RAID?

RAID stands for redundant array of independent disks. RAID is a storage management and virtualization technology that allows you to group drives together and manage them as a single unit with additional capabilities.

The characteristics of a RAID array depend on its RAID level, which basically defines how the disks in the array relate to each other. The level chosen has an impact on the performance and redundancy of the set. Some of the more common levels are:

  • RAID 0: This level indicates drive striping. This means that as data is written to the array, it is split up and distributed among the disks in the set. This offers a performance boost, as multiple disks can be written to or read from simultaneously. The downside is that a single drive failure loses all of the data in the entire array, since no single disk contains enough information about the contents to rebuild.
  • RAID 1: RAID 1 is basically drive mirroring. Anything written to a RAID 1 array is written to multiple disks. The main advantage is data redundancy, which allows the data to survive the loss of a drive on either side of the mirror. Because multiple drives contain the same data, usable capacity is reduced by half.
  • RAID 5: RAID 5 stripes data across multiple drives, similar to RAID 0. However, this level also implements distributed parity across the drives. This basically means that if a drive fails, the remaining drives can rebuild the array using the parity information shared between them. The parity information is enough to rebuild any one disk, meaning the array can survive the loss of any one disk. The parity information reduces the available space in the array by the capacity of one disk.
  • RAID 6: RAID 6 has the same properties as RAID 5, but provides double parity. This means that RAID 6 arrays can withstand the loss of any two drives. The capacity of the array is again reduced by the parity overhead, meaning that the usable capacity is reduced by two disks’ worth of space.
  • RAID 10: RAID 10 is a combination of levels 1 and 0. First, two sets of mirrored arrays are made. Then, data is striped across them. This creates an array that has some redundancy characteristics while providing good performance. This requires quite a few drives however, and the total capacity is half of the combined disk space.
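On Linux, software RAID is usually managed with mdadm. A minimal sketch of creating a RAID 1 mirror from two hypothetical spare disks:

# Combine two whole disks into a mirrored array called /dev/md0
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Check the state and resync progress of the array
cat /proc/mdstat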

What is LVM?

LVM, or Logical Volume Management, is a system that abstracts the physical characteristics of the underlying storage devices in order to provide increased flexibility and power. LVM allows you to create groups of physical devices and manage them as a single block of space. You can then segment the space as needed into logical volumes, which function as partitions.

LVM is implemented on top of regular partitions, and works around many of the limitations inherent in classical partitions. For instance, using LVM volumes, you can easily expand partitions, create partitions that span multiple drives, take live snapshots of partitions, and move volumes to different physical disks. LVM can be used in conjunction with RAID to provide flexible management with traditional RAID performance characteristics.
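A minimal sketch of the usual LVM workflow, assuming two spare disks and hypothetical volume names:

# Mark the disks as LVM physical volumes
sudo pvcreate /dev/sdb /dev/sdc

# Pool them into a single volume group
sudo vgcreate vg_data /dev/sdb /dev/sdc

# Carve out a 100G logical volume that can later be resized or snapshotted
sudo lvcreate -L 100G -n lv_storage vg_data

# Format it and use it like any other block device
sudo mkfs.ext4 /dev/vg_data/lv_storage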


wp-config.php File – An In-Depth View on How to Configure WordPress

One of the most important files of a WordPress installation is the configuration file. It resides in the root directory and contains constant definitions and PHP instructions that make WordPress work the way you want.
The wp-config.php file stores data like database connection details, the table prefix, paths to specific directories, and a lot of settings related to the specific features we’re going to dive into in this post.

The basic wp-config.php file

When you first install WordPress, you’re asked to input required information like database details and the table prefix. Sometimes your host will set up WordPress for you, and you won’t be required to run the setup manually. But when you’re running the 5-minute install yourself, you will be asked to input some of the most relevant data stored in wp-config.

When you run the setup, you will be required to input data that will be stored in the wp-config.php file.

 

Here is a basic wp-config.php file:

// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'database_name_here');

/** MySQL database username */
define('DB_USER', 'username_here');

/** MySQL database password */
define('DB_PASSWORD', 'password_here');

/** MySQL hostname */
define('DB_HOST', 'localhost');

/** Database Charset to use in creating database tables. */
define('DB_CHARSET', 'utf8');

/** The Database Collate type. Don't change this if in doubt. */
define('DB_COLLATE', '');

define('AUTH_KEY',	'put your unique phrase here');
define('SECURE_AUTH_KEY',	'put your unique phrase here');
define('LOGGED_IN_KEY',	'put your unique phrase here');
define('NONCE_KEY',	'put your unique phrase here');
define('AUTH_SALT',	'put your unique phrase here');
define('SECURE_AUTH_SALT',	'put your unique phrase here');
define('LOGGED_IN_SALT',	'put your unique phrase here');
define('NONCE_SALT',	'put your unique phrase here');

$table_prefix = 'wp_';

/* That's all, stop editing! Happy blogging. */

Usually, this file is automatically generated when you run the set-up, but occasionally WordPress does not have privileges to write in the installation folder. In this situation, you should create an empty wp-config.php file, copy and paste content from wp-config-sample.php, and set the proper values to all defined constants. When you’re done, upload your file into the root folder and run WordPress.

Note: constant definitions and PHP instructions come in a specific order that we should never change, and we should never add content below the following comment line:

/* That's all, stop editing! Happy blogging. */

First come the definitions of the database constants, whose values you should have received from your host:

  • DB_NAME
  • DB_USER
  • DB_PASSWORD
  • DB_HOST
  • DB_CHARSET
  • DB_COLLATE

Following the database details, eight security keys and salts make the site more secure against hackers. When you run the installation, WordPress automatically generates them, but you can change them at any time to any arbitrary string. For better security, consider using the online generator.

The $table_prefix variable stores the prefix of all WordPress tables. Unfortunately, everyone knows its default value, and this exposes the WordPress database to attack; it can be easily fixed by setting a custom value for $table_prefix when running the setup.
To change the table prefix on a working website, you have to run several queries against the database and then manually edit the wp-config.php file. If you don’t have access to the database, or you don’t have the knowledge required to build custom queries, you can install a plugin like Change Table Prefix, which will rename database tables and field names and update the config file with no risk.

Note: it’s good practice to back up the WordPress files and database even if you change the table prefix with a plugin.

So far the analysis has been limited to the basic configuration, but we have many more constants at our disposal that we can define to enable features, and to customize and secure the installation.

Beyond the basic configuration: editing the file structure

The WordPress file structure is well known to users and hackers alike. For this reason, you may consider changing the built-in structure by moving specific folders to arbitrary locations and setting the corresponding URLs and paths in the wp-config file.
First, we can move the content folder by defining two constants. The first one sets the full directory path:

define( 'WP_CONTENT_DIR', dirname(__FILE__) . '/site/wp-content' );

The second sets the new directory URL:

define( 'WP_CONTENT_URL', 'http://example.com/site/wp-content' );

We can move just the plugin folder by defining the following constants:

define( 'WP_PLUGIN_DIR', dirname(__FILE__) . '/wp-content/mydir/plugins' );
define( 'WP_PLUGIN_URL', 'http://example.com/wp-content/mydir/plugins' );

In the same way, we can move the uploads folder by setting the new directory path:

define( 'UPLOADS', 'wp-content/mydir/uploads' );

Note: All paths are relative to ABSPATH, and they should not contain a leading slash.

When done, arrange the folders and reload WordPress.

The image shows the built-in file structure compared to a customized structure

It’s not possible to move the /wp-content/themes folder from the wp-config file, but we can register a new theme directory in a plugin or in a theme’s functions file.

Features for developers: debug mode and saving queries

If you are a developer, you can force WordPress to show the errors and warnings that will help you with theme and plugin debugging. To enable debug mode, you just have to set the WP_DEBUG value to true, as shown below:

define( 'WP_DEBUG', true );

WP_DEBUG is set to false by default. If you need to disable debug mode, you can just remove the definition, or set the constant’s value to false.
When you’re working on a live site, you should disable debug mode. Errors and warnings should never be shown to site visitors, because they can provide valuable information to hackers. But what if you have to debug anyway?
In such situations, you can force WordPress to keep a log of errors and warnings in a debug.log file placed in the /wp-content folder. To enable this feature, copy and paste the following code into your wp-config.php file:

define( 'WP_DEBUG', true );
define( 'WP_DEBUG_LOG', true );
define( 'WP_DEBUG_DISPLAY', false );
@ini_set( 'display_errors', 0 );

To make this feature work, we first need to enable debug mode. Then, by setting WP_DEBUG_LOG to true, we force WordPress to store messages in the debug.log file, while setting WP_DEBUG_DISPLAY to false hides them from the screen. Finally, we set the PHP variable display_errors to 0 so that error messages won’t be printed to the screen. wp-config is never loaded from the cache, so it is a good place to override php.ini settings.

Note: This is a great feature you can take advantage of to log messages that WordPress would not print on the screen. As an example, when the publish_post action is triggered, WordPress loads a script that saves data and then redirects the user to the post editing page. In this situation, you can log messages, but not print them on the screen.

Another debugging constant determines the versions of scripts and styles to be loaded. Set SCRIPT_DEBUG to true if you want to load uncompressed versions:

define( 'SCRIPT_DEBUG', true );

If your theme or plugin shows data retrieved from the database, you may want to store query details for later review. The SAVEQUERIES constant forces WordPress to store query information in the $wpdb->queries array. These details can be printed by adding the following code to the footer template:

if ( current_user_can( 'administrator' ) ) {
    global $wpdb;
    echo '<pre>';
    print_r( $wpdb->queries );
    echo '</pre>';
}

Content-related settings

As your website grows, you may want to reduce the number of post revisions. By default, WordPress automatically saves your edits every 60 seconds (the autosave). We can change this interval by setting a custom value in wp-config as follows:

define( 'AUTOSAVE_INTERVAL', 160 );

Of course, you can decrease the auto-save interval, as well.
Each time we save our edits, WordPress adds a row to the posts table, so that we can restore previous revisions of posts and pages. This is a useful functionality that can turn into a problem when our site grows big. Fortunately, we can decrease the maximum number of post revisions to be stored, or disable the functionality altogether.
If you want to disable post revisions, define the following constant:

define( 'WP_POST_REVISIONS', false );

If instead you want to limit the maximum number of revisions, add the following line:

define( 'WP_POST_REVISIONS', 10 );

By default, WordPress stores trashed posts, pages, attachments and comments for 30 days, then deletes them permanently. We can change this value with the following constant:

define( 'EMPTY_TRASH_DAYS', 10 );

We can even disable the trash by setting its value to 0, but be aware that WordPress will no longer allow you to restore content.

Allowed memory size

Occasionally you may receive a message like the following:

Fatal error: Allowed memory size of xxx bytes exhausted …

The maximum memory size depends on the server configuration. If you don’t have access to the php.ini file, you can increase the memory limit just for WordPress by setting the WP_MEMORY_LIMIT constant in the wp-config file. By default, WordPress tries to allocate 40MB to PHP for single sites and 64MB for multisite installations. Of course, if the memory already allocated to PHP is greater than 40MB (or 64MB), WordPress will adopt the higher value.
That being said, you can set a custom value with the following line:

define( 'WP_MEMORY_LIMIT', '128M' );

If needed, you can set a maximum memory limit, as well, with the following statement:

define( 'WP_MAX_MEMORY_LIMIT', '256M' );

Automatic updates

Starting from version 3.7, WordPress supports automatic updates for security releases. This is an important feature that allows site admins to keep their website secure all the time.
You can disable all automatic updates by defining the following constant:

define( 'AUTOMATIC_UPDATER_DISABLED', true );

Maybe it’s not a good idea to disable security updates, but it’s your choice.
By default, automatic updates do not apply to major releases, but you can control core updates by defining WP_AUTO_UPDATE_CORE as follows:

# Disables all core updates:
define( 'WP_AUTO_UPDATE_CORE', false );

# Enables all core updates, including minor and major:
define( 'WP_AUTO_UPDATE_CORE', true );

The default value is 'minor':

define( 'WP_AUTO_UPDATE_CORE', 'minor' );

An additional constant disables auto-updates (and any other update or change to any file). If you set DISALLOW_FILE_MODS to true, all file edits will be disabled, including theme and plugin installations and updates. For this reason, its usage is generally not recommended.

Security settings

We can use the wp-config file to increase site security. In addition to the changes to the file structure we looked at above, we can lock down some features that could open unnecessary vulnerabilities. First of all, we can disable the file editor provided in the admin panel. The following constant will hide the Appearance Editor screen:

define( 'DISALLOW_FILE_EDIT', true );

Note: some plugins may not work properly if this constant is set to true.


Another security feature is administration over SSL. If you’ve purchased an SSL certificate and it’s properly configured, you can force WordPress to transfer data over SSL for every login and admin session. Use the following constant:

define( 'FORCE_SSL_ADMIN', true );

Check the Codex if you need more information about Administration over SSL.

Two other constants allow you to block external requests and to list the permitted hosts.

define( 'WP_HTTP_BLOCK_EXTERNAL', true );
define( 'WP_ACCESSIBLE_HOSTS', 'example.com,*.anotherexample.com' );

In this example, we have first blocked all requests from WordPress to external hosts, and then listed the allowed hosts, separated by commas (wildcards are permitted).

Other advanced settings

Setting WP_CACHE to true includes the wp-content/advanced-cache.php script. This constant has an effect only if you install a persistent caching plugin.

CUSTOM_USER_TABLE and CUSTOM_USER_META_TABLE are used to set custom user tables other than the default wp_users and wp_usermeta tables. These constants enable a useful feature that allows site users to access several websites with just one account. For this feature to work, all installations should share the same database.

Starting from version 2.9, WordPress supports automatic database optimizing. Thanks to this feature, if you set WP_ALLOW_REPAIR to true, WordPress can automatically repair a corrupted database.

WordPress creates a new set of images each time you edit an image. If you restore the original image, all of the generated sets remain on the server. You can override this behavior by setting IMAGE_EDIT_OVERWRITE to true, so that when you restore the original image, all of the edits are deleted from the server.

Lockdown wp-config.php

Now we know why wp-config.php is one of the most important WordPress files. So why don’t we hide it from hackers? First of all, we can move wp-config one level above the WordPress root folder (just one level). However, this technique is a bit controversial, so I would suggest adopting other solutions to protect the file. If your website is running on the Apache web server, you can add the following directives to the .htaccess file:

<files wp-config.php>
order allow,deny
deny from all
</files>

If the website is running on Nginx, you can add the following directive to the configuration file:

location ~* wp-config.php { deny all; }

Note: these instructions should be added only after the set-up is complete.

Conclusions

In this post, I’ve listed many of the WordPress constants that we can define in the wp-config file. Some of these constants are in common use, and their functions are easy to understand. Others enable advanced features that require a deep knowledge of WordPress and site administration.
I’ve covered the most common features, leaving aside some of the more advanced ones.


AWS Announces: Lightsail, a Simple VPS Solution

With the release of AWS Lightsail, Amazon Web Services steps into the market of easy-to-use and quick-to-provision VPS servers. Currently offering both Ubuntu 16.04 and Amazon Linux AMI images, as well as Bitnami-powered application stacks, Lightsail allows users to spin up a server without any of the additional (and sometimes excess) services normally included in AWS.

Instance Tiers and Costs

As of launch, AWS offers five instance plans:

  • $5/mo: 512MB RAM, 1 vCPU, 20GB SSD, 1TB data transfer
  • $10/mo: 1GB RAM, 1 vCPU, 30GB SSD, 2TB data transfer
  • $20/mo: 2GB RAM, 1 vCPU, 40GB SSD, 3TB data transfer
  • $40/mo: 4GB RAM, 2 vCPUs, 60GB SSD, 4TB data transfer
  • $80/mo: 8GB RAM, 2 vCPUs, 80GB SSD, 5TB data transfer

Currently, the $5 a month plan is offered free for up to one month, or 750 hours. Additional costs include snapshot (backup) storage and data transfer overage charges; Lightsail technical support starts at $30/month.

Getting Started

You can log on to Amazon Lightsail using your regular AWS account at https://amazonlightsail.com. From here, it’s as easy as selecting Create Instance to get started.


On the Create an instance screen, you are prompted to select an Apps + OS image, powered by Bitnami, or a simple Base OS instance. The deployment process is the same regardless of whether you are launching a base OS image or one containing an app.

From here, you can add a launch script, if desired. This is generally a series of commands, or a Bash script, that you want to run while the instance is provisioning. For those coming from the AWS space, this is the same as any launch script you might input when creating an EC2 instance.

You can also change or add an SSH key pair. Every instance requires an SSH key, and cannot be created without one. Select Change SSH key pair if you wish to create a new key pair; otherwise, keep the default key pair selected. Then select your instance plan.

Additionally, you need to select an Availability Zone. Currently, Lightsail is only available in the N. Virginia region, meaning your VPS must be located in an N. Virginia data center. Within this region, there are four zones to select from. Choose a zone; if you use other AWS services, this may influence which zone you choose.

Finally, name your instance, and select how many instances you wish to create. Create your instance.

The Instance Dashboard

Once you have an instance to work from, select it on the main (Resources) page. From here, you can further manage your VPS, from being able to Stop or Reboot your server to viewing more in-depth information on metrics and instance history.

Connect


Lightsail allows you to connect to your instance from your web browser. Select Connect using SSH to let a pop-up window act as your terminal; this automatically logs you in as the default user. Lightsail also provides users with an IP address and SSH username. You will, however, need your key pair when logging in from a regular terminal. Your key pair can be downloaded from the Account page of the Lightsail website. Ensure it has 400 permissions, then SSH in as normal:

ssh -i LightsailDefaultPrivateKey.cer ubuntu@123.45.67.89
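If your SSH client complains that the key’s permissions are too open, restrict them first; a quick sketch, assuming the key kept its default file name from the download:

chmod 400 LightsailDefaultPrivateKey.cer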

Metrics


AWS provides basic metrics for all Lightsail instances. This includes CPU utilization, incoming and outgoing network traffic, and failed status checks. You can view these metrics over timeframes ranging from one hour up to two weeks.

Networking


Networking provides you with information regarding your instance’s public and private IP addresses. You can add up to five static (unchanging) IP addresses to each instance, for free.

The networking tab also provides firewall control for your instance. From here, you can add, remove or otherwise alter firewall rules to limit access to your server.

Snapshots


Snapshots provide a way of taking an image of your system in its current state to use as a backup. Snapshots are billed monthly, based on the amount of storage (in GB) taken up by the snapshots themselves. You can have unlimited snapshots, but be cautious of the cost.

History


Your instance history contains information on services added or otherwise changed in relation to your created instance, as well as starts, stops and reboots.

Delete


The Delete option allows users to permanently destroy their instance. Note that there is no turning back from deleting an instance. If you might need the instance again, consider stopping it instead, then restarting it in the future when needed.

Additionally, your snapshots for that instance will not be deleted automatically, and you will remain responsible for their cost unless they are removed as well.

Advanced Lightsail & VPC Peering

All of the Lightsail instances within an account run within a “shadow” VPC that is not visible in the AWS Management Console. If the code that you are running on your Lightsail instances needs access to other AWS resources, you can set up VPC peering between the shadow VPC and another one in your account, and create the resources therein. Click on Account (top right), scroll down to Advanced features, and check VPC peering:


Lightsail now available worldwide

With 10 global regions and 29 availability zones, Lightsail is available where your website or app needs to be.


How to Resize root EBS Volume on AWS Linux Instance [CENTOS 6 AMI]

I created a new CentOS Linux instance and selected a 50 GB root volume during instance creation, but when the system came online only 8 GB of the disk was usable. I tried to resize the root filesystem with resize2fs, but it reported that there was nothing to do, because the root partition itself was still only 8 GB.


So I followed the steps below and was able to successfully resize the volume to the full size selected during instance creation.

Step 1. Take Backups

It is strongly recommended to take a full backup (AMI) of your instance before making any changes. Also create a snapshot of the root disk.

Step 2. Check Current Partitioning

Now check the current disk partitioning. In the output you can see that /dev/xvda is 53.7 GB in size, while the existing root partition still only covers the original 8 GB.

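The original screenshots are not reproduced here, but the check typically looks something like this (assuming the root device is /dev/xvda, as on this instance):

# Show the size of the mounted root filesystem
df -h /

# Show the size of the underlying disk and its partitions
fdisk -l /dev/xvda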

Step 3. Increase Size of Volume

Now start the disk repartitioning with fdisk and execute all of the commands carefully. A sketch of the whole interactive session is shown below, and each step is then explained.
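The device name and the 2048 start sector in this sketch come from this walk-through, so adjust them to match your own output:

fdisk /dev/xvda
# At the fdisk prompt, enter the following commands in order:
#   u   change the display units to sectors
#   p   print the current partition table and note the starting sector
#   d   delete partition 1
#   n   create a new primary partition 1 (first sector 2048, last sector = default, i.e. end of disk)
#   p   print the table again to confirm the partition now spans the whole disk
#   a   toggle the bootable flag on partition 1
#   w   write the changes to disk and exit
reboot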


First, change the display units to sectors with the u command.


Then print the partition table with the p command to check the disk details and note the starting sector of the existing partition.


Now delete the existing first partition with the d command.


Now create a new primary partition with the n command. For the first sector, enter 2048 (as shown in the earlier output), and for the last sector just press Enter to use the rest of the disk.


Print the partition table again. You will see that the new partition now occupies all of the disk space.


Now set the bootable flag on partition 1 with the a command.


Write the new partition table to disk with the w command and exit fdisk.


Finally, reboot the system after making all of the above changes.


Step 4. Verify Upgraded Disk

At this point your root partition has been resized. Just verify that the disk has been resized properly.
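Depending on the AMI, the filesystem may not grow automatically after the reboot; a sketch of the final step, assuming the root partition is /dev/xvda1:

# Grow the ext filesystem to fill the enlarged partition (harmless if it is already full size)
resize2fs /dev/xvda1

# Confirm the new size of the root filesystem
df -h /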



What are Linux containers?

Linux containers, in short, contain applications in a way that keeps them isolated from the host system they run on. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. And they are designed to make it easier to provide a consistent experience as developers and system administrators move code from development environments into production in a fast and replicable way.

In a way, containers behave like a virtual machine: to the outside world, they can look like their own complete system. But unlike a virtual machine, a container doesn’t need to replicate an entire operating system, only the individual components it needs in order to operate. This gives a significant performance boost and reduces the size of the application. Containers also start much faster because, unlike traditional virtualization, the process is essentially running natively on its host, just with an additional layer of protection around it.

And importantly, many of the technologies powering container technology are open source. This means that they have a wide community of contributors, helping to foster rapid development of a wide ecosystem of related projects fitting the needs of all sorts of different organizations, big and small.

Why is there such interest in Containers?

Undoubtedly, one of the biggest reasons for recent interest in container technology has been the Docker open source project, a command line tool that made creating and working with containers easy for developers and sysadmins alike, similar to the way Vagrant made it easier for developers to explore virtual machines.

Docker is a command-line tool for programmatically defining the contents of a Linux container in code, which can then be versioned, reproduced, shared, and modified easily just as if it were the source code to a program.
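A minimal sketch of that workflow, assuming Docker is already installed and using the public ubuntu:16.04 image:

# Pull the image if needed and start an interactive, throwaway container from it
docker run -it --rm ubuntu:16.04 /bin/bash

# From another terminal, list the containers currently running on the host
docker ps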

Containers have also sparked an interest in microservice architecture, a design pattern for developing applications in which complex applications are broken down into smaller, composable pieces which work together. Each component is developed separately, and the application is then simply the sum of its constituent components. Each piece, or service, can live inside of a container, and can be scaled independently of the rest of the application as the need arises.

Why do I orchestrate Containers?

Simply putting your applications into containers probably won’t create a phenomenal shift in the way your organization operates unless you also change how you deploy and manage those containers. One popular system for managing and organizing Linux containers is Kubernetes.

Kubernetes is an open source system for managing clusters of containers. To do this, it provides tools for deploying applications, scaling those applications as needed, and managing changes to existing containerized applications, and it helps you optimize the use of the underlying hardware beneath your containers. It is designed to be extensible, as well as fault-tolerant, by allowing application components to restart and move across systems as needed.
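As a tiny sketch of what that looks like in practice, assuming a working cluster and a hypothetical image name:

# Run a containerized application on the cluster
kubectl run my-app --image=example/my-app:1.0

# Scale it out as demand grows
kubectl scale deployment my-app --replicas=5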

IT automation tools like Ansible, and platform as a service projects like OpenShift, can add additional capabilities to make the management of containers easier.

How do I keep Containers secure?

Containers add security by isolating applications from other applications on a host operating system, but simply containerizing an application isn’t enough to keep it secure. Dan Walsh, a computer security expert known for his work on SELinux, explains some of the ways that developers are working to keep Docker and other container tools secure, as well as some of the security features currently within Docker and how they function.


Linux Server Maintenance Checklist

Server maintenance needs to be performed regularly to ensure that your server continues to run with minimal problems. While a lot of maintenance tasks are now automated within the Linux operating system, there are still things that need to be checked and monitored regularly to ensure that Linux is running optimally. Below are the steps that should be taken to maintain your servers.

Updates

New package updates have been installed within the last month.
Keeping your server up to date is one of the most important maintenance tasks. Before applying updates, confirm that you have a recent backup, or a snapshot if working with a virtual machine, so that you have the option of reverting if the updates cause unexpected problems. If possible, test updates on a test server before applying them to a production server; this allows you to confirm that the updates will not break your server and will be compatible with any other packages or software you may be running.

You can update all packages currently installed on your server by running ‘yum update’ or ‘apt-get upgrade’, depending on your distribution (throughout the rest of this post, commands will be aimed at Red Hat based operating systems). Ideally this should be done at least once per month so that you have the latest security patches, bug fixes, and improved functionality and performance. You can automate the update by using crontab to check for and apply updates whenever you like, rather than having to do it manually.
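For example, a hypothetical /etc/crontab entry that checks for and applies updates every Sunday at 3 AM (the log path is only an illustration):

# minute hour day month weekday user command
0 3 * * 0 root /usr/bin/yum -y update >> /var/log/yum-cron-update.log 2>&1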

Other applications have been updated in the last month.
Other web applications such as WordPress, Drupal, or Joomla need to be updated frequently, as these sorts of applications act as a gateway to your server: they are usually more accessible than direct server access and allow public access in from the Internet. Many web applications also have third-party plugins installed, which can be written by anyone and can contain security vulnerabilities throughout their unaudited code, so it is critical to update these applications very frequently. These content management systems are not managed by yum, so they will not be updated with a ‘yum update’ like the other installed packages. Updates are usually provided directly through the application itself; if you’re unsure, contact the application provider for further assistance.

Reboot the server if a kernel update was installed.
If you ran a ‘yum update’ as previously discussed, check whether the kernel was listed as an update. Alternatively, you can explicitly update your kernel with ‘yum update kernel’. The Linux kernel is the core of the Linux operating system and is updated regularly with security patches, bug fixes and added functionality. Once a new kernel has been installed, you must reboot your server to complete the process. Before you reboot, run ‘uname -r’, which prints the kernel version you are currently booted into. After the reboot, run ‘uname -r’ again and confirm that the newer version installed by yum is displayed. If the version number does not change, you may need to investigate which kernel is set to boot in /boot/grub/grub.conf; yum updates this file by default to boot the new kernel, so normally you shouldn’t have to change anything.

It is possible to avoid rebooting your server by using third party tools such as Ksplice from Oracle or KernelCare from CloudLinux, however by default on a standard operating system the reboot will be required to make use of the newer kernel.

Security

Server access reviewed within the last 6 months.
To increase security you should review who has access to your server. In an organization you may have staff who have left but still have accounts with access; these should be removed or disabled. There may also be accounts with sudo access that should not have it. This should be reviewed often to avoid a possible security breach, as granting root access is very powerful. You can check the /etc/sudoers file to see who has root access, and if you need to make changes, do so with the ‘visudo’ command. You can view recent logins with the ‘last’ command to see who has been logging into the server.

Firewall rules reviewed in the last 6-12 months.
Firewall rules should also be reviewed from time to time to ensure that you are only allowing required inbound and outbound traffic. A server’s requirements change over time, and as packages are installed and removed the ports it is listening on may change, potentially introducing vulnerabilities, so it is important to restrict this traffic correctly. This is typically done in Linux with iptables, or perhaps a hardware firewall that sits in front of the server. You can test for open ports by using nmap from another server, and view the current rules on the server by running ‘iptables -L -v’.
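A quick sketch of such a review, assuming a hypothetical host name and that nmap is installed on a second machine:

# From another machine, probe which TCP ports respond
nmap -sT server.example.com

# On the server itself, list the current firewall rules with packet and byte counters
iptables -L -v -n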

Confirm that users must change their passwords.
User passwords should be configured to expire after a period of time; common periods are anywhere between 30 and 90 days. This way a password is only valid for a set amount of time before the user is forced to change it. This increases security because if an account is compromised, the password will eventually change to something different, so an attacker’s access through that account will not be maintained indefinitely.

If your accounts use an LDAP directory like Active Directory, this can be set centrally there. Otherwise, in Linux you can set this on a per-account basis; however, this is not as scalable as using a directory, because you need to implement the changes on all of your servers individually, which takes time. This can be done using the chage command; ‘chage -l username’ will display the current settings for the account, for example:

[root@demo  ~]# chage -l root
Last password change                                    : Apr 07, 2014
Password expires                                        : never
Password inactive                                       : never
Account expires                                         : never
Minimum number of days between password change          : 0
Maximum number of days between password change          : 99999
Number of days of warning before password expires       : 7

All of these parameters can be set for every user on the system.
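For example, to force a hypothetical account’s password to expire every 90 days, with a 14-day warning before it does:

chage -M 90 -W 14 username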

Monitoring

Monitoring has been checked and confirmed to work correctly.
If your server is used in production, you most likely have it monitored for various services. It is important to check and confirm that this monitoring is working as intended and reporting correctly, so that you know you will be alerted if there are any issues. Incorrect firewall rules may disrupt monitoring, or your server may be performing different roles than when the monitoring was initially configured and may now need to be monitored for additional services.

Resource usage has been checked in the last month.
Resource usage is typically checked as a monitoring activity; however, it is good practice to observe long-term monitoring data to get an idea of any resource increases or trends that may indicate you need to upgrade a component of your server so that it can cope with the increased load. The details will depend on your monitoring solution, but you should be able to monitor CPU usage, free disk space, free physical memory and other variables against thresholds, and if these start to trigger more often you will know to investigate further. Typically in Linux you’ll be monitoring with SNMP/NRPE-based tools such as Nagios or Cacti.

Hardware errors have been checked in the last week.
Critical hardware problems will likely show up on your monitoring and be obvious, as the server may stop working correctly. You can potentially avoid this scenario by monitoring your system for hardware errors, which may give you a heads-up that a piece of hardware is having problems and should be replaced before it fails.

You can use mcelog, which processes machine checks (namely memory and CPU errors) on 64-bit Linux systems. It can be installed with ‘yum install mcelog’ and then started with ‘/etc/init.d/mcelogd start’. By default, mcelog checks hourly via crontab and reports any problems to /var/log/mcelog, so you will want to review this file regularly, every week or so.

Backups

Backups and restores have been tested and confirmed working.
It is important to back up your servers in case of data loss, and it is equally important to actually test that your backups work and that you can successfully complete a restore. Check that your backups are working on a daily or weekly basis; most backup software can notify you if a backup task fails, and any failure should be investigated.

It is a good idea to perform a test restore every few months or so to ensure that your backups are working as intended. This may sound time consuming, but it’s well worth it; there are countless stories of backups appearing to work until all the data is lost, and only then do people realize that they are not actually able to restore the data from backup.

You can back up locally to the same server, which is not recommended, or you can back up to an external location either on your network or out on the Internet; this could be your own server or a cloud storage solution like Amazon’s S3. An external backup is recommended, but keep in mind that if you are going to store sensitive data at a third-party location, you will probably need to investigate encrypting the data so that it is stored safely.

Other general tasks

Unused packages have been removed.
You can save disk space and reduce your attack surface by removing old and unused packages from your server. Having fewer packages on your server is a good way to harden and secure it, as there is less code available for an attacker to make use of. The command ‘yum list installed’ will display all packages currently installed on your server, and ‘yum remove package-name’ will remove a package; just be sure you know what the package is and that you actually want to remove it. Be careful when removing packages with yum: if you remove a package that another package depends on, the dependent package will also be removed, which can potentially remove a lot of things at once. After running the command, yum will confirm the list of packages to be removed, so double check it carefully before proceeding.

File system check performed in the last 180 days.
By default, after 180 days or 20 mounts (whichever comes first), your server’s filesystems will be checked with e2fsck on the next boot; this should be run occasionally to ensure disk integrity and repair any problems. You can force a disk check by running ‘touch /forcefsck’ and then rebooting the server (the file is removed on the next boot), or with the ‘shutdown -rF now’ command, which forces a disk check on the next boot and performs the reboot immediately. Alternatively, you can use -f instead of -F to skip the disk check; this is known as a fast boot and can also be done with ‘touch /fastboot’. This can be useful, for example, if you have just performed a kernel update, need to reboot, and want the server back up as soon as possible rather than waiting for the check to complete.

The mount count can be modified using the tune2fs command. The defaults are pretty good; however, ‘tune2fs -c 50 /dev/sda1’ will increase the maximum mount count to 50, so a filesystem check will happen after the filesystem has been mounted 50 times. Similarly, ‘tune2fs -i 210 /dev/sda1’ will change the interval so that the filesystem is only checked after 210 days rather than 180.

Logs and statistics are being monitored daily or weekly.
If you look through /var/log you will notice a lot of different log files that are continually written to with different information; some of it is useful, but much of it is not relevant, leaving a large amount of information to go through. Logwatch can be used to monitor your server’s logs and email the administrator a summary on a daily or weekly basis (you can control it via crontab). Logwatch can also send a summary of other useful server information, such as the disk space in use on all partitions, so it’s a good way to get up-to-date notifications from your servers. You can install the package with ‘yum install logwatch’.

Regular scans are being run on a weekly/monthly basis.
To stay secure, it is important to scan your server for malicious content. ClamAV is an open source antivirus engine that detects trojans, malware and viruses and works well on Linux. You can set a cron job to run a weekly scan at 3 AM, for instance, and then email you a report outlining the results. Depending on how much content you have, the scan may take a while; it’s recommended to run an intensive scan once per week at a low-usage time, such as on the weekend or overnight. Check the crontab and the /var/log/cron log file to ensure that the scans are running as intended; if you have configured an email summary, also confirm that you are receiving those alerts.
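As an illustration, a hypothetical /etc/crontab entry for a weekly scan of /home at 3 AM on Saturdays (the paths are examples only):

# minute hour day month weekday user command
0 3 * * 6 root /usr/bin/clamscan -ri /home --log=/var/log/clamav/weekly-scan.log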


IT Services and Managed Service Providers (MSPs)

In-house tools can make you more efficient at monitoring, patching, providing remote support, and service delivery. But you also need to ensure regular scheduled maintenance of every client system. That’s where a Managed Service Provider (MSP) comes in.

What’s a Managed Service Provider (MSP)?

A managed service provider (MSP) caters to enterprises, residences, or other service providers. It delivers network, application, system and e-management services across a network, using a “pay as you go” pricing model.

A “pure play” MSP focuses on management services. The MSP market features other players – including application service providers (ASPs), Web hosting companies, and network service providers (NSPs) – who supplement their traditional offerings with management services.

You Probably Need an MSP if….

Your business has a network meeting any of the following criteria:

  • Connects multiple offices, stores, or other sites
  • Is growing beyond the capacity of current access lines
  • Must provide secure connectivity to mobile and remote employees
  • Could benefit from cost savings by integrating voice and data traffic
  • Anticipates more traffic from video and other high-bandwidth applications
  • Is becoming harder to manage and ensure performance and security, especially given limited staff and budget

What Can You Gain?

1. Future proof services, using top-line technology

IT services and equipment from an MSP are constantly upgraded, with no additional cost or financial risk to yourself. There’s little chance that your Managed IT Services will become obsolete.

2. Low capital outlay and predictable monthly costs

Typically, there’s a fixed monthly payment plan. A tight service level agreement (SLA) will ensure no unexpected upgrade charges or changes in standard charges.

3. Flexible services

A pay-as-you-go scheme allows for quick growth when necessary, or cost savings when you need to consolidate.

4. Converged services

A single “converged” connection can provide multiple Managed IT Services, resulting in cost-savings on infrastructure.

5. Resilient and secure infrastructure

A Managed Service Provider’s data centres and managed network infrastructure are designed to run under 24/7/365 management. Typically, their security procedures have to meet government approval.

6. Access to specialist skills

The MSP will have staff on hand capable of addressing specific problems. You may only need this skill once, and save the expense of training your staff for skills they’ll never use.

7. Centralized applications and servers

Access to centralized data centers within the network can also extend access to virtual services, as well as storage and backup infrastructure.

8. Increased Service Levels

SLAs can ensure continuity of service. A managed service company will also offer 24/7/365 support.

9. Disaster recovery and business continuity

MSPs have designed networks and data centers for availability, resilience and redundancy, to maintain business continuity. Your data will be safe and your voice services will continue to be delivered, even if your main office goes down.

10. Energy savings

By running your applications on a virtual platform and centralizing your critical business systems within data centers, you’ll lower your carbon footprint and reduce costs.

Functions of an MSP

Under Managed Services, the IT provider assumes responsibility for a client’s network, and provides regular preventive maintenance of the client’s systems. Technical support is delivered under a service level agreement (SLA) that provides specified rates, and guarantees the consultant a specific minimum income.

The core tools of Managed Services are:

  1. Patch Management
  2. Remote Access provision
  3. Monitoring tools
  4. Some level of Automated Response

Most MSPs also use a professional services automation (PSA) tool such as Autotask or ConnectWise. A PSA provides a Ticketing System, to keep track of service requests and their responses. It may also provide a way to manage Service Agreements, and keep track of technicians’ labor.

In essence, though, it boils down to this: If a system crashes, and the Managed Service Provider is monitoring the network, that MSP has total responsibility for the state of the backup and the health of the server.

As their client, you can hold the MSP totally responsible, up to and including court action, for failing to provide the service they’re contracted to provide; this should be spelled out in the SLA.

How to Choose an MSP

Here are five key characteristics to consider, when selecting a managed service provider:

1. Comprehensive Technology Suite

The MSP should have a broad set of solutions available to meet not only your current needs, but to scale and grow as your business develops new products and services.

A well-equipped MSP will offer support for virtual infrastructures, storage, co-location, end user computing, application management capabilities, and so on. The MSP should be able to accommodate a range of applications and systems, under a service level agreement that covers everything from the application layer through the rest of the technology stack.

2. Customization and Best Practices

Look for a service provider with the expertise to modify each architecture based on individual business goals.

Their best practices should ensure seamless migration for customers, by taking an existing physical machine infrastructure and virtualizing it. Comprehensive support should be available throughout.

3. Customer-Centric Mindset

The MSP should provide a dedicated account manager who serves as the single point of contact and escalation for the customer. Support should be readily available, along with access to other service channels, as required.

The most effective MSPs will be available to address problems around the clock, and have effective troubleshooting capabilities.

4. Security

For customers working in regulated environments such as healthcare and financial services, security and compliance issues are paramount. The MSP should have a robust, tested infrastructure and operational fabric that operates across several geographical zones. This cuts down their susceptibility to natural disasters and service interruptions.

The provider should continuously monitor threats and ensure that each system is designed with redundancy at every level.

5. The Proper Scale

If a small business selects one of the largest service providers, they may not receive a high level of customer-centric, flexible and customized support. Conversely, if a business selects an MSP that’s too small, it may lack the scale and expertise to offer the necessary support.

Having direct access to a senior member of the MSP’s management team by direct email or cell phone can be a good measure of the degree of personalized attention a customer is likely to receive.

Understanding the different types of service providers is the first step in making the right decision for your organization.

OK, folks that’s it for this post. Have a nice day guys…… Stay tuned…..!!!!!

Don’t forget to like & share this post on social networks!!! I will keep on updating this blog. Please do follow!!!

NGINX VERSUS APACHE

Hi all, in this post I will be discussing two web server packages. One is Apache, which has already proven its ability to do many different things in a single package with the help of modules, and which powers millions of websites on the internet. The other is the relatively new web server package called Nginx, written by the Russian programmer Igor Sysoev.

Many people in the industry are aware of the “speed” for which Nginx is famous. But there are some other important differences between the working models of Apache and Nginx, and we will be discussing those differences in detail.

APACHE :

Let’s discuss two working models used by the Apache web server; we will get to Nginx later. Most people who work with Apache will know about these two models, through which Apache serves its requests. They are listed below.

  1. Apache MPM Prefork
  2. Apache MPM Worker

Note: there are many different MPM modules available for different platforms and functionalities, but we will be discussing only the above two here.
Let’s have a look at the main difference between MPM Prefork and MPM Worker. MPM stands for “Multi Processing Module”.

MPM Prefork :

Most of the functionality in Apache comes from modules; even MPM Prefork comes as a module and can be enabled or disabled. The prefork model of Apache is non-threaded and is a good model in that it keeps every connection isolated from the others.
So if one connection is having some issue, the others are not at all affected. By default, if no MPM module is specified, Apache uses MPM Prefork. But this model is very resource intensive.

Why is the prefork model resource intensive?

Because in this model a single parent process creates many child processes, which wait for requests and serve them as they arrive. That means each and every request is served by a separate process; in other words, it is “process per request”. Apache also maintains a number of idle processes before requests arrive, so that incoming requests can be served quickly without waiting for a new process to be forked.

But each and every process consumes system resources such as RAM and CPU, and a similar amount of RAM is used by every single process.

[Figure: Apache prefork model]

If you get a large number of requests at one time, Apache will spawn a large number of child processes, which results in heavy resource utilization, as each process uses a certain amount of system memory and CPU.
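How many of these child processes exist is controlled from the prefork section of /etc/httpd/conf/httpd.conf. The snippet below is only an illustrative sketch with typical stock values, not a recommendation; tune the numbers to the memory available on your own server:

<IfModule prefork.c>
    # child processes created when Apache starts
    StartServers        8
    # minimum and maximum number of idle children kept waiting for new requests
    MinSpareServers     5
    MaxSpareServers    20
    # hard cap on simultaneous child processes, i.e. simultaneous connections
    MaxClients        256
    # recycle a child after this many requests to limit memory growth
    MaxRequestsPerChild 4000
</IfModule>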

MPM Worker :

This model of Apache can serve a large number of requests with fewer system resources than the prefork model, because here a limited number of processes serve many requests.
This is the multi-threaded architecture of Apache: it uses threads rather than whole processes to serve requests. So what is a thread?
In operating systems, a thread is a lightweight unit of execution inside a process which does some work and exits; a thread is sometimes described as a process inside a process.

[Figure: Apache worker model]

In this model there is also a single parent process which spawns some child processes, but there is no “process per request”; instead it is “thread per request”. Each child process contains a certain number of threads: some are server threads handling requests and some are idle threads. The idle threads wait for new requests, so no time is wasted creating threads when requests arrive.
There is a directive in the Apache config file /etc/httpd/conf/httpd.conf called “StartServers” which says how many child processes will be created when Apache starts.
Each child process then handles requests with a fixed number of threads, specified by the “ThreadsPerChild” directive in the same config file.
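Putting those directives together, a worker section in httpd.conf might look like the sketch below (the numbers are only illustrative defaults, not a tuned configuration; the total number of requests served at once is roughly MaxClients, handled by MaxClients / ThreadsPerChild child processes):

<IfModule worker.c>
    # child processes created when Apache starts
    StartServers         4
    # idle threads kept available across all children, so no thread is created per request
    MinSpareThreads     25
    MaxSpareThreads     75
    # number of worker threads inside each child process
    ThreadsPerChild     25
    # upper limit on simultaneously served requests (total threads)
    MaxClients         300
    # 0 means child processes are never recycled
    MaxRequestsPerChild  0
</IfModule>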

Note: there have been issues reported with the PHP module when running under the Apache MPM worker model, largely because mod_php is not considered thread safe; this is one reason prefork is still commonly used with PHP.

Now lets discuss Nginx.

NGINX :

Nginx was written to address the c10k problem that the traditional Apache models struggle with.

C10k: the name given to the problem of optimizing web server software to handle a very large number of requests at the same time, in the range of 10,000 concurrent connections, hence the name.
Nginx is known for its speed in serving static pages, much faster than Apache, while keeping machine resource usage very low.
Fundamentally, Apache and Nginx differ a lot.
Apache works in a multi-process / multi-threaded architecture, while Nginx uses an event-driven architecture in which each worker process runs a single thread (I will come back to “event driven” later). The main difference this event-driven architecture makes is that a very small number of Nginx worker processes can serve a very large number of requests.
Nginx is also sometimes deployed as a front-end server, serving static content to clients directly and faster, with Apache sitting behind it; a minimal sketch of that setup is shown after this paragraph.
Each worker process handles requests with the help of the event-driven model. Nginx does this using efficient kernel notification interfaces, namely epoll on Linux (with select/poll as older fallbacks). Even when run with its threaded model, Apache uses considerably more system resources than Nginx.
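A hedged sketch of that front-end deployment, assuming Apache has been moved to port 8080 on the same machine and static files live under /var/www/static (both assumptions are for illustration only):

server {
    listen 80;
    server_name example.com;

    # serve static files straight from disk, never touching Apache
    location /static/ {
        root /var/www;
    }

    # pass everything else to the Apache backend
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

This server block would go inside the http section of nginx.conf (or a file included from it), and Apache’s Listen directive would be changed to 8080 so the two do not fight over port 80.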

Why does Nginx run more efficiently than Apache?

In Apache, when a request is being served, either a thread or a process is created to serve it. If that request needs some data from the database, files from disk, etc., the process waits for it.
So some processes in Apache just sit and wait for a certain task to complete, eating system resources while doing nothing useful.
Suppose a client with a slow internet connection connects to a web server running Apache: the server retrieves the data from disk to serve the client, but even after the data has been read, that process stays tied up until the slow client has finished receiving it, wasting that process’s resources the whole time.
Nginx avoids spawning a process or thread per request. Within each worker, all requests are handled by a single thread, and that single thread handles everything with the help of something called an event loop. The thread only wakes up when a new connection arrives or something is actually ready to be processed, so resources are not wasted on waiting.

Step 1: A request arrives.
Step 2: The request triggers events inside the worker process.
Step 3: The process handles these events and returns the output, while simultaneously handling events for other requests.

Nginx also supports major functionality which Apache supports, like the following:

  • SSL/TLS
  • Virtual Hosts
  • Reverse Proxy
  • Load Balancer
  • Compression
  • URL rewrite

These are just some of them.

OK, folks that’s it for this post. Have a nice day guys…… Stay tuned…..!!!!!

Don’t forget to like & share this post on social networks!!! I will keep on updating this blog. Please do follow!!!