wp-config.php File – An In-Depth View on How to Configure WordPress

One of the most important files of a WordPress installation is the configuration file. It resides in the root directory and contains constant definitions and PHP instructions that make WordPress work the way you want.
The wp-config.php file stores data like database connection details, table prefix, paths to specific directories and a lot of settings related to specific features we’re going to dive into in this post.

The basic wp-config.php file

When you first install WordPress, you’re asked to input required information like database details and table prefix. Sometimes your host will set up WordPress for you and you won’t need to run the set-up manually. But when you run the famous 5-minute install yourself, you’ll be asked to input some of the most relevant data stored in wp-config.php.

When you run the set-up, you will be required to input data that will be stored into the wp-config.php file


Here is a basic wp-config.php file:

// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'database_name_here');

/** MySQL database username */
define('DB_USER', 'username_here');

/** MySQL database password */
define('DB_PASSWORD', 'password_here');

/** MySQL hostname */
define('DB_HOST', 'localhost');

/** Database Charset to use in creating database tables. */
define('DB_CHARSET', 'utf8');

/** The Database Collate type. Don't change this if in doubt. */
define('DB_COLLATE', '');

define('AUTH_KEY',	'put your unique phrase here');
define('SECURE_AUTH_KEY',	'put your unique phrase here');
define('LOGGED_IN_KEY',	'put your unique phrase here');
define('NONCE_KEY',	'put your unique phrase here');
define('AUTH_SALT',	'put your unique phrase here');
define('SECURE_AUTH_SALT',	'put your unique phrase here');
define('LOGGED_IN_SALT',	'put your unique phrase here');
define('NONCE_SALT',	'put your unique phrase here');

$table_prefix = 'wp_';

/* That's all, stop editing! Happy blogging. */

Usually this file is generated automatically when you run the set-up, but occasionally WordPress doesn’t have the privileges to write to the installation folder. In that case, create an empty wp-config.php file, copy the content of wp-config-sample.php into it, and set the proper values for all the defined constants. When you’re done, upload the file to the root folder and run WordPress.

Note: constant definitions and PHP instructions come in a specific order that should never be changed, and nothing should ever be added below the following comment line:

/* That's all, stop editing! Happy blogging. */

First come the definitions of the database constants, which you should have received from your host:


After the database details come eight security keys and salts that make the site more secure against hackers. WordPress generates them automatically during installation, but you can change them at any time to any arbitrary string. For better security, consider using the online generator at api.wordpress.org/secret-key/1.1/salt/.

The $table_prefix variable stores the prefix of all WordPress tables. Unfortunately, everyone knows its default value, which makes the database tables an easier target for attackers. This can be easily fixed by setting a custom value for $table_prefix when running the set-up.
To change the table prefix on a working website, you have to run several queries against the database and then manually edit the wp-config.php file. If you don’t have access to the database, or don’t have the knowledge required to build custom queries, you can install a plugin like Change Table Prefix, which will rename database tables and field names and update the config file with no risk.

Note: it’s good practice to back up the WordPress files and database even if you change the table prefix with a plugin.

So far, the analysis has been limited to the basic configuration, but many more constants are at our disposal to enable features and to customize and secure the installation.

Beyond the basic configuration: editing the file system

The WordPress file structure is well known to users and hackers alike. For this reason, you may consider changing the built-in structure by moving specific folders to arbitrary locations and setting the corresponding URLs and paths in the wp-config file.
First, we can move the content folder by defining two constants. The first one sets the full directory path:

define( 'WP_CONTENT_DIR', dirname(__FILE__) . '/site/wp-content' );

The second sets the new directory URL:

define( 'WP_CONTENT_URL', 'http://example.com/site/wp-content' );

We can move just the plugin folder by defining the following constants:

define( 'WP_PLUGIN_DIR', dirname(__FILE__) . '/wp-content/mydir/plugins' );
define( 'WP_PLUGIN_URL', 'http://example.com/wp-content/mydir/plugins' );

The same way, we can move the uploads folder, by setting the new directory path:

define( 'UPLOADS', 'wp-content/mydir/uploads' );

Note: the UPLOADS path is relative to ABSPATH and should not contain a leading slash.

When done, arrange the folders and reload WordPress.

The image shows the built-in file structure compared to a customized structure

It’s not possible to move the /wp-content/themes folder from the wp-config file, but we can register an additional theme directory with register_theme_directory() in a plugin or a theme’s functions file.

Features for developers: debug mode and saving queries

If you are a developer, you can force WordPress to show errors and warnings to help with theme and plugin debugging. To enable debug mode, just set WP_DEBUG to true, as shown below:

define( 'WP_DEBUG', true );

WP_DEBUG is set to false by default. To disable debug mode, you can simply remove the definition or set the constant’s value to false.
When you’re working on a live site, you should disable debug mode: errors and warnings should never be shown to site visitors, because they can provide valuable information to hackers. But what if you have to debug anyway?
In that situation, you can force WordPress to log errors and warnings to the debug.log file in the /wp-content folder. To enable this feature, copy and paste the following code into your wp-config.php file:

define( 'WP_DEBUG', true );
define( 'WP_DEBUG_LOG', true );
define( 'WP_DEBUG_DISPLAY', false );
@ini_set( 'display_errors', 0 );

To make this feature work we first need to enable debug mode. Then, setting WP_DEBUG_LOG to true forces WordPress to store messages in the debug.log file, while setting WP_DEBUG_DISPLAY to false hides them from the screen. Finally, we set the PHP variable display_errors to 0 so that error messages won’t be printed to the screen. wp-config.php is never loaded from the cache, which makes it a good place to override php.ini settings.

Note: this is a great feature you can take advantage of to log messages that WordPress would not print to the screen. For example, when the publish_post action is triggered, WordPress loads a script that saves the data and then redirects the user to the post editing page. In that situation you can log messages, but not print them to the screen.

Another debugging constant determines the versions of scripts and styles to be loaded. Set SCRIPT_DEBUG to true if you want to load uncompressed versions:

define( 'SCRIPT_DEBUG', true );

If your theme or plugin shows data retrieved from the database, you may want to store query details for later review. The SAVEQUERIES constant, when set to true in wp-config.php, forces WordPress to store query information in the $wpdb->queries array. These details can then be printed by adding the following code to the footer template:

if ( current_user_can( 'administrator' ) ) {
	global $wpdb;
	echo '<pre>';
	print_r( $wpdb->queries );
	echo '</pre>';
}

Content related settings

As your website grows, you may want to control how often edits are saved. By default, WordPress auto-saves the post you’re editing every 60 seconds. We can change this value by setting a custom interval (in seconds) in wp-config as follows:

define( 'AUTOSAVE_INTERVAL', 160 );

Of course, you can decrease the auto-save interval as well.
Each time we save our edits, WordPress adds a row to the posts table so that we can restore previous revisions of posts and pages. This is a useful functionality that can turn into a problem when the site grows big. Fortunately, we can limit the maximum number of post revisions to be stored, or disable the functionality altogether.
To disable post revisions, define the following constant:

define( 'WP_POST_REVISIONS', false );

To limit the maximum number of revisions instead, add the following line:

define( 'WP_POST_REVISIONS', 10 );

By default, WordPress stores trashed posts, pages, attachments and comments for 30 days, then deletes them permanently. We can change this value with the following constant:

define( 'EMPTY_TRASH_DAYS', 10 );

We can even disable the trash by setting its value to 0, but consider that WordPress will no longer allow you to restore content.

Allowed memory size

Occasionally you may receive a message like the following:

Fatal error: Allowed memory size of xxx bytes exhausted …

The maximum memory size depends on the server configuration. If you don’t have access to the php.ini file, you can increase the memory limit just for WordPress by setting the WP_MEMORY_LIMIT constant in the wp-config file. By default, WordPress tries to allocate 40MB to PHP for single sites and 64MB for multisite installations. Of course, if the memory already allocated to PHP is greater than 40MB (or 64MB), WordPress will adopt the higher value.
That being said, you can set a custom value with the following line:

define( 'WP_MEMORY_LIMIT', '128M' );

If needed, you can set a maximum memory limit, as well, with the following statement:

define( 'WP_MAX_MEMORY_LIMIT', '256M' );

Automatic updates

Starting from version 3.7, WordPress supports automatic updates for security releases. This is an important feature that allows site admins to keep their website secure all the time.
You can disable all automatic updates by defining the following constant:

define( 'AUTOMATIC_UPDATER_DISABLED', true );

Disabling security updates is probably not a good idea, but it’s your choice.
By default, automatic updates do not cover major releases, but you can enable (or disable) all core updates by defining WP_AUTO_UPDATE_CORE as follows:

# Disables all core updates:
define( 'WP_AUTO_UPDATE_CORE', false );

# Enables all core updates, including minor and major:
define( 'WP_AUTO_UPDATE_CORE', true );

The default value is 'minor':

define( 'WP_AUTO_UPDATE_CORE', 'minor' );

An additional constant disables auto-updates (and any update or change to any file). If you set DISALLOW_FILE_MODS to true, all file edits will be disabled, including theme and plugin installations and updates. For this reason, its usage is not recommended.

Security settings

We can use the wp-config file to increase site security. In addition to the changes to the file structure we’ve looked at above, we can lock down some features that could open unnecessary vulnerabilities. First of all, we can disable the file editor provided in the admin panel. The following constant will hide the Appearance Editor screen:

define( 'DISALLOW_FILE_EDIT', true );

Note: consider that some plugins may not work properly if this constant is set to true.


Another security feature is Administration over SSL. If you’ve purchased an SSL certificate and it’s properly configured, you can force WordPress to transfer data over SSL for every login and admin session. Use the following constant:

define( 'FORCE_SSL_ADMIN', true );

Check the Codex if you need more information about Administration over SSL.

Two other constants allow you to block external requests and to whitelist specific hosts:

define( 'WP_HTTP_BLOCK_EXTERNAL', true );
define( 'WP_ACCESSIBLE_HOSTS', 'example.com,*.anotherexample.com' );

In this example, we first block all requests to external hosts, then list the allowed hosts, separated by commas (wildcards are allowed).

Other advanced settings

Setting WP_CACHE to true includes the wp-content/advanced-cache.php script. This constant has an effect only if you install a persistent caching plugin.

CUSTOM_USER_TABLE and CUSTOM_USER_META_TABLE are used to set custom user tables instead of the default wp_users and wp_usermeta tables. They enable a useful feature that allows users to access several websites with just one account. For this to work, all installations must share the same database.

Starting from version 2.9, WordPress supports automatic database optimizing. With this feature, setting WP_ALLOW_REPAIR to true lets WordPress automatically repair a corrupted database. Remember to remove the definition once the repair is done, because the repair page is reachable without authentication while the constant is set.

WordPress creates a new set of images each time you edit an image. If you restore the original image, all the generated sets remain on the server. You can override this behavior by setting IMAGE_EDIT_OVERWRITE to true, so that when you restore the original image, all edits are deleted from the server.

Locking down wp-config.php

Now we know why wp-config.php is one of the most important WordPress files. So why don’t we hide it from hackers? First of all, we can move wp-config.php one level above the WordPress root folder (just one level). However, this technique is a bit controversial, so I would suggest adopting other solutions to protect the file. If your website is running on the Apache web server, you can add the following directives to the .htaccess file:

<files wp-config.php>
order allow,deny
deny from all
</files>

If the website is running on Nginx, you can add the following directive to the configuration file:

location ~* wp-config.php { deny all; }

Note: these instructions should be added only after the set-up is complete.


In this post, I’ve listed many of the WordPress constants we can define in the wp-config file. Some of these constants are in common usage and their functions are easy to understand; others enable advanced features that require a deep knowledge of WordPress and site administration.
I’ve covered the most common features, leaving aside some of the more advanced ones.

OK, folks that’s it for this post. Have a nice day guys…… Stay tuned…..!!!!!

Don’t forget to like & share this post on social networks!!! I will keep on updating this blog. Please do follow!!!

AWS Announces: Lightsail, a Simple VPS Solution

With the release of AWS Lightsail, Amazon Web Services steps into the market of easy-to-use and quick-to-provision VPS servers. Currently offering both Ubuntu 16.04 and Amazon Linux AMI images, as well as Bitnami-powered application stacks, Lightsail allows users to spin up a server without any of the additional (and sometimes excess) services normally included in AWS.

Instance Tiers and Costs

As of launch, AWS offers five instance plans:

Plan      vCPUs   Data Transfer
$5/mo     1       1TB
$10/mo    1       2TB
$20/mo    1       3TB
$40/mo    2       4TB
$80/mo    2       5TB

Currently, the $5-a-month plan is offered free for up to one month, or 750 hours. Additional costs include snapshot (backup) storage and data transfer overage charges; Lightsail technical support starts at $30/month.

Getting Started

You can log on to Amazon Lightsail using your regular AWS account at https://amazonlightsail.com. From here, it’s as easy as selecting Create Instance to get started.


On the Create an instance screen, you are prompted to select either an Apps + OS image, powered by Bitnami, or a simple Base OS instance. The deployment process is the same regardless of whether you are launching a base OS image or one containing an app.

From here, you can add a launch script if desired. This is generally a series of commands, or a Bash script, that you want to run while the instance is provisioning. For those coming from an AWS background, these are the same as the launch scripts you can supply when creating an EC2 instance.

You can also change or add an SSH key pair. Every instance requires an SSH key, and cannot be created without one. Select Change SSH key pair if you wish to create a new key pair; otherwise, keep the default key pair selected. Then select your instance plan.

Additionally, you need to select an Availability Zone. Currently, Lightsail is only available in the N. Virginia region, meaning your VPS must be located in an N. Virginia data center. Within this region there are four zones to choose from; if you use other AWS services, that may influence which zone you pick.

Finally, name your instance, and select how many instances you wish to create. Create your instance.

The Instance Dashboard

Once you have an instance to work from, select it on the main (Resources) page. From here you can further manage your VPS, from being able to Stop or Reboot your server, to more in-depth information regarding metrics and instance history.



Lightsail allows you to connect to your instance from your web browser. Select Connect using SSH to open a pop-up window that acts as your terminal; this automatically logs you in as the default user. Lightsail also provides users with an IP address and SSH username. You will, however, need your key pair when logging in from a regular terminal. The key pair can be downloaded from the Account page of the Lightsail website. Ensure it has 400 permissions, then SSH in as normal:

ssh -i LightsailDefaultPrivateKey.cer ubuntu@
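In full, assuming the default Ubuntu image (the IP address below is a placeholder for your instance’s public IP, shown on the instance page):

```shell
chmod 400 LightsailDefaultPrivateKey.cer                    # key must be readable only by you
ssh -i LightsailDefaultPrivateKey.cer ubuntu@203.0.113.10   # placeholder public IP
```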



AWS provides basic metrics for all Lightsail instances. This includes CPU utilization, incoming and outgoing network traffic, and failed status checks. You can view these metrics between a timeline of one hour up to two weeks.



Networking provides you with information regarding your instance’s public and private IP addresses. You can also allocate static (unchanging) IP addresses for free, up to five per account, and attach one to each instance.

The networking tab also provides firewall control for your instance. From here, you can add, remove or otherwise alter firewall rules to limit access to your server.



Snapshots provide a way of taking an image of your system in its current state to use as a backup. Snapshots are billed monthly, based on the amount of GB of storage the snapshots themselves take up. You can have unlimited snapshots, but be mindful of the cost.



Your instance history contains information on services added or otherwise changed in relation to your created instance, as well as starts, stops and reboots.



Delete allows users to permanently destroy their instance. Note that there is no turning back from deleting an instance. If you will need the instance again, consider just stopping it and restarting it later when needed.

Additionally, the snapshots for that instance will not be deleted, and you will be responsible for their cost unless you remove them as well.

Advanced Lightsail & VPC Peering

All of the Lightsail instances within an account run within a “shadow” VPC that is not visible in the AWS Management Console. If the code that you are running on your Lightsail instances needs access to other AWS resources, you can set up VPC peering between the shadow VPC and another one in your account, and create the resources therein. Click on Account (top right), scroll down to Advanced features, and check VPC peering:


Lightsail now available worldwide

With 10 global regions and 29 availability zones, Lightsail is available wherever your website or app needs to be.


How to Resize root EBS Volume on AWS Linux Instance [CENTOS 6 AMI]

I created a new CentOS Linux instance and selected 50 GB of root volume during instance creation, but when the system came online, only 8 GB of the disk was usable. When I tried to resize the root disk using resize2fs, it reported that there was nothing to do, because the underlying partition was still its original size.

So I followed the steps below and was able to resize the volume to the full size selected during instance creation.

Step 1. Take Backups

We strongly recommend taking a full backup (AMI) of your instance before making any changes. Also create a snapshot of the root disk.

Step 2. Check Current Partitioning

Now check the disk partitioning using the following commands. You can see that /dev/xvda is 53.7 GB in size.
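The check can be done like this (the device name matches this example; fdisk requires root):

```shell
df -h                  # the root filesystem still reports only 8 GB usable
fdisk -l /dev/xvda     # ...while the disk itself is 53.7 GB
```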



Step 3. Increase Size of Volume

Now start the disk repartitioning using the following set of commands. Execute all the commands carefully.
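As a sketch, the whole fdisk session described in the steps below looks like this (the device name and the 2048 start sector come from this example; the single letters are typed at the fdisk prompt, and the existing data is preserved because the new partition starts at the same sector as the old one):

```shell
fdisk /dev/xvda
# At the fdisk prompt, enter in order:
#   u            change display units to sectors
#   p            print the partition table; note the start sector (2048 here)
#   d            delete partition 1 (the data on disk is not touched)
#   n, p, 1      create a new primary partition, number 1
#   2048         first sector: the same value the old partition started at
#   <Enter>      last sector: accept the default to use the whole disk
#   a, 1         set the bootable flag on partition 1
#   w            write the new table to disk and exit
```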


Now change the display units to sectors using the u switch.


Now print the partition table to check the disk details.


Now delete the first partition using the following command.


Now create a new partition using the following commands. For the first sector, enter 2048 (as shown in the command output above), and for the last sector just press Enter to use the whole disk.


Print the partition table again. You will see that the new partition now occupies all the disk space.


Now set the bootable flag on partition 1.


Write the partition table to disk permanently and exit.


Now reboot your system after making all the above changes.


Step 4. Verify Upgraded Disk

At this point your root volume has been resized successfully. Just verify that your disk has been resized properly.
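A quick check with df should now report the full size for the root filesystem:

```shell
df -h /        # the Size column should now show roughly 50G instead of 8G
```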



What are Linux containers?

Linux containers, in short, contain applications in a way that keep them isolated from the host system that they run on. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. And they are designed to make it easier to provide a consistent experience as developers and system administrators move code from development environments into production in a fast and replicable way.

In a way, containers behave like virtual machines. To the outside world, they can look like their own complete system. But unlike a virtual machine, a container doesn’t need to replicate an entire operating system: it carries only the individual components it needs in order to operate. This gives a significant performance boost and reduces the size of the application. Containers also start much faster because, unlike with traditional virtualization, the process essentially runs natively on its host, just with an additional layer of protection around it.

And importantly, many of the technologies powering container technology are open source. This means that they have a wide community of contributors, helping to foster rapid development of a wide ecosystem of related projects fitting the needs of all sorts of different organizations, big and small.

Why is there such interest in Containers?

Undoubtedly, one of the biggest reasons for the recent interest in container technology is the open source Docker project, a command line tool that made creating and working with containers easy for developers and sysadmins alike, much as Vagrant made it easier for developers to explore virtual machines.

Docker is a command-line tool for programmatically defining the contents of a Linux container in code, which can then be versioned, reproduced, shared, and modified easily just as if it were the source code to a program.
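As a minimal sketch of this idea (the image name, base image, and package below are arbitrary examples, not taken from any particular project), a container’s contents are declared in a Dockerfile, which is then built and run with the docker CLI:

```shell
# Describe the image in code (a Dockerfile), which can be versioned like source:
cat > Dockerfile <<'EOF'
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y nginx
CMD ["nginx", "-g", "daemon off;"]
EOF

docker build -t example/nginx .          # build the image from the Dockerfile
docker run -d -p 8080:80 example/nginx   # run it as an isolated container
```

The Dockerfile itself can be shared, diffed, and reproduced exactly, which is what makes the workflow feel like working with source code.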

Containers have also sparked an interest in microservice architecture, a design pattern for developing applications in which complex applications are broken down into smaller, composable pieces which work together. Each component is developed separately, and the application is then simply the sum of its constituent components. Each piece, or service, can live inside of a container, and can be scaled independently of the rest of the application as the need arises.

Why should I orchestrate containers?

Simply putting your applications into containers probably won’t create a phenomenal shift in the way your organization operates unless you also change how you deploy and manage those containers. One popular system for managing and organizing Linux containers is Kubernetes.

Kubernetes is an open source system for managing clusters of containers. To do this, it provides tools for deploying applications, scaling those applications as needed, managing changes to existing containerized applications, and optimizing the use of the underlying hardware beneath your containers. It is designed to be extensible, as well as fault-tolerant, allowing application components to restart and move across systems as needed.

IT automation tools like Ansible, and platform as a service projects like OpenShift, can add additional capabilities to make the management of containers easier.

How do I keep Containers secure?

Containers add security by isolating applications from other applications on a host operating system, but simply containerizing an application isn’t enough to keep it secure. Dan Walsh, a computer security expert known for his work on SELinux, has explained some of the ways developers are working to make sure Docker and other container tools keep containers secure, as well as some of the security features currently within Docker and how they function.


Linux Server Maintenance Checklist

Server maintenance needs to be performed regularly to ensure that your server continues to run with minimal problems. While many maintenance tasks are now automated within the Linux operating system, there are still things that need to be checked and monitored regularly to ensure that Linux is running optimally. Below are the steps that should be taken to maintain your servers.


New package updates have been installed within the last month.
Keeping your server up to date is one of the most important maintenance tasks. Before applying updates, confirm that you have a recent backup (or a snapshot, if working with a virtual machine) so that you have the option of reverting if the updates cause unexpected problems. If possible, test updates on a test server before applying them to a production server; this lets you confirm that the updates will not break your server and will be compatible with any other packages or software you may be running.

You can update all packages currently installed on your server by running ‘yum update’ or ‘apt-get upgrade’, depending on your distribution (throughout the rest of this post, commands are aimed at Red Hat based operating systems). Ideally this should be done at least once per month so that you have the latest security patches, bug fixes, and improved functionality and performance. You can automate the process with crontab so that updates are checked for and applied on a schedule rather than manually.
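As a sketch (the schedule and log path below are arbitrary examples), the update can be run manually or scheduled with crontab:

```shell
yum update -y        # Red Hat based; use 'apt-get update && apt-get upgrade' on Debian/Ubuntu
# Or schedule it: run 'crontab -e' and add a line such as
#   0 4 1 * *  /usr/bin/yum update -y >> /var/log/auto-update.log 2>&1
# (04:00 on the first day of every month, output appended to a log file)
```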

Other applications have been updated in the last month.
Web applications such as WordPress, Drupal, and Joomla also need to be updated frequently, as these applications act as a gateway to your server: they are usually more accessible than direct server access and allow public access from the Internet. Many web applications also have third-party plugins installed, which can be written by anyone and may contain security vulnerabilities in their unaudited code, so it is critical to update these applications very frequently. These content management systems are not managed by yum, so they will not be updated by a ‘yum update’ like the other installed packages. Updates are usually provided directly through the application itself; if you’re unsure, contact the application provider for further assistance.

Reboot the server if a kernel update was installed.
If you ran a ‘yum update’ as previously discussed, check whether the kernel was listed as an update. Alternatively, you can explicitly update your kernel with ‘yum update kernel’. The Linux kernel is the core of the Linux operating system and is updated regularly with security patches, bug fixes, and added functionality. Once a new kernel has been installed, you must reboot your server to complete the process. Before you reboot, run ‘uname -r’ to print the kernel version you are currently booted into. After the reboot, run ‘uname -r’ again and confirm that the newer version installed by yum is displayed. If the version number does not change, you may need to investigate which kernel is booted in /boot/grub/grub.conf; yum updates this file by default to boot the new kernel, so normally you shouldn’t have to change anything.
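The check described above can be sketched as follows (run as root on a Red Hat based system):

```shell
uname -r               # note the kernel version currently booted
yum update kernel -y   # install the newest available kernel package
reboot
# ...after the server comes back up:
uname -r               # should now print the newer version number
```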

It is possible to avoid rebooting your server by using third party tools such as Ksplice from Oracle or KernelCare from CloudLinux, however by default on a standard operating system the reboot will be required to make use of the newer kernel.


Server access reviewed within the last 6 months.
To increase security, you should review who has access to your server. In an organization you may have staff who have left but still have accounts with access; these should be removed or disabled. There may also be accounts with sudo access that should not have it. Review this often to avoid a possible security breach, as granting root access is very powerful. You can check the /etc/sudoers file to see who has root access, and if you need to make changes, do so with the ‘visudo’ command. You can view recent logins with the ‘last’ command to see who has been logging into the server.
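A quick review can be sketched with standard tools, run as root (on CentOS 6, regular user UIDs start at 500):

```shell
last -n 10                                      # the ten most recent logins
getent passwd | awk -F: '$3 >= 500 {print $1}'  # list regular user accounts
grep -v '^\s*#' /etc/sudoers | grep -v '^\s*$'  # view sudo grants (edit only with visudo)
```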

Firewall rules reviewed in the last 6-12 months.
Firewall rules should also be reviewed from time to time to ensure that you are only allowing required inbound and outbound traffic. A server’s requirements change: as packages are installed and removed, the ports it listens on may change, potentially introducing vulnerabilities, so it is important to restrict this traffic correctly. In Linux this is typically done with iptables, or perhaps a hardware firewall that sits in front of the server. You can test for open ports by running nmap from another server, and view the current rules on the server by running ‘iptables -L -v’.
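For example (the hostname below is a placeholder for your server’s address, and nmap must be run from a different machine):

```shell
iptables -L -v -n     # list current rules, with packet/byte counters (run as root)
# From another host, probe which ports are actually reachable:
#   nmap -sT server.example.com
```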

Confirm that users must change password.
User accounts should be configured to expire after a period of time; common periods are anywhere between 30 and 90 days. This is important so that a password is only valid for a set amount of time before the user is forced to change it. It increases security because, if an account is compromised, the attacker’s access will not be maintained indefinitely through that account: the password will eventually change to something different.

If your accounts are stored in an LDAP directory such as Active Directory, this can be set for the accounts centrally there. Otherwise, in Linux you can set it on a per-account basis; however, this is not as scalable as using a directory, because you need to implement the changes on each of your servers individually, which takes time. This is done with the chage command: ‘chage -l username’ displays the current settings for the account, for example:

[root@demo  ~]# chage -l root
Last password change                                    : Apr 07, 2014
Password expires                                        : never
Password inactive                                       : never
Account expires                                         : never
Minimum number of days between password change          : 0
Maximum number of days between password change          : 99999
Number of days of warning before password expires       : 7

All of these parameters can be set for every user on the system.
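For example, to enforce a 90-day maximum password age (the demo_user account below is a throwaway created purely so the sketch is self-contained; substitute a real account):

```shell
# Create a throwaway account for the demonstration
useradd -m demo_user

# Force a password change every 90 days, with at least 1 day between
# changes and 14 days of warning before expiry
chage -M 90 -m 1 -W 14 demo_user

# Confirm the policy took effect
chage -l demo_user
```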


Monitoring has been checked and confirmed to work correctly.
If your server is used in production, you most likely have it monitored for various services. It is important to check and confirm that this monitoring is working as intended and reporting correctly, so that you know you will be alerted if there are any issues. Incorrect firewall rules may disrupt monitoring, or your server may have taken on different roles since monitoring was initially configured and may now need to be monitored for additional services.

Resource usage has been checked in the last month.
Resource usage is typically checked as a monitoring activity; however, it is good practice to review long-term monitoring data to spot increases or trends which may indicate that you need to upgrade a component of your server so it can cope with the growing load. The details depend on your monitoring solution, but you should be able to watch CPU usage, free disk space, free physical memory and other variables against thresholds, and if these start to trigger more often you will know to investigate further. Typically in Linux you’ll be monitoring with SNMP/NRPE-based tools such as Nagios or Cacti.

Hardware errors have been checked in the last week.
Critical hardware problems will likely show up in your monitoring and be obvious, as the server may stop working correctly. You can potentially avoid this scenario by monitoring your system for hardware errors, which may give you advance warning that a piece of hardware is having problems and should be replaced before it fails.

You can use mcelog, which processes machine checks (namely memory and CPU errors) on 64-bit Linux systems. It can be installed with ‘yum install mcelog’ and started with ‘/etc/init.d/mcelogd start’. By default mcelog runs hourly via crontab and reports any problems to /var/log/mcelog, so you will want to review this file regularly, every week or so.


Backups and restores have been tested and confirmed working.
It is important to back up your servers in case of data loss, and it is equally important to actually test that your backups work and that you can successfully complete a restore. Check that your backups are running on a daily or weekly basis; most backup software can notify you if a backup task fails, and any failure should be investigated.

It is a good idea to perform a test restore every few months or so to ensure that your backups are working as intended. This may sound time consuming, but it’s well worth it: there are countless stories of backups appearing to work until all the data is lost, and only then do people realize that they cannot actually restore the data from backup.

You can back up locally to the same server, which is not recommended, or to an external location either on your network or out on the Internet; this could be your own server or a cloud storage solution such as Amazon’s S3. An external backup is recommended, but keep in mind that if you are going to store sensitive data at a third-party location, you will probably need to encrypt the data so that it is stored safely.
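A minimal sketch of a backup plus test-restore cycle using tar (the paths under /tmp are illustrative; a real backup target would be external storage or S3):

```shell
# Create some sample data standing in for real content
SRC=/tmp/demo-data
mkdir -p "$SRC"
echo "important record" > "$SRC/file.txt"

# Back it up to a dated archive
DEST=/tmp/backups
mkdir -p "$DEST"
ARCHIVE="$DEST/backup-$(date +%F).tar.gz"
tar -czf "$ARCHIVE" -C /tmp demo-data

# The crucial step: actually restore into a scratch directory and verify
mkdir -p /tmp/restore-test
tar -xzf "$ARCHIVE" -C /tmp/restore-test
cmp "$SRC/file.txt" /tmp/restore-test/demo-data/file.txt && echo "restore verified"
```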

Other general tasks

Unused packages have been removed.
You can save disk space and reduce your attack surface by removing old and unused packages from your server. Having fewer packages installed is a good way to harden and secure it, as there is less code available for an attacker to make use of. ‘yum list installed’ will display all packages currently installed on your server, and ‘yum remove package-name’ will remove a package; just be sure you know what the package is and that you actually want to remove it. Be careful when removing packages with yum: if you remove a package that another package depends on, the dependent package will also be removed, which can potentially remove a lot of things at once. After running the command yum will confirm the list of packages to be removed, so double check it carefully before proceeding.

File system check performed in the last 180 days.
By default, after 180 days or 20 mounts (whichever comes first) your file systems will be checked with e2fsck on the next boot; this should run occasionally to ensure disk integrity and repair any problems. You can force a disk check by running ‘touch /forcefsck’ and then rebooting the server (the file is removed on the next boot), or with the ‘shutdown -rF now’ command, which forces a disk check on the next boot and performs the reboot now. Alternatively you can use -f instead of -F to skip the disk check; this is known as a fast boot and can also be done with ‘touch /fastboot’. This is useful if, for example, you have just performed a kernel update and want the server back up as soon as possible rather than waiting for the check to complete.

The mount count can be modified using the tune2fs command. The defaults are reasonable, but ‘tune2fs -c 50 /dev/sda1’ will raise the mount count to 50, so a file system check will only happen after the volume has been mounted 50 times. Similarly, ‘tune2fs -i 210 /dev/sda1’ will change the interval so the disk is only checked after 210 days rather than 180.

Logs and statistics are being monitored daily or weekly.
If you look through /var/log you will notice a lot of different log files that are continually written to, sometimes with useful information but often not, leaving a large amount of data to go through. Logwatch can monitor your server’s logs and email the administrator a summary on a daily or weekly basis; you can control it via crontab. Logwatch can also include other useful server information, such as the disk space in use on each partition, so it’s a good way to get up-to-date notifications from your servers. You can install the package with ‘yum install logwatch’.
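For instance, a few lines in /etc/logwatch/conf/logwatch.conf control where and how often the summary is delivered (the email address below is a placeholder):

```
# /etc/logwatch/conf/logwatch.conf (excerpt)
Output = mail                  # send the report by email
MailTo = admin@example.com     # placeholder address
Range = yesterday              # summarize the previous day's logs
Detail = Low                   # Low, Med or High
```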

Regular scans are being run on a weekly/monthly basis.
In order to stay secure it is important to scan your server for malicious content. ClamAV is an open source antivirus engine which detects trojans, malware and viruses and works well with Linux. You can set a cron job to run a weekly scan, at 3AM for instance, and then email you a report outlining the results. Depending on how much content you have the scan may take a while, so it’s recommended that you run an intensive scan once per week at a low-usage time, such as over the weekend or overnight. Check the crontab and the /var/log/cron log file to ensure that the scans are running as intended; if you configure an email summary, also confirm that you are receiving those alerts.
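As an illustration (the paths and schedule are assumptions; adjust them to your layout), a cron entry for a weekly Sunday 3AM scan might look like:

```
# /etc/cron.d/clamav-weekly -- recursive scan, report infected files only
0 3 * * 0 root /usr/bin/clamscan -ri /home /var/www --log=/var/log/clamav/weekly.log
```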

OK, folks that’s it for this post. Have a nice day guys…… Stay tuned…..!!!!!

Don’t forget to like & share this post on social networks!!! I will keep on updating this blog. Please do follow!!!

IT Services and Managed Service Providers (MSPs)

In-house tools can make you more efficient at monitoring, patching, providing remote support, and service delivery, but you also need to ensure regular scheduled maintenance of every client system. That’s where a Managed Service Provider (MSP) comes in.

What’s a Managed Service Provider (MSP)?

A managed service provider (MSP) caters to enterprises, residences, or other service providers. It delivers network, application, system and e-management services across a network, using a “pay as you go” pricing model.

A “pure play” MSP focuses on management services. The MSP market features other players – including application service providers (ASPs), Web hosting companies, and network service providers (NSPs) – who supplement their traditional offerings with management services.

You Probably Need an MSP if….

Your business has a network meeting any of the following criteria:

  • Connects multiple offices, stores, or other sites
  • Is growing beyond the capacity of current access lines
  • Must provide secure connectivity to mobile and remote employees
  • Could benefit from cost savings by integrating voice and data traffic
  • Anticipates more traffic from video and other high-bandwidth applications
  • Is becoming harder to manage, and harder to keep secure and performing well, especially given limited staff and budget

What Can You Gain?

1. Future proof services, using top-line technology

IT services and equipment from an MSP are constantly upgraded, with no additional cost or financial risk to yourself. There’s little chance that your Managed IT Services will become obsolete.

2. Low capital outlay and predictable monthly costs

Typically, there’s a fixed monthly payment plan. A tight service level agreement (SLA) will ensure no unexpected upgrade charges or changes in standard charges.

3. Flexible services

A pay-as-you-go scheme allows for quick growth when necessary, or cost savings when you need to consolidate.

4. Converged services

A single “converged” connection can provide multiple Managed IT Services, resulting in cost-savings on infrastructure.

5. Resilient and secure infrastructure

A Managed Service Provider’s data centres and managed network infrastructure are designed to run under 24/7/365 management. Typically, their security procedures have to meet government approval.

6. Access to specialist skills

The MSP will have staff on hand capable of addressing specific problems. You may only need this skill once, and save the expense of training your staff for skills they’ll never use.

7. Centralized applications and servers

Access to centralized data centers within the network can also extend access to virtual services, as well as storage and backup infrastructure.

8. Increased Service Levels

SLAs can ensure continuity of service. A managed service company will also offer 24/7/365 support.

9. Disaster recovery and business continuity

MSPs have designed networks and data centers for availability, resilience and redundancy, to maintain business continuity. Your data will be safe and your voice services will continue to be delivered, even if your main office goes down.

10. Energy savings

By running your applications on a virtual platform and centralizing your critical business systems within data centers, you’ll lower your carbon footprint and reduce costs.

Functions of an MSP

Under Managed Services, the IT provider assumes responsibility for a client’s network, and provides regular preventive maintenance of the client’s systems. Technical support is delivered under a service level agreement (SLA) that provides specified rates, and guarantees the consultant a specific minimum income.

The core tools of Managed Services are:

  1. Patch Management
  2. Remote Access provision
  3. Monitoring tools
  4. Some level of Automated Response

Most MSPs also use a professional services automation (PSA) tool such as Autotask or ConnectWise. A PSA provides a Ticketing System, to keep track of service requests and their responses. It may also provide a way to manage Service Agreements, and keep track of technicians’ labor.

In essence, though, it boils down to this: If a system crashes, and the Managed Service Provider is monitoring the network, that MSP has total responsibility for the state of the backup and the health of the server.

As their client (and this should be spelled out, in the SLA), you can hold the MSP totally responsible – up to and including court action, for failing to provide the service they’re contracted to provide.

How to Choose an MSP

Here are five key characteristics to consider, when selecting a managed service provider:

1. Comprehensive Technology Suite

The MSP should have a broad set of solutions available to meet not only your current needs, but to scale and grow as your business develops new products and services.

A well-equipped MSP will offer support for virtual infrastructures, storage, co-location, end user computing, application management capabilities, etc. The MSP should be able to accommodate a range of applications and systems, under a service level agreement that covers the full technology stack, from infrastructure all the way up to the application layer.

2. Customization and Best Practices

Look for a service provider with the expertise to modify each architecture based on individual business goals.

Their best practices should ensure seamless migration for customers, by taking an existing physical machine infrastructure and virtualizing it. Comprehensive support should be available throughout.

3. Customer-Centric Mindset

The MSP should provide a dedicated account manager who serves as the single point of contact and escalation for the customer. Support should be readily available, along with access to other service channels, as required.

The most effective MSPs will be available to address problems around the clock, and have effective troubleshooting capabilities.

4. Security

For customers working in regulated environments such as healthcare and financial services, security and compliance issues are paramount. The MSP should have a robust, tested infrastructure and an operational fabric that spans several geographical zones, which reduces susceptibility to natural disasters and service interruptions.

The provider should continuously monitor threats and ensure that each system is designed with redundancy at every level.

5. The Proper Scale

If a small business selects one of the largest service providers, they may not receive a high level of customer-centric, flexible and customized support. Conversely, if a business selects an MSP that’s too small, it may lack the scale and expertise to offer the necessary support.

Having direct access to a senior member of the MSP’s management team by direct email or cell phone can be a good measure of the degree of personalized attention a customer is likely to receive.

Understanding the different types of service providers is the first step in making the right decision for your organization.



Hi all, in this post I will be discussing two web server packages. One is Apache, which has long shown its ability to do many things in a single package with the help of modules, and which powers millions of websites on the Internet. The other is the relatively new web server called Nginx, created by Russian programmer Igor Sysoev.

Many people in the industry are aware of the “speed” for which Nginx is famous. There are also some other important differences between the working models of Apache and Nginx, and we will discuss those differences in detail.


Let’s start with the two working models used by the Apache web server; we will get to Nginx later. Most people who work with Apache will know these two models, through which Apache serves its requests:

1. Apache MPM Prefork
2. Apache MPM Worker

Note: there are many different MPM modules available for different platforms and functionalities, but we will only discuss the above two here.
Let’s take a look at the main difference between MPM Prefork and MPM Worker. MPM stands for “Multi Processing Module”.

MPM Prefork :

Most of the functionality in Apache comes from modules; even MPM Prefork comes as a module and can be enabled or disabled. The prefork model of Apache is non-threaded, which is a good property as it isolates each connection from the others: if one connection has a problem, the others are not affected at all. By default, if no MPM module is specified, Apache uses MPM Prefork. However, this model is very resource intensive.

Why is Prefork model resource intensive?

Because in this model a single parent process creates many child processes, which wait for requests and serve them as they arrive. This means each and every request is served by a separate process; in other words, it is “process per request”. Apache also maintains several idle processes before requests arrive, so that when requests do come in they can be served quickly.

But each process consumes system resources such as RAM and CPU, and a roughly equal amount of RAM is used by each process.

prefork model

If you receive a large number of requests at one time, Apache will spawn a correspondingly large number of child processes, resulting in heavy resource utilization, as each process uses a certain amount of system memory and CPU.

MPM Worker :

This model of Apache can serve a large number of requests with fewer system resources than the prefork model, because a limited number of processes serve many requests.
This is Apache’s multi-threaded architecture: it uses threads, rather than whole processes, to serve requests. So what is a thread?
In operating systems, a thread is a lightweight unit of execution within a process that does some work and exits; it is sometimes described as a process inside a process.

apache worker

In this model there is also a single parent process which spawns child processes, but instead of “process per request” it is “thread per request”: each child process contains a certain number of threads. Each child has both server threads and idle threads; the idle threads wait for new requests, so no time is wasted creating threads when requests arrive.
The directive “StartServers” in the Apache config file /etc/httpd/conf/httpd.conf controls how many child processes exist when Apache starts, and each child process handles requests with a fixed number of threads specified by the “ThreadsPerChild” directive in the same file.
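A hedged sketch of how these directives might look in httpd.conf (the numbers are illustrative and should be tuned to your available RAM; MaxClients is the Apache 2.2 name for the request limit):

```apache
# Prefork: one process per request -- keep a pool of idle children ready
<IfModule prefork.c>
    StartServers       5
    MinSpareServers    5
    MaxSpareServers   10
    MaxClients       150
</IfModule>

# Worker: a few processes, each serving many requests via threads
<IfModule worker.c>
    StartServers       2
    ThreadsPerChild   25
    MaxClients       150
</IfModule>
```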

Note: some PHP module issues have been reported when working with the Apache MPM Worker model.

Now let’s discuss Nginx.


Nginx was created to solve the C10K problem seen with servers like Apache.

C10K: the name given to the problem of optimizing web server software to handle a large number of requests at one time, in the range of 10,000 simultaneous requests, hence the name.
Nginx is known for its speed in serving static pages, much faster than Apache, while keeping machine resource usage very low.
Fundamentally, Apache and Nginx differ a great deal.
Apache uses a multi-process/multi-threaded architecture, while Nginx uses an event-driven, single-threaded architecture (more on event-driven below). The main difference this makes is that a very small number of Nginx worker processes can serve a very large number of requests.
Nginx is also sometimes deployed as a front-end server, serving static content directly to clients, with Apache behind it.
Each worker process handles requests using the event-driven model. Nginx does this with the help of Linux kernel facilities such as epoll (with select/poll as fallbacks). Apache, even when run with its threaded model, uses considerably more system resources than Nginx.

Why does Nginx run more efficiently than Apache?

In Apache, when a request is served, either a thread or a process is created to serve it. If the request needs data from the database, or files from disk, the process waits for it.
So some Apache processes just sit and wait for a task to complete, consuming system resources.
Suppose a client with a slow Internet connection connects to a web server running Apache. Apache retrieves the data from disk to serve the client, but even after sending the response the process waits until confirmation is received from the client, wasting that process’s resources the whole time.
Nginx avoids the idea of a process or thread per request. Within each worker process, all requests are handled by a single thread, and that thread handles everything with the help of an event loop: the thread acts only when a new connection arrives or something is required, so no resources are wasted on waiting.

Step 1: A request arrives.
Step 2: The request triggers events inside the process.
Step 3: The process handles these events and returns the output (while simultaneously handling events for other requests).
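In nginx.conf this model surfaces as just a handful of directives (the values below are illustrative):

```nginx
# One single-threaded worker per CPU core is the usual rule of thumb
worker_processes  4;

events {
    use epoll;                 # Linux kernel event notification facility
    worker_connections  4096;  # connections each worker can juggle at once
}
```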

Nginx also supports major functionality that Apache supports, including the following:

  • Virtual Hosts
  • Reverse Proxy
  • Load Balancer
  • Compression
  • URL rewrite



Amazon Web Services vs. Microsoft Azure vs. Google Cloud Platform

The rivalry is warming up in the cloud space as vendors offer innovative features and frequently reduce prices. In this post we will look at the competition among the three titans of the cloud: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Which of the three will thrive and win the battle, time will tell; meanwhile, IBM SoftLayer and Alibaba’s AliCloud have joined the bandwagon.

Although AWS (Amazon Web Services) has a noteworthy head start, Microsoft and Google are not out of the race. Google is building 12 new cloud data centers over the next 18 months, and both vendors have the money, marketing muscle and technology to draw enterprise and individual customers.

To help with decision making, this article provides a brief breakdown of these three market giants. It will also explain the advantages of adopting a multi-cloud strategy.

[Image: Gartner Magic Quadrant for Infrastructure as a Service, 2016]

Source: Gartner (August 2016)

Amazon Web Services

AWS has well organized data centers distributed across the globe. Availability Zones are placed at quite a distance from each other, so that the failure of one Availability Zone does not impact the others.


Microsoft has been quickly building more and more data centers all over the world to catch up with Amazon’s vast geographical presence. From six regions in 2011, it has grown to 22 regions, each containing one or more data centers, with five additional regions planned to open in 2016. While Amazon was the first to open a region in China, Microsoft was the first to open a region in India, at the end of 2015.


Google has the smallest geographic footprint of the three cloud providers, but it makes up for this limitation with its worldwide network infrastructure, providing low-latency, high-speed connectivity between its data centers at both regional and inter-regional level.


Amazon’s Elastic Compute Cloud (EC2) is its core compute service, enabling users to create virtual machines from pre-configured or custom AMIs. You can choose the size and number of VMs, the memory capacity, and the availability zone from which to launch. EC2 also provides auto-scaling and Elastic Load Balancing (ELB): ELB distributes incoming traffic across instances for better performance, and auto-scaling lets users automatically scale available EC2 capacity up or down.

Google launched its cloud computing service, GCE (Google Compute Engine), in 2012. GCE allows users to launch VMs, much like AWS, into regions and availability groups, though it was not accessible to everyone until 2013. Google subsequently added improvements such as broader operating system support, load balancing, faster persistent disks, live migration of virtual machines, and instances with more cores.

Microsoft likewise launched its cloud compute service in 2012, but it was not generally available until May 2013. Users select a Virtual Hard Disk (VHD), similar to Amazon’s AMI, for VM creation; a VHD can be predefined by Microsoft, by third parties, or by the user. With every virtual machine you specify the number of cores and the amount of memory.


Storage is one of the primary elements of IT. Here we compare the storage offerings of the three large cloud providers across the two primary storage types: block storage and object storage.


Amazon’s block storage service is known as EBS (Elastic Block Store), and it supports three types of persistent disks: magnetic, SSD, and SSD with provisioned IOPS (Input/Output Operations Per Second). Volume sizes range up to 1TB for magnetic disks and 16TB for SSD.

Amazon’s world-leading object storage service, S3 (Simple Storage Service), offers four different storage classes: standard, reduced redundancy, standard – infrequent access, and Glacier. Data is stored within a single region unless it is replicated manually to other regions.


Microsoft refers to its storage services as Blobs. Disks and Page Blobs are its block storage service, available in Standard or Premium tiers with volume sizes up to 1TB. Block Blobs are its object storage service. Like Amazon, it offers four redundancy options: LRS (locally redundant storage), where redundant copies of the data are kept within the same data center; ZRS (zone redundant storage), where copies are maintained in different data centers in the same region; GRS (geo-redundant storage), which performs LRS in two separate data centers for a higher level of availability and durability; and RA-GRS (read-access geo-redundant storage), which adds read access to the secondary location.


In Google’s cloud, storage is structured a bit differently than with its two competitors. Block storage is not a separate product category but an add-on to instances within Google Compute Engine (GCE). Google offers two choices, magnetic or SSD volumes, though the IOPS count is fixed. Ephemeral disk is fully configurable and is part of the storage offering. The object storage service, Google Cloud Storage, is divided into three classes: Standard, Durable Reduced Availability for less critical data, and Nearline for archives.


Amazon’s VPCs (Virtual Private Clouds) and Azure’s VNETs (Virtual Networks) enable users to group virtual machines into isolated networks in the cloud. Using VNETs and VPCs, users can define a network topology and create subnets, route tables, network gateways and private IP address ranges. There is little to choose between them here, as both offer ways to extend your on-premises data center into the public cloud. In contrast, every GCE instance belongs to a single network, which defines the gateway address and address range for all instances attached to it. Firewall rules can be applied to an instance, and it can receive a public IP address.

Billing Structure

Amazon Web Services 

AWS organizes resources under accounts, with each account comprising a single billing unit within which cloud resources are provisioned. Organizations with numerous AWS accounts, though, often wish to receive one combined bill instead of several separate ones. AWS permits this through consolidated billing: one account is selected as the paying account and the other accounts are linked to it. The bill then covers the paying account and all linked accounts together, referred to as the consolidated billing account family.


Microsoft uses a tiered approach to account management. The subscription is the lowest rung of the ladder and is the unit that actually consumes and provisions resources; an account manages several subscriptions. It may sound like the AWS account structure, but Azure accounts are management units and do not consume resources themselves. For organizations without a Microsoft Enterprise Agreement, that is where the hierarchy ends. Those with Enterprise Agreements can register them in Azure and manage accounts under them, with departmental administration and optional cost center hierarchies.


Google uses a flat structure for its billing. Resources are clustered into groups known as Projects. There is no entity higher than a project; however, several projects can be gathered under a consolidated billing account. This is similar to Azure’s accounts in the sense that a billing account is not a consuming entity and cannot provision services.


Cloud service vendors provide various pricing and discount models for their services. Most of the complex pricing and discount models revolve around compute services, whereas bulk discounts are typically used for everything else. This is primarily for two reasons. First, the vendors operate in a very competitive market and want to lock in their users with long-term commitments. Second, they have an interest in maximizing the use of their infrastructure, where every idle VM hour represents a real loss.

Amazon Web Services

AWS has the most diversified and complex pricing models for its Elastic Compute Cloud (EC2) services:

On-demand: customers pay for what they use, with no upfront cost.

Reserved Instances: customers reserve instances for one or three years, with an upfront cost based on usage. Payment options include:

  • All-upfront: the customer pays the total commitment upfront and receives the highest discount rate
  • Partial-upfront: the customer pays 50-70 percent of the commitment upfront and the remainder in monthly installments, receiving a somewhat lower discount than all-upfront
  • No-upfront: the customer pays nothing upfront, and the sum is paid in monthly installments over the term of the reservation, with a considerably lower discount


Microsoft bills its customers on demand, rounding up to the minute used. Azure also offers discounts, but only for bulk financial commitments: pre-paid subscriptions give a 5 percent discount on the bill, and Microsoft Enterprise Agreements can attract higher discounts in return for an upfront financial commitment.


GCP bills for instances by rounding up the number of minutes used, with a 10-minute minimum. It recently announced sustained-use pricing for compute services, a simpler and more flexible approach: the on-demand baseline hourly rate is automatically discounted as a given instance is used for a larger percentage of the month.

The Bottom Line

The public cloud war slogs on. With cloud computing still in an early maturing stage, it is tough to foresee exactly how things might change, but prices will likely continue to drop and attractive, innovative features will keep appearing. Cloud computing is here to stay, and with the growing maturity of private and public cloud platforms and massive IaaS adoption, enterprises now understand that depending on a single cloud vendor is not a long-term option. Issues such as vendor lock-in, higher availability and competitive pricing may push enterprises to look for an optimal mix of clouds for their requirements, rather than a sole provider.

OK, folks, that’s it for this post!

Don’t forget to like & share this post on social networks!!! I will keep on updating this blog. Please do follow!!!

Server Monitoring Best Practices

As a business, you may be running many on-site or Web-based applications and services: security, data handling, transaction support, load balancing, or the management of large distributed systems. How well these perform depends on the condition of your servers, so it’s vital to continuously monitor their health and performance.

Here are some guidelines designed to help you get to grips with server monitoring and the implications that it carries.

Understand Server Monitoring Basics

The basic elements of “Server monitoring” are events, thresholds, notifications, and health.

1. Events

Events are triggered on a system when a condition set by a given program occurs. An example would be when a service starts, or fails to start.

2. Thresholds

A threshold is the point on a scale that must be reached to trigger a response to an event. The response might be an alert, a notification, or a script being run.

Thresholds can be set by an application, or a user.
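The event/threshold/notification chain described above can be sketched in a few lines. This is a minimal illustration, not any particular monitoring product’s API:

```python
# Minimal sketch of the event/threshold/notification loop: when a
# monitored value crosses a user-set threshold, a response (here, a
# notification callback) is triggered.

def check_threshold(value, threshold, notify):
    """Fire the notification callback when the threshold is reached."""
    if value >= threshold:
        notify(f"Threshold {threshold} reached: current value {value}")
        return True
    return False

alerts = []
check_threshold(95, 90, alerts.append)   # CPU at 95% vs 90% threshold
check_threshold(40, 90, alerts.append)   # healthy value: no alert
assert len(alerts) == 1
```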

3. Notifications

Notifications are the methods of informing an IT administrator that something (event, or response) has occurred.

Notifications can take many forms, such as:

  • Alerts in an application
  • E-mail messages
  • Instant Messenger messages
  • Dialog boxes on an IT administrator’s screen
  • Pager text messages
  • Taskbar pop-ups

4. Health

Health describes the set of related measurements defining the state of a variable being monitored.

For instance, the overall health of a file server might be defined by read/write disk access, CPU usage, network performance, and disk fragmentation.
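A health rollup like the file-server example can be sketched as follows; the metric names and limits are illustrative, not a standard:

```python
# Sketch of a "health" rollup: the overall state of a server is
# derived from several monitored variables, each with its own limit.
# Metric names and limit values are illustrative.

def server_health(metrics, limits):
    """Return 'OK' only if every metric is within its limit."""
    breaches = [name for name, value in metrics.items()
                if value > limits[name]]
    return ("OK", []) if not breaches else ("DEGRADED", breaches)

metrics = {"cpu_percent": 45, "disk_io_ms": 12, "fragmentation_percent": 30}
limits  = {"cpu_percent": 80, "disk_io_ms": 50, "fragmentation_percent": 25}

state, breaches = server_health(metrics, limits)
assert state == "DEGRADED" and breaches == ["fragmentation_percent"]
```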

Set Clear Objectives

Decide what you need to monitor. Identify the events most relevant to detecting potential issues that could adversely affect operations or security.

A checklist might include:

  1. Uptime and performance statistics of your web servers
  2. Web applications supported by your web servers
  3. Performance and user experience of your web pages, as supported by your web server
  4. End-user connections to the server
  5. Measurements of load, traffic, and utilisation
  6. A log of HTTP and HTTPS sessions and transactions
  7. The condition of your server hardware
  8. Virtual Machines (VMs) and host machines running the web server

Fit Solid Foundations

It’s safe to say that most IT administrators appreciate useful data, clearly presented, enabling them to view a lot of information in a compact, legible area. This means you should take steps to ensure that your monitoring output is easy to read and well presented.

A high-level “dashboard” can serve as a starting point. This should have controls for drilling down into more detail. Navigation around the monitoring tool and access to troubleshooting tools should be as transparent as possible.

It’s also necessary to:

• Identify the top variables to monitor, and set these as default values. Prioritise them in the user interface (UI).

• Provide preconfigured monitoring views that match situations encountered on a day-to-day basis.

• Have a UI that also allows for easy customization.

• Users/IT managers should be able to choose what they want to monitor at any given time, adjust the placement of their tools, and decide the format in which they want to view the data.

• The UI text should be consistent, clear, concise, and professional. From the outset, it should state clearly what is being monitored – and what isn’t.

Build, to Scale

Organizations of different sizes naturally have different monitoring needs. Small Organization IT administrators often look to fix problems after they’ve been identified. Monitoring is generally part of the troubleshooting process. Monitoring applications should intelligently identify problems, and notify the users via e-mail and other means. Keep the monitoring UI simple.

Medium Organization IT administrators monitor to isolate big and obvious problems. A monitoring system should provide an overview of the system, plus explanations to help with the troubleshooting process. Preconfigured views, and automation of the common tasks performed on receiving negative monitoring information (e.g., ping, traceroute), will speed response. Again, keep the monitoring UI simple.

Large Organization/Enterprise IT administrators require more detailed and specific information. Users may be dedicated exclusively to monitoring, and will appreciate dense data, with mechanisms for collaborating. Long-term ease of use will take precedence over ease of learning.

Set Up Red Flags

You should provide a set of “normal” or “recommended” values, as a baseline. This will give context to the information being monitored. The system may give the range of normal values itself, or provide tools for users to calculate their own.

Within the application, make sure that data representing normal performance can be captured. This can be used later, as a baseline for troubleshooting. In any case, users should be able to tell at a glance when a value is out of range, and is then a possible cause for concern. Your monitoring software can assist in this, by setting a standard alert scale, across the application.

In Western cultures, common colors for system alerts are:

  • Red = Severe or Critical
  • Yellow = Warning or Informational
  • Green = Good or OK

For accessibility, colors should be combined with icons for users who are sight-impaired; words that can be dictated by a screen reader are also appropriate. Limit the use of custom icons in alerts, though, as users may resent having to learn too many new ones, and they may conflict with icons in other applications. That said, common, recognizable icons are fine, as there’s nothing new to learn.
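One way to follow this advice is to pair each severity with both a color and a text symbol, so alerts stay distinguishable without relying on color alone. The mapping below is a minimal sketch:

```python
# Sketch pairing each severity with both a color and a text symbol,
# so alerts remain readable for color-blind users and screen readers.
# The specific symbols and labels are illustrative choices.

SEVERITY_STYLES = {
    "critical": {"color": "red",    "symbol": "✖"},
    "warning":  {"color": "yellow", "symbol": "⚠"},
    "ok":       {"color": "green",  "symbol": "✔"},
}

def render_alert(severity, message):
    """Format an alert line carrying both symbol and severity word."""
    style = SEVERITY_STYLES[severity]
    return f"[{style['symbol']} {severity.upper()}] {message}"

line = render_alert("critical", "disk full on /var")
assert line.startswith("[✖ CRITICAL]")
```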

Explain the Language

Don’t assume that your users will understand all the information your monitoring software provides. Help them interpret the data, by providing explanations, in the user interface.

  • Use roll-overs to display specific data points, such as a critical spike in a chart
  • Explain onscreen, how the monitoring view is filtered. For example, some variables or events might be hidden (but not necessarily trouble-free). The filter mechanism, an explanation of the filter, and the data itself should be positioned close together
  • Give easy access to any variables that are excluded in a view
  • State when the last sample of data was captured
  • Reference the data sources
  • There should be links to table, column, and row headings, with pop up explanations of the variables, abbreviations, and acronyms
  • Provide links beside the tables themselves, with pop up explanations of the entire table

Let Them Know

Alerts should be sent out, to indicate there is a problem with the system. Notifications should be informative enough to give IT administrators a starting point to address the problem. Information which helps the user take action should be displayed near the monitoring information. Probable causes and possible solutions should be prominently displayed.

Likewise, the tools needed for solving common problems should be easily accessible at the notification point.

You should log 24 to 48 hours of data. That way, when a problem arises, users will have enough information available to troubleshoot. Note that some applications need longer periods of monitoring, and some shorter. The log length will be determined by the scope of your day-to-day operations.
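A rolling retention window like this can be sketched with a simple time-bounded log; the 48-hour figure is just the example used above, to be tuned per workload:

```python
# Sketch of a rolling monitoring log that retains only the last
# 48 hours of samples, giving troubleshooters a recent data window.
# The retention period is the example figure from the text.

from collections import deque
from datetime import datetime, timedelta

RETENTION = timedelta(hours=48)

log = deque()   # (timestamp, sample) pairs, oldest first

def record(sample, now=None):
    """Append a sample and evict anything older than the window."""
    now = now or datetime.now()
    log.append((now, sample))
    while log and now - log[0][0] > RETENTION:
        log.popleft()

t0 = datetime(2024, 1, 1)
record({"cpu": 30}, now=t0)
record({"cpu": 90}, now=t0 + timedelta(hours=72))  # 72h later: t0 expires
assert len(log) == 1
```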

Provide multiple channels for notification (email, Instant Messages, pager text, etc.)

Users should be able (and encouraged) to update, document, and share the information needed to start troubleshooting.

Keep Them Informed

Users often need monitoring data for further analysis, or for reports. The monitoring application itself should assist, with built-in reporting tools. Performance statistics and an overall summary should be generated at least once a week; analysis of critical or noteworthy events should be available daily.

Allow users to capture and save monitoring data – e.g., the “normal” performance figures used as a baseline for troubleshooting. Users should be able to easily specify what they want recorded (variables, format, duration, etc.). They should also be allowed to log the information they’re monitoring.

There should be a central repository, for all logs from different areas of monitoring. A single UI can then be used, to analyze the data. Export tools (to formats such as .xls, .html, .txt, .csv) should be provided. This will help to facilitate collaboration in reporting and troubleshooting.
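Exporting from such a repository to CSV, for instance, takes only a few lines with the standard library; the field names here are illustrative:

```python
# Sketch of exporting collected monitoring samples to CSV so they
# can be shared for reporting and collaborative troubleshooting.
# Field names are illustrative examples.

import csv
import io

samples = [
    {"timestamp": "2024-01-01T00:00", "host": "web01", "cpu_percent": 45},
    {"timestamp": "2024-01-01T00:05", "host": "web01", "cpu_percent": 52},
]

def export_csv(rows, fileobj):
    """Write dict rows to a CSV stream, header first."""
    writer = csv.DictWriter(fileobj, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)

buf = io.StringIO()
export_csv(samples, buf)
assert buf.getvalue().splitlines()[0] == "timestamp,host,cpu_percent"
```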

Take Appropriate Measures

The graph type you choose should suit the type of information you are analyzing.

Line graphs are good for displaying one or more variables on a scale, such as time. Ranges, medians, and means can all be shown simultaneously.

Table format makes it easy for users to see precise numbers. Table cells can also contain graphical elements associated with numbers, or symbols to indicate state. The most important information should appear first, or be highlighted so that it can be taken in at a glance.

Histograms or bar graphs allow values at a single point in time to be compared easily. Ranges, medians, and means can all be displayed simultaneously.

Some recommendations:

  • When using a line graph, show as few variables as possible. Five is a safe maximum. This makes the graph easier to read
  • Avoid using stacked bar graphs. It’s better to use a histogram, and put the values in clusters along the same baseline. Alternatively, break them up into separate graphs
  • When using a graph to show percentage data, always use a pie chart
  • Consider providing a details pane; clicking a graph will display details about the graph in the pane
  • Avoid trying to convey too many messages in one graph
  • Never use a graph to display a single data point (a single number)
  • Avoid the use of 3D in your charts; it can be distracting
  • Allow users to easily flip between different views of the same data

Push the Relevant Facts

Displaying too much onscreen makes it harder for administrators to spot the information of most value – like critical error messages.

Draw attention to what needs attention, most:

  • by placing important items prominently
  • by putting more important information before the less important
  • by using visual signposts, such as text or an icon, to indicate important information

Preconfigured monitoring views will reduce the emphasis on users configuring the system. Allow users to customise the information and highlight what they think is important, so it can be elevated in the UI. Group similar events – and consider having a global overview of the system, visible at all times.

Hide the Redundant

If it hasn’t gone critical, or isn’t affecting anything, they don’t need to see it. At least, not immediately. If a failure reoccurs, don’t keep showing the same event, over and over. Try to group similar events into one.
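Grouping repeated events into one entry with a count, as suggested above, can be sketched like this:

```python
# Sketch of collapsing repeated identical events into one entry with
# a count, so a recurring failure doesn't flood the monitoring UI.
# The event messages are illustrative.

from collections import Counter

events = [
    "service nginx failed to start",
    "disk /dev/sda1 over 90% full",
    "service nginx failed to start",
    "service nginx failed to start",
]

grouped = Counter(events)
summary = [f"{msg} (x{count})" for msg, count in grouped.items()]

assert "service nginx failed to start (x3)" in summary
assert len(summary) == 2
```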

Allow your users to tag certain events as ones they don’t want to view. Let them set thresholds that match their own monitoring criteria. This allows them to create environment-specific standards, and reduces false alarms. Use filters and views, to give users granular control of what they are monitoring.

Provide the ability to zoom in for more detailed information, or zoom out for aggregated data. Allow users to hide unimportant events, but still have them accessible.

Be Prepared, for the Worst

As well as probable causes, the application should suggest possible solutions, for any problems that occur. Administrators will likely have preferred methods of troubleshooting. But, in diagnostic sciences, it helps to get a second opinion. It’s essential to identify events most indicative of potential operational or security issues. Then, automate the creation of alerts on those events, to notify the appropriate personnel.

Being prepared also means that all data should be backed up and stored off the premises as well as on the network. This protects against the obvious such as hardware failure or malware attacks, but also against complete disaster such as a fire at the premises.

And the Best, that Can Happen

With proper monitoring measures in place, you greatly reduce the risk of losses due to poor server performance. This has a corresponding positive effect on your business – especially online services and transactions.

A well-tuned monitoring system will help facilitate the identification of potential issues, and accelerate the process of fixing unexpected problems before they can affect your users.


Troubleshooting Network & Computer Performance Problems

Problem solving is an inevitable part of any IT technician’s job. From time to time you will encounter a computer or network problem that will, simply put, just leave you stumped. When this happens it can be an extremely nerve-wracking experience, and your first instinct might be to panic.

Don’t do this. You need to believe that you can solve the problem. Undoubtedly you have solved computer performance or network troubles in the past, either on your job or during your training and education. So, if you come across a humdinger that, at first glance at least, you just can’t seem to see a way out of, instead of panicking, try to focus and get into the ‘zone’. Visualize the biggest problem that you’ve managed to solve in the past, and remember the triumph and elation that you felt when you finally overcame it. Tell yourself, “I will beat this computer,” get in the zone, and prepare for battle.

Top 3 Computer & Network Issues You’re Likely To Experience

Network staff and IT security personnel are forever tasked with identifying and solving all manner of difficulties, especially on large networks. Thankfully there are, generally speaking, three main categories that the causes of these issues will fall into. These are: Performance Degradation; Host Identification; and Security.

Let’s take a closer look at each of these categories.

1. Performance Degradation

Performance degradation is when speed and data integrity start to lapse, normally due to poor-quality transmissions. All networks, no matter their size, are susceptible to performance issues; however, the larger the network, the more problems there are likely to be. This is due mainly to the greater distances involved, and the additional equipment, endpoints, and midpoints.

Furthermore, networks that aren’t properly equipped with an adequate amount of switches, routers, domain controllers etc. will inevitably put the whole system under severe strain, and performance will thereby suffer.

So, having an adequate amount of quality hardware is of course the start of the mission to reduce the risk of any problems that you may encounter. But hardware alone is not enough without proper configuration – so you need to get this right too.

2. Host Identification

Proper configuration is also key to maintaining proper host identification. Computer networking hardware cannot deliver messages to the right places without correct addressing. Manual addressing can often work for small networks, but it is impractical in larger organizations. Domain controllers and DHCP servers, with their addressing protocols and software, are absolutely essential when creating and maintaining a large, scalable network.

3. Security

Host identification and performance fixes will make no difference to a network that finds itself breached by hackers, so security is also of utmost importance.

Network security means preventing unauthorized users from infiltrating a system and stealing sensitive information, maintaining network integrity, and protecting the network against denial-of-service attacks. Again, these issues all magnify in line with the size of the network, simply because there are more vulnerable points at which hackers may try to gain access. On top of this, more users mean more passwords, more hardware, and more potential entry points for hackers.

Your defenses against these types of threats will of course be firewalls, proxies, antivirus software, network analysis software, stringent password policies, and procedures that adequately compartmentalize large networks within internal boundaries – plenty of areas, then, that may encounter problems.

Troubleshooting the Problems

Ok, so those are the potential difficulties that you are most likely to encounter. Identifying the source of any given problem out of all of these things can of course cause a lot of stress for the practitioner tasked with solving it. So, once you’ve got into the ‘zone’, follow these next 5 simple problem solving strategies and you’ll get to the bottom of the snag in no time. Just believe.

1. Collect Every Piece of Information You Can

This means writing down precisely what is wrong with the computer or network. This simple act alone starts to trigger your brain into searching for potential solutions. Draw a diagram to sketch out the problem as well; it will help you visualize the task at hand.

Next, ask around the office to find out if anything has changed recently – any new hardware, for instance, or any new programs that have been added. If something has changed, try the simple step of reversing the engines first: revert everything back to how it was before and see if that fixes things.

One of the best troubleshooting skills you can have is pattern recognition. Look for patterns in scripts, and check for anything out of the ordinary. Is there a spelling mistake somewhere? A file date that is newer than all the rest?
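The “file date newer than all the rest” check is easy to automate. Here is a minimal sketch: list files in a directory sorted by modification time, so the odd one out stands out (point it at whatever directory you suspect):

```python
# Sketch: list files in a directory sorted by modification time,
# newest first, so a suspiciously recent file stands out.

import os

def newest_files(directory, count=5):
    """Return the most recently modified files, newest first."""
    entries = [os.path.join(directory, name) for name in os.listdir(directory)]
    files = [path for path in entries if os.path.isfile(path)]
    return sorted(files, key=os.path.getmtime, reverse=True)[:count]

# Example: newest_files("/var/www/html") shows which files changed last.
```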

2. Narrow the Search Area

First, figure out whether the problem is to do with hardware or software. This will immediately cut your search down by half.

If it’s software, try to work out the scale of the problem – which programs are still running, and which are not? Then try uninstalling and reinstalling the suspected program.

If it’s hardware, then try swapping the suspect component in question with something similar from a working machine.

3. Develop a Theory

Make a detailed list of all the possible causes of the problem at hand, then ask yourself very seriously, using all of your experience, which one it is most likely to be. Trust your instincts, and also turn to the internet: the likelihood is that someone, somewhere has encountered this very thing before, and may well have posted about it on a blog or forum. If you have an error number, that will improve your chances of finding a reference. From here, you are in the perfect position to start the process of trial and error.

4. Test Your Theories Methodically

The best troubleshooters test one factor at a time. This takes discipline, but it is essential in order to be thorough. Write down every single change you make, keep listing potential causes (and possible solutions) as they occur to you, and keep drawing diagrams to help you visualize the task.

5. Ask For Help!

Seriously, there is no shame in it, so don’t start getting precious. Figure out who the best person to solve the problem would be, and get in touch with them. Send out emails, post to forums, call an expert, or contact the manufacturer. Do whatever it takes. It’s all part of the troubleshooting process, and you need to know when you require assistance.
